Cognitive ADAS problems and solutions at the Edge
After billions of dollars in moonshot investments, ADAS is coming back to Earth. I called this article "Houston, We Have a Problem" because the ADAS industry is facing its Apollo 13 moment: will we run out of oxygen, or MacGyver a solution that gets ADAS back to Earth with products that are safe, reliable, and commercially viable?
To make this right, we have to look at where we went wrong.
Asked what he would do if he only had an hour to save the world, Einstein famously remarked that he would spend 55 minutes defining the problem, and only five minutes solving it.
Unfortunately, spurred on by a gold rush of investment, the ADAS industry has clearly not taken the time to properly define the problem.
The industry consensus is that automated driving is a cognitive problem. A Google search for "cognitive radar" generates over 40,000 hits, and nearly all companies working in the ADAS L2–L4 space have some form of "cognitive," "intelligent," or "smart" prominently featured in their branding materials.
Branding your company's products as "intelligent" may boost funding, but framing the ADAS challenge as "cognitive" fails to properly define the problem. And if we're trying to solve the wrong problem, we will fail to meet the need: safe, commercially viable ADAS that works under all driving conditions.
Maslow's hammer: Why cognitive ADAS fails.
To call attention to the limiting effects of a cognitive bias, psychologist Abraham Maslow would reference the timeless expression, "If the tool you have is a hammer, everything starts looking like a nail."
The over-reliance on a cognitive model for ADAS is evident in more than just marketing materials. It's hard-wired into the mobility AI the industry is creating. (And, coincidentally, it's also hard-wired into the way many of us think.)
Sound cognitive assessments require a systematic analysis of all relevant data. It makes sense, then, that cognitively engineered AIs would require a complicated perception stack to integrate and process the mountain of data generated by high-resolution cameras, LIDAR, and traditional radar.
The problem with all that data cognition? The snake underneath your desk.
Did you instantly jump back? That instinctual response could save your life. Taking time "to think about it"?
Deadly. It's no surprise that cognitive-engineered AI performs like the distracted drivers it is meant to replace: drowning in data, slow to respond to unexpected threats.
The response by the ADAS industry to its disturbing accident record? Let their bet on cognitive AI ride. Processing speed has to catch up with the marketing campaign sometime, right?
Actually, it already has.
"Cognitive" radar is already here. It's just not the droid we're looking for.
As research at the University of Pennsylvania has shown, the human retina transmits visual input at roughly 10 Mbps. About the broadband speed you'd expect when you check into a chain motel.
If you think that's slow, consider the glacial pace of human cognition. While difficult to clock precisely, most experts agree that conscious thought crawls along at about 50–60 bits per second. A fraction of the processing speed of even a ten-year-old laptop.
The onboard supercomputers from Nvidia, Qualcomm, and Intel driving today's autonomous vehicles are orders of magnitude faster than that. So we have plenty of processing speed. Why aren't we winning the race for safe, reliable autonomous vehicles?
Letβs look at that snake example again.
Our survival brain boils things down to fight, flight, or freeze to allow for the instant responses necessary to successfully navigate unexpected, life-and-death situations. Instinctual, elegant heuristic responses like this require almost no processing power.
Heuristics = the culmination of evolution: established patterns that fire before cognition, near-immediately.
It's like a line of BASIC: IF snake THEN run!
Safely navigating the world depends upon myriad situation-specific sub-routines like this that operate beneath our conscious awareness to guide our steps and keep us safe.
This guidance system has been beta-tested through millions of years of evolution. A much better model to emulate for ADAS guidance systems than the cognitive model embraced by the industry today.
Most ADAS stacks rely on a central computer to pull all the data in, make sense of it, and take action, like our cerebral cortex. Makes perfect sense, right? But if you touch a hot pan and wait for the conscious brain to do the right thing, you're in for a lot of pain.
The way the body handles this is through the spinal reflex arc. Instead of the signal going all the way to the brain and waiting for it to disengage from the task it's focusing on, the spinal cord registers the threat and reflexively pulls the hand back. The brain then checks in and takes additional protective steps.
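This reflex-then-cognition pattern can be sketched in code. The following is a minimal, hypothetical illustration (not any vendor's actual ADAS stack): a sensor-local "reflex" check using a simple time-to-collision threshold fires immediately, and only when it stays silent does control pass to the slower, deliberative planner. The function names and the 1.5-second threshold are assumptions for illustration only.

```python
from typing import Optional

def reflex_layer(range_m: float, closing_speed_mps: float) -> Optional[str]:
    """Runs at the sensor, like the spinal cord: cheap check, instant command."""
    if closing_speed_mps <= 0:
        return None  # object is not approaching; no reflex needed
    time_to_collision = range_m / closing_speed_mps
    if time_to_collision < 1.5:  # hypothetical reflex threshold, in seconds
        return "BRAKE"           # act first, like pulling back from a hot pan
    return None

def central_planner(range_m: float, closing_speed_mps: float) -> str:
    """Slower, deliberative path; a full perception stack would live here."""
    return "PLAN_PATH"

def control_step(range_m: float, closing_speed_mps: float) -> str:
    # Reflex fires before cognition; the planner only runs if the reflex is silent.
    reflex = reflex_layer(range_m, closing_speed_mps)
    return reflex if reflex is not None else central_planner(range_m, closing_speed_mps)

print(control_step(10.0, 20.0))   # 0.5 s to collision: reflex brakes
print(control_step(100.0, 10.0))  # 10 s to collision: defer to the planner
```

The design choice mirrors the biology described above: the reflex path touches almost no data and no central resources, so its latency is essentially constant no matter how busy the planner is.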
Biomimetics = the emulation of the models, systems, and elements of nature for the purpose of solving complex problems.
If the onboard supercomputers driving today's automated vehicles can't emulate the human reflexes needed for safe driving, how do we do it?
It's actually in aerospace and defense that we're finding the tech that gets this done. The high-speed, high-efficiency, low-cost edge processing solutions ADAS needs to finally pull into the driveway have been keeping military and commercial pilots alive for decades.
Edge processing = a faster, more accurate sit rep.
Instead of relying on more and more resources and more and more code, as the conventional ADAS stack does, military and aerospace engineers have spent years refining a very different approach: putting powerful algorithms onto the simple ARM processors that run many aircraft and weapons systems.
"Old-school" ARM processors are lower cost, smaller, lighter, and can be more robust (think the nose-cone computer of the Apollo spacecraft). Critically, they can also be attached directly to sensors.
Decades spent mastering the unique limitations, and advantages, of these small ARM chips have spurred tremendous innovation. Through an obsessive focus on "timing and sizing," these engineers have created hyper-efficient code that maximizes speed and reliability while minimizing resource consumption.
Compare this with conventional engineering: just stacking on more code and hardware, dependent on massive computers and pushing massive computation off to the cloud. This brings us to where we are today in the automated vehicle world: running to stand still.
Edge processing leverages lightning-fast proximal devices to respond reflexively and reduce the processing load on the central navigation computer. Biomimetic insight: when humans are overwhelmed, we don't have the "bandwidth" to make good decisions. Reducing the processing load on an autonomous vehicle's central navigation computer helps it make better decisions, too.
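One way the load reduction described above can work is for the sensor-attached processor to forward only salient detections, so the central computer never sees distant clutter. This is a minimal sketch under assumed names and an assumed 60-meter relevance cutoff; real edge filters would use far richer criteria.

```python
def edge_filter(detections, max_range_m=60.0):
    """Runs at the sensor: keep only detections close enough to matter,
    dropping the rest before they ever reach the central computer."""
    return [d for d in detections if d["range_m"] <= max_range_m]

# Hypothetical raw detections from one radar frame.
raw = [
    {"id": 1, "range_m": 12.0},   # nearby vehicle: forward
    {"id": 2, "range_m": 250.0},  # distant clutter: drop at the edge
    {"id": 3, "range_m": 45.0},   # pedestrian in range: forward
]
salient = edge_filter(raw)
print(len(raw), "->", len(salient))  # the central computer sees 2 items, not 3
```

Even this crude downselect shows the principle: the fewer irrelevant detections the central navigation computer must fuse, the more of its "bandwidth" is left for the decisions that matter.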
Our team is finding that military-grade edge processing + biomimetic engineering = real solutions at all levels of ADAS. Military tech plus a human touch may seem an odd marriage to some. But if we're not using the right technology to solve problems for our fellow humans, we're going about this all wrong.
Article by Nathan Mintz, Chief Executive Officer, Spartan Radar.