Cognitive ADAS problems and solutions at the Edge
After billions of dollars in moonshot investments, ADAS is coming back to Earth. I called this article “Houston, We Have a Problem” because the ADAS industry is facing its Apollo 13 moment: Will we run out of oxygen, or MacGyver a solution that gets ADAS back to Earth with products that are safe, reliable, and commercially viable?
To make this right, we have to look at where we went wrong.
Asked what he would do if he only had an hour to save the world, Einstein famously remarked that he would spend 55 minutes defining the problem, and only five minutes solving it.
Unfortunately, spurred on by a gold rush of investment, the ADAS industry has clearly not taken the time to properly define the problem.
The industry consensus is that automated driving is a cognitive problem. A Google search for “cognitive radar” generates over 40K hits, and nearly all companies working in the L2–L4 ADAS space feature some form of “cognitive,” “intelligent,” or “smart” prominently in their branding materials.
Branding your company’s products as “intelligent” may boost funding, but framing the ADAS challenge as “cognitive” fails to properly define the problem. And if we’re trying to solve the wrong problem, we will fail to meet the need: safe, commercially viable ADAS that works under all driving conditions.
Maslow’s hammer: Why cognitive ADAS fails.
To call attention to the limiting effects of cognitive bias, psychologist Abraham Maslow invoked the timeless expression: “If the only tool you have is a hammer, everything starts looking like a nail.”
The over-reliance on a cognitive model for ADAS is evident in more than just marketing materials. It’s hard-wired into the mobility AI the industry is creating. (And, coincidentally, it’s also hard-wired into the way many of us think.)
Sound cognitive assessments require a systematic analysis of all relevant data. It makes sense, then, that cognitive-engineered AIs would require a complicated perception stack to integrate and process the mountain of data generated by high-resolution cameras, lidar, and traditional radar.
The problem with all that data cognition? The snake underneath your desk.
Did you instantly jump back? That instinctual response could save your life. Taking time to “think about it”? Deadly.
It’s no surprise that cognitive-engineered AI performs like the distracted drivers it is meant to replace: drowning in data, slow to respond to unexpected threats.
The ADAS industry’s response to its disturbing accident record? Let its bet on cognitive AI ride. Processing speed has to catch up with the marketing campaign sometime, right?
Actually, it already has.
“Cognitive” radar is already here. It’s just not the droid we’re looking for.
As research at the University of Pennsylvania has shown, the human retina transmits visual input at roughly 10 Mbps. About the broadband speed you’d expect when you check into a chain motel.
If you think that’s slow, consider the glacial pace of human cognition. While difficult to clock precisely, most experts agree that conscious thought crawls along at about 50–60 bits per second. A fraction of the processing speed of even a ten-year-old laptop.
The onboard supercomputers from Nvidia, Qualcomm, and Intel driving today’s autonomous vehicles are orders of magnitude faster than that. So we have plenty of processing speed. Why aren’t we winning the race for safe, reliable autonomous vehicles?
Let’s look at that snake example again.
Our survival brain boils things down to fight / flight / freeze to allow for the instant responses necessary to successfully navigate unexpected, life-and-death situations. Instinctual, elegant heuristic responses like these require almost no processing power.
Heuristics = The culmination of evolution: established patterns that fire before cognition, near-instantaneous.
It’s like a line of BASIC – IF snake, THEN run!
Safely navigating the world depends upon myriad situation-specific sub-routines like this that operate beneath our conscious awareness to guide our steps and keep us safe.
This guidance system has been beta-tested through millions of years of evolution. A much better model to emulate for ADAS guidance systems than the cognitive model embraced by the industry today.
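To put the “IF snake, THEN run” idea in slightly more concrete terms, here is a purely hypothetical sketch (the threshold, names, and values are invented for illustration, not any real product’s logic) of a reflex-style guard that short-circuits before any heavy processing ever runs:

```c
/* Illustrative only: a reflex-style guard that runs before any heavy
 * "cognition". Names and numbers are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

/* Precomputed rule: if an obstacle is inside the reflex zone, the answer
 * is already decided; no model needs to run. */
static bool reflex_brake(float nearest_obstacle_m)
{
    return nearest_obstacle_m < 2.0f;   /* IF snake, THEN run */
}

int main(void)
{
    float nearest = 1.4f;   /* metres, from a single range measurement */

    if (reflex_brake(nearest)) {
        puts("BRAKE (reflex, no deliberation)");
    } else {
        puts("hand off to the full perception stack");
    }
    return 0;
}
```

The rule is decided ahead of time, so evaluating it costs essentially nothing.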
Most ADAS stacks rely on a central computer to pull all the data in, make sense of it, and take action, much like our cerebral cortex. Makes perfect sense, right? But if you touch a hot pan and wait for the conscious brain to do the right thing, you’re in for a lot of pain.
The body handles this through a reflex arc in the spinal cord. Instead of the signal traveling all the way to the brain and waiting for the brain to disengage from whatever it’s focused on, the spinal cord registers the threat and reflexively pulls the hand back. The brain then checks in and takes additional protective steps.
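As a purely illustrative sketch, and not any vendor’s actual implementation, here is what that reflex-arc pattern might look like on a sensor-adjacent processor. The function names, threshold, and values are all assumptions; the point is that the protective action fires locally, and the central computer is told about it afterward.

```c
/* Minimal sketch of a "spinal reflex" loop on a sensor-adjacent processor.
 * Every name, value, and interface here is a hypothetical stand-in. */
#include <stdio.h>

#define REFLEX_TTC_LIMIT_S 0.4f   /* assumed time-to-collision threshold */

/* Stand-ins for real actuator and vehicle-network interfaces. */
static void brake_command(float level)               { printf("brake at %.0f%%\n", level * 100.0f); }
static void notify_central(float range_m, float ttc) { printf("notify central: %.1f m, ttc %.2f s\n", range_m, ttc); }

/* Runs at the sensor's frame rate and never waits on the central computer. */
static void reflex_step(float range_m, float closing_mps)
{
    float ttc_s = (closing_mps > 0.0f) ? range_m / closing_mps : 1e9f;

    if (ttc_s < REFLEX_TTC_LIMIT_S) {
        brake_command(1.0f);              /* act first, like the hand leaving the hot pan */
        notify_central(range_m, ttc_s);   /* the "brain" checks in afterward */
    }
}

int main(void)
{
    reflex_step(3.0f, 10.0f);    /* 0.3 s to impact: reflex fires */
    reflex_step(30.0f, 10.0f);   /* 3.0 s to impact: no reflex; the central planner handles it */
    return 0;
}
```

The ordering is the whole point: the local action never waits on the central planner, just as the hand never waits on the conscious brain.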
Biomimetics = The emulation of the models, systems, and elements of nature for the purpose of solving complex problems.
If the onboard supercomputers driving today’s automated vehicles can’t emulate the human reflexes needed for safe driving, how do we do it?
It’s actually in aerospace and defense that we’re finding the tech that gets this done. The high-speed, high-efficiency, low-cost edge-processing solutions ADAS needs to finally pull into the driveway have been keeping military and commercial pilots alive for decades.
Edge Processing = a faster, more accurate sit rep.
Instead of relying on more and more resources, more and more code—like the conventional ADAS stack—military and aerospace engineers have been successfully working for years on a very different approach. They’ve been putting powerful algorithms into the simple ARM processors that run many aircraft and weapons systems.
“Old school” ARM processors are lower cost, smaller, lighter, and can be more robust (think of the compact, rugged guidance computer that flew aboard the Apollo spacecraft). Critically, they can also be attached directly to sensors.
Decades spent mastering the unique limitations (and advantages) of these small ARM chips have spurred tremendous innovation. Through an obsessive focus on “timing & sizing,” these engineers have created hyper-efficient code that maximizes speed and reliability while minimizing resource consumption.
Compare this with conventional engineering: stacking on more code and more hardware, dependent on massive onboard computers and pushing heavy computation off to the cloud. That brings us to where we are today in the automated vehicle world. Running to stand still.
Edge processing leverages lightning-fast proximal devices to respond reflexively and reduce processing load on the central navigation computer. Biomimetic insight: when humans are overwhelmed, we don’t have the “bandwidth” to make good decisions. Reducing the processing load for an autonomous vehicle’s central navigation computer helps it make better decisions, too.
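To make that concrete, here is a minimal, hypothetical sketch of an edge pre-filter sitting beside a radar front end. The structure, field names, and thresholds are assumptions for illustration only; the point is that the sensor-adjacent processor discards clutter and distant returns locally, so the central navigation computer only ever sees the detections that matter.

```c
/* Minimal sketch of an edge pre-filter running beside a radar front end.
 * Field names and thresholds are invented for illustration. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    float range_m;      /* distance to the detection */
    float speed_mps;    /* closing speed (positive = approaching) */
    float snr_db;       /* signal-to-noise ratio */
} detection_t;

/* Keep only detections strong and close enough to matter; everything else
 * is clutter the central navigation computer never needs to see. */
static size_t edge_filter(const detection_t *in, size_t n, detection_t *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i].snr_db > 12.0f && in[i].range_m < 80.0f) {
            out[kept++] = in[i];
        }
    }
    return kept;
}

int main(void)
{
    detection_t raw[] = {
        { 15.0f,  3.0f, 20.0f },   /* close, strong: forwarded */
        { 210.0f, 0.5f, 25.0f },   /* far beyond the planning horizon: dropped */
        { 40.0f,  8.0f,  6.0f },   /* weak return, likely clutter: dropped */
    };
    detection_t salient[3];

    size_t n = edge_filter(raw, sizeof raw / sizeof raw[0], salient);
    printf("forwarded %zu of %zu detections to the central computer\n",
           n, sizeof raw / sizeof raw[0]);
    return 0;
}
```

In a real system the filtering logic would be far more sophisticated, but the architectural payoff is the same: less data crossing the vehicle network, and a central computer with the headroom to make better decisions.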
Our team is finding that military-grade edge processing + biomimetic engineering = real solutions at all levels of ADAS. Military tech + a human touch may seem an odd marriage to some. But if we’re not using the right technology to solve problems for our fellow humans, we’re going about this all wrong.
Article by Nathan Mintz, Chief Executive Officer, Spartan Radar.