Drowned Robot Highlights Current Challenges in Autonomous Vehicle Development
My father always told me to hope for the best but prepare for the worst. That advice has served me well through endless failure modes and effects analysis (FMEA) and quality control meetings, where we try to think of every way something could go wrong. In some industries, things can go more wrong than in others. If I’m designing something like smart silverware and it has a bug, no one really cares. If I’m building an autonomous vehicle and it has a glitch, things can get ugly. The recent, highly publicized drowning of an autonomous guard robot in Washington, DC shows just how things might go wrong with self-driving cars. These robots carry the same kinds of sensors as connected cars, and their failures point to exactly what we need to watch out for as we build the future of transportation.
Bot Breakdowns
Knightscope is a company pioneering autonomous robot security. They currently have two guard robots available, with more planned for the future. Before they move forward, however, perhaps they should look back. Their robots have exposed two critical areas where autonomous machines can fail: security and object detection.
A big, beefy security guard can keep a property safe, but who guards the guards? Knightscope’s robots are both large and in charge, yet earlier this year one was bested in fisticuffs by a drunk man in a parking lot. The episode reminds us that hostile humans will sometimes interact with our devices. It’s essential to think about both physical security and cybersecurity when designing embedded systems like autonomous vehicles.
These guard robots have also made two public mistakes in object detection. The first came when a bot guarding a mall ran over a child’s foot. These robots are heavy, weighing around 300 lbs., so it’s lucky the child wasn’t seriously injured. Now imagine if that robot had been a self-driving car. Object detection must be taken seriously. More recently, one of these robots fell into a fountain in DC while on patrol. Again, the extrapolation is easy to imagine: if you’re riding in an autonomous vehicle that can’t tell the difference between pavement and water, you might end up in the drink.
Make sure your car knows the difference between water and land. Editorial credit: Sergey Edentod / Shutterstock.com
Sensor Comparison
The scary thing is that these robots actually stack up quite well against autonomous vehicles. If they carried a single passive visual sensor, I could understand these failures. However, these bots are equipped with LIDAR, HD low-light video cameras, thermal imaging, license plate recognition, directional microphones, proximity sensors, position sensors, and GPS. These are the same kinds of sensors the autonomous vehicle industry is considering for its cars.
Knightscope doesn’t say what kind of processor these machines are packing, but each one can stream and process data from all of its sensors, which implies processing capabilities similar to those of a self-driving car. Since these robots have failed in the ways described above, we need to work to mitigate the same risks in autonomous vehicles.
Protection and Multi-Sensor Fusion
Vehicles need security for both wired and wireless systems. For object detection, multi-sensor fusion and good logic are the answers.
It’s easy to forget about physical-layer security for embedded systems when you’re focused on failures caused by internal problems. However, if a car’s circuits are easy to access, a criminal can tamper with them directly. That’s why it’s important to design tamper-proof, or at least tamper-resistant, circuits for your car. Likewise, if a car’s wireless systems aren’t secure, the whole vehicle is vulnerable to attack. Hackers will have a whole new field of targets as more and more cars become connected. Don’t let your product fall victim to them.
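One concrete defense on the wireless side is to authenticate every incoming command frame before acting on it. Here’s a minimal sketch of that idea in C. To be clear about assumptions: the frame layout is hypothetical, and `toy_mac` is a deliberately insecure placeholder that only exists to keep the example self-contained; a real design would use a vetted HMAC implementation and proper key management.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TAG_LEN 8 /* truncated tag, purely for the demo */

/* Placeholder keyed checksum standing in for a real MAC such as
 * HMAC-SHA256 from a vetted crypto library. NOT secure; it only
 * makes this sketch compile and run on its own. */
static void toy_mac(const uint8_t *key, size_t key_len,
                    const uint8_t *msg, size_t msg_len,
                    uint8_t tag[TAG_LEN])
{
    memset(tag, 0, TAG_LEN);
    for (size_t i = 0; i < msg_len; i++)
        tag[i % TAG_LEN] ^= (uint8_t)(msg[i] + key[i % key_len] + (uint8_t)i);
}

/* Constant-time compare: never let response timing leak tag bytes. */
static bool tags_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff == 0;
}

/* Assumed frame layout: payload bytes followed by a TAG_LEN tag.
 * Refuse to act on any frame whose tag does not verify. */
static bool command_is_authentic(const uint8_t *key, size_t key_len,
                                 const uint8_t *frame, size_t frame_len)
{
    if (frame_len <= TAG_LEN)
        return false; /* too short to hold payload + tag */
    uint8_t expected[TAG_LEN];
    toy_mac(key, key_len, frame, frame_len - TAG_LEN, expected);
    return tags_equal(expected, frame + frame_len - TAG_LEN, TAG_LEN);
}

int main(void)
{
    const uint8_t key[] = "per-vehicle-secret";
    uint8_t frame[5 + TAG_LEN] = { 'B', 'R', 'A', 'K', 'E' };
    toy_mac(key, sizeof key - 1, frame, 5, frame + 5);

    printf("genuine frame:  %s\n",
           command_is_authentic(key, sizeof key - 1, frame, sizeof frame)
               ? "accepted" : "rejected");

    frame[0] = 'X'; /* tampered in transit */
    printf("tampered frame: %s\n",
           command_is_authentic(key, sizeof key - 1, frame, sizeof frame)
               ? "accepted" : "rejected");
    return 0;
}
```

The constant-time comparison is the detail worth noticing: a naive memcmp that returns at the first mismatched byte lets an attacker recover a valid tag one byte at a time by measuring response latency.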
It seems to me that both of Knightscope’s object detection problems stem from the way sensor data is handled. Multi-sensor fusion is critical for autonomous object detection: you want different sensors to cover each other’s weaknesses. Passive visual doesn’t work well at night, but LIDAR does. Radar isn’t good at near-object detection, but ultrasonic is. Using more than one sensor makes it far more likely that you’ll detect an object. However, it’s not enough to know something is there; you also have to avoid it. That’s why it’s important to listen to all your sensors together, not just one. Maybe your LIDAR is reading a false negative while ultrasonic and passive visual are both reading positives. Compare the three results and conclude that you really are about to drive into a lake. One company that appears to be using multi-sensor fusion for object detection is Boston Dynamics. Their Atlas robot uses LIDAR and stereo sensors in its head, combined with other sensors in its legs, to navigate difficult terrain. Knightscope and the autonomous vehicle industry should both follow that example.
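A minimal C sketch of that cross-checking idea is below, written as a two-out-of-three vote. The sensor names, the quorum rule, and the fail-safe default are my own illustrative choices, not Knightscope’s or any vendor’s actual logic; production systems fuse probabilistic estimates (occupancy grids, Kalman filters) rather than booleans, but the principle of outvoting a single faulty sensor is the same.

```c
#include <stdbool.h>
#include <stdio.h>

/* One reading per sensor modality. `valid` is false when a sensor is
 * outside its working envelope (e.g. a passive camera at night), so
 * it abstains instead of casting a misleading vote. */
typedef struct {
    const char *name;
    bool valid;    /* is this sensor currently trustworthy? */
    bool detected; /* did it report an obstacle in the path? */
} sensor_reading;

/* Majority vote across currently valid sensors. A single false
 * negative (say, LIDAR over water) gets outvoted when the camera and
 * ultrasonic both report an obstacle. With too few valid sensors to
 * form a quorum, fail safe and treat the path as blocked. */
static bool obstacle_ahead(const sensor_reading *r, int n)
{
    int valid = 0, hits = 0;
    for (int i = 0; i < n; i++) {
        if (!r[i].valid)
            continue;
        valid++;
        if (r[i].detected)
            hits++;
    }
    if (valid < 2)
        return true;             /* not enough evidence: stop, don't guess */
    return 2 * hits > valid;     /* strict majority of valid sensors */
}

int main(void)
{
    /* LIDAR misses the water's surface, but the other two see it. */
    sensor_reading r[] = {
        { "lidar",      true, false },
        { "camera",     true, true  },
        { "ultrasonic", true, true  },
    };
    printf("obstacle ahead: %s\n",
           obstacle_ahead(r, 3) ? "yes - stop" : "no - proceed");
    return 0;
}
```

Note the failure direction: when the fused system can’t form a confident answer, it stops rather than guesses. A guard robot that halts at a fountain’s edge is an inconvenience; a car that halts is far better than one that drives in.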
Take physical security and cybersecurity seriously when designing your cars.
Security and object detection are big challenges for the autonomous vehicle industry. Luckily, companies like Knightscope are giving us real-life failure modes to study and prepare for. That’s why you should make sure you protect your car in both the physical and wireless worlds. You should also make multi-sensor fusion a foundational part of your design, so you always know what is or isn’t around your vehicle.
Now comes the real challenge: designing the software to make it all happen. TASKING can help you with that. With products like a great static analyzer and a standalone debugger, they can help ensure your cars don’t end up in the news for the wrong reasons.
Have more questions about autonomous robots? Call an expert at TASKING.