Machine learning is considered the next big step for self-driving cars. You may not know, however, that machine learning cars first hit the road decades ago. One notable example is ALVINN, short for Autonomous Land Vehicle in a Neural Network. As its name suggests, ALVINN used a neural network to watch a human driver and learn to drive itself. The project was a great demonstration of the potential of self-driving cars. It also clearly showed the roadblocks standing in the way of autonomous vehicles, roadblocks that remain today. Two physical problems limit machine learning cars: power and size. The neural networks that form the backbone of machine learning systems are simply too large and consume too much energy. On top of that, there are more existential issues: namely, machine learning systems lack common sense.
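ALVINN's core idea, a network that learns to steer by imitating recorded human steering, can be sketched in miniature. This is a toy illustration, not ALVINN's actual architecture: the real system used a 30x32 camera image, while the tiny "images" and training setup below are invented for the example.

```python
import numpy as np

# Toy sketch of ALVINN-style learning-by-watching (behavioral cloning).
# Our "camera frames" are 5-pixel strips where the bright pixel marks
# the lane center, and the label is the steering a human applied.
rng = np.random.default_rng(0)

def train_steering_net(images, steering, hidden=8, lr=0.2, epochs=5000):
    """Fit a tiny one-hidden-layer net mapping image -> steering angle."""
    n, d = images.shape
    w1 = rng.normal(0.0, 0.5, (d, hidden))
    w2 = rng.normal(0.0, 0.5, (hidden, 1))
    for _ in range(epochs):
        h = np.tanh(images @ w1)        # hidden activations
        pred = h @ w2                   # predicted steering angle
        err = pred - steering           # deviation from the human demo
        # gradient descent on the mean squared error
        w2 -= lr * h.T @ err / n
        w1 -= lr * images.T @ ((err @ w2.T) * (1.0 - h ** 2)) / n
    return w1, w2

# Five frames (one-hot lane positions) and the human's steering for
# each, from hard left (-1.0) to hard right (+1.0).
images = np.eye(5)
steering = np.array([[-1.0], [-0.5], [0.0], [0.5], [1.0]])
w1, w2 = train_steering_net(images, steering)
learned = np.tanh(images @ w1) @ w2     # the net now steers on its own
```

After training, the network reproduces the demonstrator's steering for each frame, which is exactly the strength and the weakness the article describes: it learns whatever the human shows it.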
Our brains constantly process huge amounts of information. Current self-driving cars, like Google's, don't have to process nearly that much, because many of the details they rely on have already been interpreted and stored for them. Google maps the roadways its cars drive in great detail, so the cars already know how high the curb will be and which signs are coming up. These are details that human drivers continuously process on the road. Only Google has the capability to use maps like this, and it's not really machine learning. If we want cars to act like humans and react to new situations, they'll need to analyze the same raw data that human drivers process automatically.
The problem is that data collection and interpretation take a lot of processing power. The old ALVINN car used a 5,000 W generator to power a CPU one-tenth as powerful as an Apple Watch. Intel estimates that next-generation self-driving cars will need to process 1 GB of data per second to make decisions accurately. That's going to take a lot of electricity, maybe enough to drain your battery while you're driving. We'll need very low-power chips if we want our cars to take the wheel.
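To get a feel for those numbers, here is a back-of-envelope sketch. Every figure except Intel's 1 GB/s estimate is an assumption invented for illustration, not a measured spec:

```python
# Back-of-envelope estimate: how much could self-driving compute cost
# an electric car? All values below except the data rate are assumed.
data_rate_gb_s = 1.0        # Intel's estimate for next-gen cars
compute_power_w = 500.0     # assumed draw of the compute platform
drive_power_w = 15_000.0    # assumed average propulsion power

trip_hours = 2.0
# Energy the computers burn on a two-hour drive, in kWh
compute_kwh = compute_power_w * trip_hours / 1000
# Raw sensor data chewed through on that drive, in TB
data_processed_tb = data_rate_gb_s * trip_hours * 3600 / 1000
# Compute draw as a fraction of propulsion power
overhead = compute_power_w / drive_power_w
```

Under these assumptions a two-hour drive means processing about 7.2 TB of data for roughly 1 kWh of compute energy, a few percent of what the motor uses. The point of the exercise: the overhead is tolerable only if chipmakers keep compute power in the hundreds of watts, not ALVINN's thousands.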
Lucky for us, many chip companies are working on exactly that. Qualcomm's "Snapdragon" mobile processor could fit the bill: while it's made for mobile applications, it could be tailored to self-driving cars. Snapdragon uses the now-popular approach of incorporating a GPU to help with image processing.
ALVINN looked something like this. Editorial Credit: Angela N Perryman / Shutterstock.com
Remember the CPU that ALVINN's 5,000 W generator powered? It was also the size of a refrigerator. Today's CPUs are significantly smaller but can still have trouble fitting where they need to go. Fully autonomous machine learning vehicles are going to need lots of processors and a wide array of sensors. All of those boards take up space, but they're not the biggest problem. The biggest complication is the wiring needed to connect everything.
The space inside car frames is already chock-full of wires. Wiring harnesses can weigh up to 110 pounds and hurt fuel efficiency, and there is simply not enough room to fit more. That's why wiring harness manufacturers are looking for ways to reduce cabling. One approach is cutting metal weight by using aluminum instead of copper, though that can be tricky: copper-aluminum connections carry a risk of corrosion. Yazaki is also considering multiplexing its cables so that multiple signals share one conductor. Wireless could be another solution, but it is generally less reliable and less secure.
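The multiplexing idea, several signals sharing one conductor, can be sketched as simple time-division interleaving. The frame layout and sensor names here are hypothetical illustrations, not Yazaki's actual scheme:

```python
# Minimal sketch of time-division multiplexing (TDM): several sensor
# channels take turns on one wire, and the receiver un-interleaves them.
def multiplex(channels):
    """Interleave equal-length channel sample lists into one stream."""
    return [sample for frame in zip(*channels) for sample in frame]

def demultiplex(stream, n_channels):
    """Recover each channel by taking every n-th sample."""
    return [stream[i::n_channels] for i in range(n_channels)]

# Three hypothetical sensor channels sampled at the same rate
speed  = [10, 11, 12]
brake  = [0, 0, 1]
camera = [7, 8, 9]

wire = multiplex([speed, brake, camera])   # one conductor carries all three
recovered = demultiplex(wire, 3)           # receiver splits them back out
```

One conductor now carries three signals, at the cost of each channel getting only a third of the wire's bandwidth, which is the trade-off harness makers have to weigh.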
Limited space is definitely a problem but not an insurmountable one.
Someday cars will be as smart as this guy. Editorial Credit: Cedric Weber / Shutterstock.com
So far, machine learning cars have mostly been exposed to situations that researchers expect them to understand. What about situations where things happen unexpectedly? AI cars also have to learn by example; what if they learn from a reckless driver?
Recently, a performance artist and programmer decided to demonstrate the limits of self-driving cars. He trapped a car inside a circle of lane markings that the car was forbidden to cross. His demonstration showed how foolish artificial intelligence can be. I often drive on roads where lane markings are painted incorrectly. How will an autonomous vehicle know where to drive? Our cars will need not only intelligence but also common sense.
The teaching process for machine learning cars is another concern. NVIDIA recently tested a car that, after 20 lessons with researchers, could drive in most conditions. But everyone drives differently, and some drive dangerously. Cars could also learn by watching other cars on the road, and I don't want my car learning from drivers who speed or pass on the right. How can we ensure that cars only have good role models?
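One conceivable safeguard, purely a sketch and not any carmaker's actual pipeline, is to filter the demonstration data so that rule-breaking drivers never make it into the training set. The frame fields and the rules below are hypothetical:

```python
# Hypothetical "good role model" filter for demonstration data:
# drop any recorded frame where the observed driver breaks a rule.
def is_good_example(frame, speed_limit):
    """Keep a demonstration frame only if the driver behaved legally."""
    return frame["speed"] <= speed_limit and not frame["passing_on_right"]

# Three recorded demonstration frames (invented for illustration)
demos = [
    {"speed": 55, "passing_on_right": False},  # law-abiding -> keep
    {"speed": 80, "passing_on_right": False},  # speeding -> drop
    {"speed": 50, "passing_on_right": True},   # illegal pass -> drop
]

clean = [f for f in demos if is_good_example(f, speed_limit=65)]
```

Of course, this only screens for rules we thought to encode, which circles back to the article's point: a filter has no more common sense than the engineers who wrote it.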
These questions are a bit more difficult to answer. Nonetheless, I'm sure that software developers like you can find solutions. You'll just have to wait for the electrical engineers to rein in the power requirements, and for the mechanical engineers to find space to fit all the required systems.
After all the power and size requirements have been met, it’ll be up to you to build the programs that guide these cars. That’s going to take a lot of work, and I’m sure you could use some help along the way. That’s why TASKING has developed a wide range of tools to help developers make self-driving cars a reality.
Have more questions about machine learning? Call an expert at TASKING.