Multiple Sensor Data Fusion: Raw Data vs. Object Data Incorporation

September 27, 2017

[Image: autonomous car with LIDAR scan signal]

There are some people who love GPS navigation systems and some who hate them. I usually trust mine, but when my father rides with me he's always trying to get me to take some kind of shortcut that will get us lost. No matter which way we take, though, we always make it to our final destination. In the same way, there are several paths to take with multiple sensor fusion, namely early and late data incorporation. Both will likely get you to the same result, but early data fusion can get you there more quickly, more accurately, and with less power than late integration. As always, though, there are pros and cons to both sides. The optimal solution may be one that lets designers mix early and late fusion.

Multiple Sensor Fusion

Just in case you're a bit rusty on the idea of multi-sensor fusion, let me refresh your memory. Vehicles with advanced driver assistance systems (ADAS) detect their environments with a variety of sensors: ultrasonic, LIDAR, radar, and visual. Each sensor has its own strengths and weaknesses, but by combining the data coming in from each one you can create a complete system that is more reliable than any of its individual parts. Multiple sensor fusion helps reduce error and keeps autonomous systems from incorrectly assessing their environments. There's no question that fusing sensors is a good idea; the open question is when in the pipeline the data should be combined.
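
To make the payoff concrete, here's a minimal sketch of the statistics behind fusion, assuming two independent range readings with known noise levels. The sensor pairing and variance figures are invented for illustration, not taken from any datasheet; the point is that the fused estimate is tighter than either sensor on its own.

```python
# Minimal sketch: inverse-variance weighting of two independent readings.
# All numbers are illustrative, not real sensor specifications.

def fuse(value_a, var_a, value_b, var_b):
    """Weight each measurement by the inverse of its variance.
    The fused variance is smaller than either input variance,
    which is the core payoff of multi-sensor fusion."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * value_a + w_b * value_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

# Radar range reading (noisier) and LIDAR range reading (tighter).
estimate, variance = fuse(24.8, 0.50, 25.1, 0.10)
print(f"fused range: {estimate:.2f} m, variance: {variance:.3f} m^2")
# -> fused range: 25.05 m, variance: 0.083 m^2 (below both inputs)
```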

Early Fusion vs. Late Fusion

There are as many different sensors, controllers, and neural networks available to process ADAS data as there are wrong turns when I follow my father's directions. Some sensors simply gather raw information and pipe it directly to the microcontroller, where it is processed. Others have their own controllers that process incoming data and then send it on to the electronic control unit (ECU) in the form of object data. Depending on which sensors you choose, some may be sending raw data while others pass object data to your processor. In general, any system that combines information after some per-sensor processing has already been done, including one that mixes raw and object data, is a "late" fusion system. An "early" scheme means all the raw data is blended together before any processing happens. Both arrangements have their own benefits and shortfalls.
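
The structural difference between the two schemes is easiest to see side by side. Below is a toy sketch of the two pipeline shapes; every function here is an invented stand-in for a real perception stage, not an actual ADAS interface.

```python
# Toy stand-ins for per-sensor processing stages (invented names).
def detect_from_image(frame):
    """Raw pixels -> object list, done inside the camera's controller."""
    return [{"type": "sign", "confidence": 0.7}]

def detect_from_cloud(scan):
    """Raw points -> object list, done inside the LIDAR's controller."""
    return [{"type": "sign", "confidence": 0.9}]

def late_fusion(frame, scan):
    # Each sensor interprets its own data first; the fusion stage
    # only ever sees the reduced object lists.
    return detect_from_image(frame) + detect_from_cloud(scan)

def early_fusion(frame, scan, detector):
    # Raw data is merged up front, so one detector sees everything
    # both sensors measured before any information is discarded.
    combined = {"pixels": frame, "points": scan}
    return detector(combined)

def toy_detector(combined):
    """Stand-in for a DNN trained on merged raw data."""
    return [{"type": "stop_sign", "confidence": 0.95}]

print(late_fusion("camera_frame", "lidar_scan"))
print(early_fusion("camera_frame", "lidar_scan", toy_detector))
```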

[Image: autonomous car driving on a road with sensing systems. With multi-sensor fusion your car can get a clear picture of its surroundings.]

Late Fusion Pros

The main advantage of late fusion is that it's already being done. The idea of early fusion hasn't really been implemented yet, so many original equipment manufacturers (OEMs) have developed late fusion systems that work. They've spent a good deal of time and money optimizing these methods, so late fusion already has solid engineering behind it.

Late Fusion Cons

There are, however, a few more negatives than positives. Incorporating object data instead of raw data still exposes the system to the weaknesses of individual sensors. Let's say, for example, your passive visual sensor pre-processes data. The sensor sees a stop sign, but because of stickers placed on the sign, it misidentifies it as a speed limit posting. Incoming LIDAR data will indicate that the sign is octagonal, meaning it's a stop sign and not a speed limit sign. You'll then have two data streams offering conflicting opinions, and you'll need a third to break the tie or have to decide to trust one sensor over the other. In addition to propagating sensor deficiencies through the system, late fusion is slow to reach decisions: fusion can't begin until each sensor has finished its own processing. Some developers think all of these problems can be solved with early fusion.
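
Here's a toy version of that tie-breaking problem. The labels and confidence numbers are made up, but the dilemma is genuine: by the time a late-fusion stage sees the disagreement, the raw pixels and points that could settle it have already been thrown away.

```python
# Invented object reports from two sensors that disagree about one sign.
camera_report = {"label": "speed_limit", "confidence": 0.6}  # fooled by stickers
lidar_report  = {"label": "stop_sign",   "confidence": 0.8}  # saw eight sides

def arbitrate(reports):
    """Trust the highest-confidence report. With only object data left,
    a weighted vote like this is about all a late-fusion stage can do."""
    return max(reports, key=lambda r: r["confidence"])["label"]

print(arbitrate([camera_report, lidar_report]))  # "stop_sign", but only
# because we chose to trust LIDAR's score over the camera's.
```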

[Image: man putting puzzle pieces of a car together. Early data fusion will help fit together all your data into a cohesive picture of the environment.]

Early Fusion Pros

Several companies, like DeepScale, say that combining information earlier will let them detect environments at a higher resolution while using less energy. When all the raw data from the various sensors is thrown into the mix before being processed, a deep neural network (DNN) can create a more complete picture of the surroundings. Instead of LIDAR and passive visual arguing about what kind of sign they're seeing, the processor can combine the raw data from both to determine what it's actually looking at. Along with making a system smarter, early fusion can boost its performance: an algorithm that deals only with raw data can decrease system latency.
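
At the raw-data level, the simplest form of that combination is stacking sensor channels into one input tensor. The sketch below is a simplified illustration: the shapes are arbitrary, and projecting LIDAR points into the camera frame is glossed over as a ready-made depth map. But it shows how a DNN's first layer can correlate both modalities directly.

```python
import numpy as np

height, width = 480, 640
rgb_frame = np.random.rand(height, width, 3)  # camera: 3 color channels
depth_map = np.random.rand(height, width, 1)  # LIDAR points, assumed already
                                              # projected into the camera frame

# One 4-channel tensor: the network can relate color edges to depth
# edges before any detection decision has been made.
fused_input = np.concatenate([rgb_frame, depth_map], axis=-1)
print(fused_input.shape)  # (480, 640, 4)
```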

Early Fusion Cons

Early data fusion's main drawback is complexity. While many companies have developed DNNs and algorithms to handle late fusion, few have ventured into early fusion. OEMs are hesitant to embrace it for that reason, and because they have already invested significantly in late fusion.

The Way Forward

Fortunately for the industry, there are a few businesses willing to take the lead in developing early fusion. DeepScale is working on a system that will be sensor- and processor-independent, and on building a fully trained DNN to run it. If you want to build a more accurate system and boost its performance, early sensor fusion may be the way to go.

Sensor fusion is just one of the many systems needed to bring about the next generation of ADAS-enabled vehicles. If you're working in that field, or developing other auto-related programs, you need development software as advanced as your products. TASKING makes a variety of tools tailored specifically for the ADAS car space. Their standalone debugger can save you time in development, and their static analysis software can help you ensure your system is free from interference.

Have more questions about sensor fusion? Call an expert at TASKING.
