
Surround Sensor Fusion

A driverless vehicle sees, listens, thinks, and then acts, much as a human driver does. Sensing with cognition is the equivalent of seeing. If that vision is fuzzy, laggy, nearsighted, or blind-spotted, the rest of the driverless technology will be subpar, no matter how many times the developer has trial-run his craft.

Every sensor type has its own characteristics and deficiencies. It is only realistic to fuse several types so that they complement one another, achieving all-weather, all-time sensing readiness for driverless machine cognition of road situations.
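One simple way such complementary fusion can be sketched is inverse-variance weighting: two sensors each estimate the same quantity, and the noisier one contributes less to the fused result. The sensor names and variance values below are illustrative assumptions, not figures from any particular system.

```python
# Minimal sketch of complementary sensor fusion by inverse-variance
# (minimum-variance) weighting. The radar/camera roles and the variance
# numbers are assumed for illustration only.

def fuse(z1, var1, z2, var2):
    """Fuse two independent scalar estimates; the less noisy one dominates."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)
    var = 1.0 / (w1 + w2)   # fused variance is below either input variance
    return z, var

# Radar: precise range (small variance); camera: coarser range estimate.
radar_range, radar_var = 25.3, 0.04   # metres, metres^2
cam_range, cam_var = 24.0, 1.00

fused_range, fused_var = fuse(radar_range, radar_var, cam_range, cam_var)
```

The fused estimate lands close to the radar reading (the more trusted sensor) while its variance drops below both inputs, which is the whole point of fusing complementary sensors rather than picking one.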

Classic ADAS typically supports visual presentation to the driver, or template matching for audible alerts; the sensor fusion is done by the driver. Since a driverless vehicle must sense, detect, identify, and decide how to respond without human interaction, classic ADAS sensors must evolve to deliver actionable resolution if they are to remain useful.

Short-, medium-, and long-range radars in classic ADAS provide coarse object detection through radar cross section (RCS) analysis within their defined conical sectors. The alerts may be adequate for a driver, who will attend to the alert (equivalent to zooming in for further investigative scans), but they are likely not specific enough for pilotless responses, and the radar data are not readily fused with camera data. The latest mmWave radars employ one emitter with 2/4/8 receivers to produce up to 8 spots/dots within a 4-degree beamwidth cone. Lidar (lightwave radar) provides 3D RCS dots without object identification; its outputs must first be posted onto a point cloud before further cognition can take place.
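The basic geometric step in fusing lidar dots with camera pixels is projecting a 3D point into the image plane. The sketch below uses a standard pinhole camera model; the focal length and image dimensions are assumed values, not parameters of any specific camera discussed here.

```python
# Hypothetical sketch: project a lidar return (expressed in the camera
# frame, metres, z pointing forward) into pixel coordinates with a pinhole
# model. FX/FY/CX/CY are assumed intrinsics for a 1280x720 image.

FX = FY = 800.0          # focal lengths in pixels (assumed)
CX, CY = 640.0, 360.0    # principal point (assumed)

def project(x, y, z):
    """Return (u, v) pixel coordinates, or None if the point is behind
    the camera and cannot appear in the image."""
    if z <= 0:
        return None
    u = FX * x / z + CX
    v = FY * y / z + CY
    return u, v

# A lidar dot 10 m ahead, 1 m to the right, 0.5 m below camera height.
pixel = project(1.0, 0.5, 10.0)
```

Once each lidar dot has a pixel location, it can be associated with camera detections (for identification) in a way that raw RCS alerts cannot.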

An actual lidar road scenario is collected successively over time, famously by spinning the unit on top of the vehicle, at a rate that depends on the number of beams in the system. Lidar remains an expensive proposition (circa 2017), even with optimistic projections declaring that a unit cost of $80,000+ could someday be reduced to <$50 in mass production. An in-motion autonomous vehicle must react to an evolving scenario, consuming huge signal-processing capacity to parse incoming data against a previously scanned 3D streetscape. Digital map providers have started to issue high-resolution 3D roadmaps, yet such a map must be transposed into the lidar's perspective, referenced to a high-accuracy GNSS fix, before it can be useful.
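Transposing map data into the lidar's perspective amounts to a rigid-body transform: rotate and translate each global map point by the inverse of the vehicle's GNSS pose. The 2D sketch below illustrates the idea; the frame conventions (x forward, y left) and the sample coordinates are assumptions for illustration.

```python
import math

# Sketch of transposing HD-map points (global east/north frame) into the
# vehicle/lidar frame, given a GNSS pose (position + heading). The frame
# convention (x forward, y left) and all coordinates are illustrative.

def map_to_vehicle(px, py, veh_x, veh_y, heading_rad):
    """Express a global map point in the vehicle frame by applying the
    inverse of the vehicle pose: translate, then rotate by -heading."""
    dx, dy = px - veh_x, py - veh_y
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return (c * dx + s * dy, -s * dx + c * dy)

# Vehicle at (100, 50) heading due east (0 rad); a lane marker 5 m ahead
# in the map lands 5 m ahead in the vehicle frame.
local = map_to_vehicle(105.0, 50.0, 100.0, 50.0, 0.0)
```

In practice a full system does this in 3D with a 6-DOF pose, and GNSS error propagates directly into the map alignment, which is why high-accuracy positioning appears alongside HD maps in the technology lists below.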

Indeed, it was reported in 2015 that a self-driving car needs 10 technologies to work: combined radar and camera; surround radars; 360° surround vision; a multiple-beam laser scanner; a trifocal camera; long-range radars; ultrasonic sensors; a high-definition 3D digital map; high-performance positioning; and cloud services.

It is against this backdrop that Moovee offers its cost-effective surround sensor fusion system to the market. Having developed its autonomous controller on its robotic kinematics-by-wire platform, Moovee also sees the need to provide a cognitive engine in lieu of a parametric signal processor. The road training already performed, together with the system's self-learning capability, primes it to meet the unknown complexity of the driverless future.
