Bosch is leveraging deep learning to enhance its radar technology for automated driving, aiming to improve road safety, traffic flow, and driver convenience. For advanced driver assistance systems (ADAS) to operate safely and reliably, it is crucial that a vehicle’s surroundings are accurately mapped. This requires integrating data from various sensors, including radar, cameras, and sometimes LiDAR systems.
“Radar sensor technology offers significant advantages,” says Michael Ulrich, head of the AI-based Radar Perception project at Bosch Research. “Radar is robust, reliable, cost-effective, and easy to integrate into vehicles, often in inconspicuous locations.”
However, radar data presents challenges. Radar works by emitting radio waves that bounce off objects, and the reflected signals reveal their position and distance. The resulting data typically appears as a sparse point cloud with low spatial resolution.
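To make that data format concrete, a radar point cloud can be pictured as a short list of reflections, each carrying a position and a few per-point measurements. The sketch below is a simplified, hypothetical schema for illustration only, not Bosch’s actual data format; the field names and values are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RadarPoint:
    """One radar reflection (hypothetical, simplified schema)."""
    x: float        # position in the vehicle frame, metres
    y: float
    doppler: float  # radial (relative) velocity, m/s
    rcs: float      # radar cross-section, i.e. reflection strength, dBsm

# A single measurement cycle yields only a sparse set of such points,
# far fewer data points than the pixels of a camera image.
point_cloud: List[RadarPoint] = [
    RadarPoint(x=12.3, y=-1.8, doppler=-4.2, rcs=5.1),
    RadarPoint(x=12.5, y=-1.6, doppler=-4.1, rcs=7.8),
    RadarPoint(x=30.0, y=3.4, doppler=0.0, rcs=1.2),
]
```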
To overcome the limitations of radar alone, Bosch is combining it with data from other sensors. Cameras, for example, provide high-density information but can be unreliable in poor visibility. “Our research aims to use deep learning to train radar systems, improving their ability to detect objects accurately,” Ulrich explains.
Bosch’s interdisciplinary team, composed of ten experts from Bosch Research and the Cross-Domain Computing Solutions division, is working to ensure that their research has practical applications. Spread across Germany, the U.S., and Israel, these experts specialize in perception, signal processing, and machine learning.
The team is applying deep learning, commonly used in language processing and image recognition, to interpret radar data. Neural networks, which are fundamental to deep learning, process data through multiple layers, gradually creating more abstract representations until the final output — the detected object — is produced.
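As an illustration of that layered processing, the sketch below shows a small PointNet-style network in PyTorch that maps a radar point cloud to object class scores: early layers compute per-point features, a pooling step condenses them into a more abstract scene representation, and the final layers produce the output. This is a minimal example of the general idea under assumed layer sizes and classes, not Bosch’s architecture.

```python
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """Minimal PointNet-style sketch: per-point features -> pooled representation -> class scores."""
    def __init__(self, num_point_features: int = 4, num_classes: int = 3):
        super().__init__()
        # Early layers compute features for each radar point individually ...
        self.point_mlp = nn.Sequential(
            nn.Linear(num_point_features, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # ... later layers turn the pooled, more abstract representation into class scores.
        self.classifier = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, num_point_features), e.g. x, y, doppler, rcs
        per_point = self.point_mlp(points)   # (batch, num_points, 128)
        pooled, _ = per_point.max(dim=1)     # order-invariant pooling over the point cloud
        return self.classifier(pooled)       # (batch, num_classes)

# Example: 8 point clouds of 32 points each, 4 features per point.
scores = PointCloudClassifier()(torch.randn(8, 32, 4))
```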
The team uses vast amounts of data to train these neural networks to accurately identify vehicles, pedestrians, and other objects from radar point clouds, learning from both correct and incorrect identifications. This knowledge will be used in real vehicles to merge data from various sensors, creating precise environmental maps.
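The supervised training described here can be pictured roughly as follows. The labels, loss function, and optimiser in this sketch are generic assumptions rather than details of Bosch’s pipeline, and the random tensors stand in for real, annotated radar recordings.

```python
import torch
import torch.nn as nn

# Hypothetical labelled batch: radar point clouds and their object classes
# (e.g. 0 = vehicle, 1 = pedestrian, 2 = other).
model = PointCloudClassifier()                # the sketch from above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

clouds = torch.randn(8, 32, 4)                # stand-in for real, annotated radar data
labels = torch.randint(0, 3, (8,))

for step in range(100):
    scores = model(clouds)                    # forward pass: predicted class scores
    loss = loss_fn(scores, labels)            # penalises incorrect identifications
    optimizer.zero_grad()
    loss.backward()                           # learn from the mistakes ...
    optimizer.step()                          # ... by adjusting the network's weights
```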
Looking ahead, Ulrich mentions that the next step in their research is developing a foundation model to unify the different data formats from various sensors. This model would allow the sensors to communicate more effectively, using a common language derived from machine learning. Additionally, Bosch plans to expand the deep-learning process to include radar signal processing, fully harnessing the combination of radar technology and AI for automated driving.
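One common way to give different sensors such a shared language is to project each modality’s features into the same embedding space, so that downstream fusion components can treat them uniformly. The sketch below illustrates only that general idea; it is not a description of the foundation model Bosch is developing, and all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class SharedEmbedding(nn.Module):
    """Maps radar and camera features into one common embedding space (illustrative only)."""
    def __init__(self, radar_dim: int = 128, camera_dim: int = 512, embed_dim: int = 256):
        super().__init__()
        self.radar_proj = nn.Linear(radar_dim, embed_dim)    # radar-specific projection head
        self.camera_proj = nn.Linear(camera_dim, embed_dim)  # camera-specific projection head

    def forward(self, radar_feat: torch.Tensor, camera_feat: torch.Tensor):
        # Both modalities end up as vectors in the same space, a "common language"
        # that fusion and mapping components can consume without caring about the source.
        return self.radar_proj(radar_feat), self.camera_proj(camera_feat)

radar_emb, camera_emb = SharedEmbedding()(torch.randn(8, 128), torch.randn(8, 512))
```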