
Low-Level Data Fusion between Camera and Automotive RADAR for Vehicle and Pedestrian Detection Using nuScenes Database
Automotive Research Center, University of Brasilia, Brazil

SAE Technical Papers (1906-current) Available online

Format:
Book
Conference/Event
Author/Creator:
Cury, Hachid Habib, author.
Contributor:
Silva, Rafael Rodrigues
Teixeira, Evandro Leonardo Silva
Conference Name:
SAE Brasil 2024 Congress (2024-10-16 : São Paulo, Brazil)
Language:
English
Physical Description:
1 online resource
Place of Publication:
Warrendale, PA : SAE International, 2024
Summary:
Autonomous driving technology has become a focal point of research globally, with significant efforts directed towards enhancing its key components: environment perception, vehicle localization, path planning, and motion control. These components work together to enable autonomous vehicles to navigate complex environments safely and efficiently. Among them, environment perception stands out as critical, as it involves the robust, real-time detection of targets on the road. This process relies heavily on the integration of various sensors, making data fusion an indispensable tool in the early stages of automation. Sensor fusion between the camera and RADAR (Radio Detection and Ranging) is advantageous because the two sensors are complementary: fusion combines the high lateral resolution of the vision system with RADAR's robustness to adverse weather and insensitivity to lighting conditions, at a lower production cost than the LiDAR (Light Detection and Ranging) sensor. Given the importance of sensor fusion for automated driving, this paper examines a low-level sensor fusion method that uses RADAR detections to generate Regions of Interest (ROIs) in the camera coordinate system. To do so, we selected a fusion algorithm based on RRPN (Radar Region Proposal Network), which combines RADAR and camera data, and compared it to Faster R-CNN, which uses only camera data. Our goal was to study the advantages and limitations of the proposed method. We explored the nuScenes database to determine the best aspect ratios for different object sizes and modified the RRPN algorithm to generate more effective anchors. For training, we used camera and frontal RADAR data from the nuScenes database. COCO dataset metrics under three different conditions (day, night, and rain) were used to evaluate the proposed models.
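The core idea the summary describes, projecting a RADAR detection into the camera image and spawning anchor ROIs of several aspect ratios around it, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the identity extrinsic transform, the `base_size` constant, and the inverse-depth anchor scaling are all assumptions, and the actual RRPN anchor parameters come from the authors' nuScenes analysis.

```python
import numpy as np

def project_radar_to_image(point_radar, T_radar_to_cam, K):
    """Project a 3-D RADAR detection into pixel coordinates.

    point_radar: (3,) point in the RADAR frame, in metres.
    T_radar_to_cam: 4x4 homogeneous RADAR-to-camera transform
        (assumed known from extrinsic calibration).
    K: 3x3 camera intrinsic matrix.
    Returns the (u, v) pixel coordinates and the depth in the camera frame.
    """
    p_cam = T_radar_to_cam @ np.append(point_radar, 1.0)  # to camera frame
    uvw = K @ p_cam[:3]                                   # perspective projection
    return uvw[:2] / uvw[2], p_cam[2]

def generate_anchors(center, depth, aspect_ratios=(0.5, 1.0, 2.0),
                     base_size=1000.0):
    """Generate candidate ROIs (x1, y1, x2, y2) around a projected point.

    Anchor height shrinks with depth so that distant objects receive
    smaller boxes; base_size and the ratio set here are illustrative
    placeholders, not values from the paper.
    """
    u, v = center
    boxes = []
    for ar in aspect_ratios:
        h = base_size / depth  # naive inverse-depth scaling, in pixels
        w = h * ar
        boxes.append((u - w / 2, v - h / 2, u + w / 2, v + h / 2))
    return boxes
```

For example, with an identity extrinsic transform and a pinhole intrinsic matrix with focal length 1000 px and principal point (640, 360), a RADAR point 10 m straight ahead projects to the image centre, and three anchors of differing aspect ratio are generated around it.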
Notes:
Vendor supplied data
Publisher Number:
2024-36-0064
Access Restriction:
Restricted for use by site license

