
Multi-Modal Neural Feature Fusion for Pose Estimation and Scene Perception of Intelligent Vehicle / Tongji University

SAE Technical Papers (1906-current) Available online

Format:
Book
Conference/Event
Author/Creator:
Zhou, Aiguo, author.
Contributor:
Dong, Zhenbiao
Li, Zhenyu
Pu, Jiakun
Wei, Ronghui
Yu, Jiangyang
Conference Name:
SAE WCX Digital Summit (2021-04-13 : Live Online, Pennsylvania, United States)
Language:
English
Physical Description:
1 online resource
Place of Publication:
Warrendale, PA : SAE International, 2021
Summary:
The main challenge for a future autonomous vehicle is to identify its location and body pose in real time while driving, that is, "Where am I, and where will I go?". We address the problems of pose estimation and scene perception from continuous visual frames in an intelligent vehicle. Recent advances in deep learning propose training models for a vehicle's detection tasks in a supervised or unsupervised manner, which has numerous advantages over traditional approaches, chiefly that no manual calibration or synchronization of the camera and IMU is required. In this paper, we propose a novel approach to pose estimation and scene recognition based on a deep fusion of multi-modal neural features, trained in an unsupervised manner. First, a low-cost camera and IMU capture raw visual and inertial data, and visual and inertial encoders encode the features of the two modalities. A Long Short-Term Memory (LSTM) network then takes in the combined (visual and inertial) feature representation and outputs the vehicle's pose through three subsequent fully connected layers. Further, we propose training a lightweight convolutional neural network (CNN) with only five convolutional modules (13 convolutional layers) to represent the salient features of the driving scene; by comparison with scenes in a database, it identifies the vehicle's location in a specific scene. All of the above is carried out end to end. Finally, we evaluate the proposed method on driving datasets such as KITTI and VPRICE, and the results show that the proposed approach can greatly improve the level of autonomy of an intelligent vehicle.
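The fusion pipeline described in the summary (visual and inertial features concatenated, passed through an LSTM, then three fully connected layers regressing pose) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: all feature dimensions, the 6-DoF pose output, and the randomly initialised weights are assumptions standing in for the trained encoders and regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical feature sizes; the paper does not specify these.
VIS_DIM, IMU_DIM, HID_DIM, POSE_DIM = 128, 32, 64, 6

def lstm_step(x, h, c, W, U, b):
    """One LSTM step on the fused visual+inertial feature vector x."""
    z = x @ W + h @ U + b                 # all four gates, stacked
    i, f, g, o = np.split(z, 4)           # input, forget, cell, output
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

# Randomly initialised weights stand in for trained parameters.
W = rng.normal(size=(VIS_DIM + IMU_DIM, 4 * HID_DIM)) * 0.1
U = rng.normal(size=(HID_DIM, 4 * HID_DIM)) * 0.1
b = np.zeros(4 * HID_DIM)
fc = [(rng.normal(size=(HID_DIM, HID_DIM)) * 0.1, np.zeros(HID_DIM)),
      (rng.normal(size=(HID_DIM, HID_DIM)) * 0.1, np.zeros(HID_DIM)),
      (rng.normal(size=(HID_DIM, POSE_DIM)) * 0.1, np.zeros(POSE_DIM))]

h = np.zeros(HID_DIM)
c = np.zeros(HID_DIM)
for t in range(5):                        # five synthetic frames
    vis_feat = rng.normal(size=VIS_DIM)   # stand-in for the visual encoder output
    imu_feat = rng.normal(size=IMU_DIM)   # stand-in for the inertial encoder output
    fused = np.concatenate([vis_feat, imu_feat])
    h, c = lstm_step(fused, h, c, W, U, b)

# Three fully connected layers regress a 6-DoF pose from the final state.
x = h
for w, bb in fc[:-1]:
    x = np.tanh(x @ w + bb)
pose = x @ fc[-1][0] + fc[-1][1]
print(pose.shape)  # (6,)
```

In this sketch a single pose vector per sequence is produced; the paper's end-to-end system presumably emits a pose per frame and couples this branch with the scene-recognition CNN, details the abstract does not give.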
Notes:
Vendor supplied data
Publisher Number:
2021-01-0188
Access Restriction:
Restricted for use by site license

