Semantic Segmentation for Traffic Scene Understanding Based on Mobile Networks (Tongji Univ)
- Format:
- Conference/Event
- Author/Creator:
- Hao, Hao, author.
- Conference Name:
- Intelligent and Connected Vehicles Symposium (2018-08-14 : Kunshan City, Jiangsu, China)
- Language:
- English
- Physical Description:
- 1 online resource
- Place of Publication:
- Warrendale, PA : SAE International, 2018
- Summary:
- Abstract: Real-time, reliable perception of the surrounding environment is an important prerequisite for advanced driver assistance systems (ADAS) and automated driving, and vision-based detection plays a significant role in environment perception for automated vehicles. Although deep convolutional neural networks enable efficient recognition of many object types, they have difficulty accurately detecting special vehicles, rocks, road piles, construction sites, fences, and similar obstacles. In this work, we address the task of traffic scene understanding with semantic image segmentation; both the drivable area and the classes of objects can be obtained from the segmentation result. First, we define 29 classes of objects in traffic scenarios with distinct labels and modify the Deeplab V2 network. Then, to reduce running time, the MobileNet architecture is applied to generate the feature map in place of the original backbones. After that, the Cityscapes dataset, which focuses on semantic understanding of urban street scenes, is used to train the network with the modified labels. Finally, we test the network and measure its performance. With the same network (Deeplab V2), VGG-16 and ResNet-101 backbones are also tested. MobileNet achieves performance similar to ResNet-101 while requiring far fewer operations and less time, and compared with VGG-16 it is both more accurate and more efficient. The use of lightweight mobile models reduces computation and enables on-device applications for semantic segmentation in traffic scene understanding.
- Notes:
- Vendor-supplied data
- Publisher Number:
- 2018-01-1600
- Access Restriction:
- Restricted for use by site license
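The operation savings the abstract attributes to MobileNet come from its use of depthwise-separable convolutions in place of standard convolutions. The sketch below is not from the paper itself; it reproduces the standard multiply-add cost comparison from the MobileNet architecture (symbols `dk` for kernel size, `m`/`n` for input/output channels, and `df` for feature-map size are assumptions for illustration), showing the familiar cost ratio of roughly 1/N + 1/Dk².

```python
def standard_conv_cost(dk, m, n, df):
    # Multiply-adds for a standard dk x dk convolution:
    # dk * dk * m * n * df * df
    return dk * dk * m * n * df * df

def depthwise_separable_cost(dk, m, n, df):
    # Depthwise dk x dk conv (dk * dk * m * df * df)
    # followed by a pointwise 1x1 conv (m * n * df * df).
    return dk * dk * m * df * df + m * n * df * df

# Illustrative layer shape (an assumption, not taken from the paper):
# 3x3 kernels, 512 input and 512 output channels, 14x14 feature map.
standard = standard_conv_cost(3, 512, 512, 14)
separable = depthwise_separable_cost(3, 512, 512, 14)

# The ratio approaches 1/n + 1/dk**2, about 8-9x fewer operations here.
print(separable / standard)
```

For a typical 3x3 layer this works out to nearly an order of magnitude fewer multiply-adds, which is consistent with the abstract's claim that the MobileNet backbone matches ResNet-101 quality at a fraction of the compute.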