Driving Behavior Modeling Based on Inverse Reinforcement Learning / Tongji University, China
- Format:
- Book
- Conference/Event
- Author/Creator:
- Xu, Xiaobin, author.
- Conference Name:
- SAE 2024 Intelligent and Connected Vehicles Symposium (2024-09-22 : Shanghai, China)
- Language:
- English
- Physical Description:
- 1 online resource
- Place of Publication:
- Warrendale, PA : SAE International, 2024
- Summary:
- With the advancement of intelligent driving technology, today's smart vehicles must not only make accurate and safe driving decisions but also exhibit high human-likeness to ensure better acceptance by people. Developing vehicle behavior models with greater human-likeness has become a significant industry focus. However, existing vehicle behavior models often struggle to balance human-likeness and interpretability. While some researchers use inverse reinforcement learning (IRL) to model vehicle behavior, ensuring both human-likeness and a degree of interpretability, challenges such as the difficulty of reward function design and low human-likeness in background vehicle modeling persist. This study addresses these issues by focusing on highway scenarios without on-ramps, specifically car-following and lane-changing behaviors, using the CitySim dataset. IRL is employed to create a vehicle behavior model with improved human-likeness, using a linear reward function to capture drivers' decision-making motives. Building on prior research, the study further explores various feature combinations for the reward function and introduces new features. The final feature combination reduced planning errors by 12.6% on the training set and 14.4% on the test set compared with the baseline method. Additionally, the study enhances background vehicle modeling based on the Intelligent Driver Model (IDM) and the Minimizing Overall Braking Induced by Lane-change (MOBIL) model by adding traffic-flow and patience correction terms. The results show that the improved background vehicle modeling method reduced test-set errors by 4.3%, demonstrating greater human-likeness and making it more suitable for simulation environments.
- Notes:
- Vendor supplied data
- Publisher Number:
- 2024-01-7029
- Access Restriction:
- Restricted for use by site license