Lifelong reinforcement learning on mobile robots / David Isele.
LIBRA QA003 2019 .I78
- Format:
- Book
- Manuscript
- Thesis/Dissertation
- Author/Creator:
- Isele, David, author.
- Language:
- English
- Subjects (All):
- Penn dissertations--Computer and information science.
- Computer and information science--Penn dissertations.
- Physical Description:
- xix, 176 leaves : color illustrations ; 29 cm
- Production:
- [Philadelphia, Pennsylvania] : University of Pennsylvania, 2019.
- Summary:
- Machine learning has shown tremendous growth in the past decades, unlocking new capabilities in a variety of fields including computer vision, natural language processing, and robotic control. While the sophistication of individual problems a learning system can handle has greatly advanced, the ability of a system to extend beyond an individual problem to adapt and solve new problems has progressed more slowly. This thesis explores the problem of progressive learning. The goal is to develop methodologies that accumulate, transfer, and adapt knowledge in applied settings where the system is faced with the ambiguity and resource limitations of operating in the physical world.
- There are undoubtedly many challenges to designing such a system; this thesis examines the component of the problem concerning how knowledge from previous tasks can be leveraged in reinforcement learning, where an agent learns from rewards received for its actions. Reinforcement learning is particularly difficult when training on physical systems, such as mobile robots, where repeated trials can damage the hardware and unrestricted exploration often carries safety risks. I investigate how knowledge can be efficiently accumulated and applied to future reinforcement learning problems on mobile robots in order to reduce sample complexity and enable systems to adapt to novel settings. Doing this involves mathematical models that combine knowledge from multiple tasks, methods for restructuring optimizations and data collection to handle sequential updates, and data selection strategies that address resource limitations.
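- As an illustration of the reward-driven learning loop described above (a minimal sketch, not code from the thesis), tabular Q-learning on a hypothetical chain-world shows how an agent accumulates value estimates from rewards; all names and parameters here are illustrative assumptions:

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain MDP: actions are left (0) and
    right (1); the only reward (1.0) is earned on reaching the rightmost state."""
    random.seed(0)  # for reproducibility of this sketch
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-values per (state, action)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy exploration: mostly exploit, occasionally act randomly.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Reward-driven update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
```

After training, moving right should be valued at least as highly as moving left in every non-terminal state, reflecting the discounted reward waiting at the end of the chain. The sample inefficiency visible even in this toy (hundreds of episodes for five states) is the kind of cost the thesis's knowledge-transfer methods aim to reduce on physical robots.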
- Notes:
- Ph.D., University of Pennsylvania, 2019.
- Department: Computer and Information Science.
- Supervisors: Eric Eaton; Camillo J. Taylor.
- Includes bibliographical references.
- Other Format:
- Online version: Isele, David. Lifelong reinforcement learning on mobile robots.
- OCLC:
- 1127054160