Imitation learning

Imitation learning is a paradigm in reinforcement learning in which an agent learns to perform a task through supervised learning on expert demonstrations. It is also known as learning from demonstration and apprenticeship learning.[1][2][3]
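In its simplest form, behavioral cloning, the expert's state–action pairs are treated as an ordinary supervised dataset and a policy is fit to predict the expert's action from the state. The sketch below illustrates this with a synthetic linear "expert" and a least-squares fit; the data, the true weight matrix `W_true`, and the noise level are all hypothetical choices for illustration, not from any particular system.

```python
import numpy as np

# Hypothetical expert: acts according to a fixed linear policy
# action = W_true @ state, observed with a little noise.
rng = np.random.default_rng(0)
W_true = np.array([[1.0, -0.5],
                   [0.3,  2.0]])
states = rng.normal(size=(500, 2))                               # demonstrated states
actions = states @ W_true.T + 0.01 * rng.normal(size=(500, 2))   # expert's actions

# Behavioral cloning: treat (state, action) pairs as a supervised
# regression problem and fit the policy by ordinary least squares.
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)
W_hat = W_hat.T

# The cloned policy maps a new state to a predicted expert action.
def policy(state):
    return W_hat @ state
```

In practice the linear model would be replaced by a neural network trained on the same loss, and more elaborate methods (e.g. DAgger or inverse reinforcement learning) address the compounding errors that plain behavioral cloning suffers when the agent drifts away from the states the expert visited.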

It has been applied to underactuated robotics,[4] self-driving cars,[5][6][7] quadcopter navigation,[8] helicopter aerobatics,[9] and locomotion.[10][11]

  1. ^ Russell, Stuart J.; Norvig, Peter (2021). "22.6 Apprenticeship and Inverse Reinforcement Learning". Artificial intelligence: a modern approach. Pearson series in artificial intelligence (Fourth ed.). Hoboken: Pearson. ISBN 978-0-13-461099-3.
  2. ^ Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (Second ed.). Cambridge, Massachusetts: The MIT Press. p. 470. ISBN 978-0-262-03924-6.
  3. ^ Hussein, Ahmed; Gaber, Mohamed Medhat; Elyan, Eyad; Jayne, Chrisina (2017-04-06). "Imitation Learning: A Survey of Learning Methods". ACM Comput. Surv. 50 (2): 21:1–21:35. doi:10.1145/3054912. hdl:10059/2298. ISSN 0360-0300.
  4. ^ "Ch. 21 - Imitation Learning". underactuated.mit.edu. Retrieved 2024-08-08.
  5. ^ Pomerleau, Dean A. (1988). "ALVINN: An Autonomous Land Vehicle in a Neural Network". Advances in Neural Information Processing Systems. 1. Morgan-Kaufmann.
  6. ^ Bojarski, Mariusz; Del Testa, Davide; Dworakowski, Daniel; Firner, Bernhard; Flepp, Beat; Goyal, Prasoon; Jackel, Lawrence D.; Monfort, Mathew; Muller, Urs (2016-04-25). "End to End Learning for Self-Driving Cars". arXiv:1604.07316v1 [cs.CV].
  7. ^ Kiran, B Ravi; Sobh, Ibrahim; Talpaert, Victor; Mannion, Patrick; Sallab, Ahmad A. Al; Yogamani, Senthil; Perez, Patrick (June 2022). "Deep Reinforcement Learning for Autonomous Driving: A Survey". IEEE Transactions on Intelligent Transportation Systems. 23 (6): 4909–4926. arXiv:2002.00444. doi:10.1109/TITS.2021.3054625. ISSN 1524-9050.
  8. ^ Giusti, Alessandro; Guzzi, Jerome; Ciresan, Dan C.; He, Fang-Lin; Rodriguez, Juan P.; Fontana, Flavio; Faessler, Matthias; Forster, Christian; Schmidhuber, Jurgen; Caro, Gianni Di; Scaramuzza, Davide; Gambardella, Luca M. (July 2016). "A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots". IEEE Robotics and Automation Letters. 1 (2): 661–667. doi:10.1109/LRA.2015.2509024. ISSN 2377-3766.
  9. ^ "Autonomous Helicopter: Stanford University AI Lab". heli.stanford.edu. Retrieved 2024-08-08.
  10. ^ Nakanishi, Jun; Morimoto, Jun; Endo, Gen; Cheng, Gordon; Schaal, Stefan; Kawato, Mitsuo (June 2004). "Learning from demonstration and adaptation of biped locomotion". Robotics and Autonomous Systems. 47 (2–3): 79–91. doi:10.1016/j.robot.2004.03.003.
  11. ^ Kalakrishnan, Mrinal; Buchli, Jonas; Pastor, Peter; Schaal, Stefan (October 2009). "Learning locomotion over rough terrain using terrain templates". 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE. pp. 167–172. doi:10.1109/iros.2009.5354701. ISBN 978-1-4244-3803-7.