Ghalamzan Esfahani, Amir, Nazari Sasikolomi, Kiyanoush, Hashempour, Hamidreza and Zhong, Fangxun (2021) Deep-LfD: Deep robot learning from demonstrations. Software Impacts, 9, p. 100087. ISSN 2665-9638
Full content URL: https://doi.org/10.1016/j.simpa.2021.100087
Documents
PDF: main_SIMPA_final.pdf (whole document, 7MB). Available under License Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.
Item Type: Article
Item Status: Live Archive
Abstract
Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. Unlike conventional LfD, however, the deep-LfD model learns the relation between high-dimensional visual sensory information and the robot trajectory/path. This paper presents a dataset of successful needle insertions into deformable objects performed by the da Vinci Research Kit; on this dataset, several deep-LfD models are built as a benchmark for models that learn a robot controller for the needle insertion task.
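To make the abstract's core idea concrete, here is a minimal sketch of a deep-LfD-style forward pass: an image observation is mapped through a small convolutional feature extractor to a short Cartesian trajectory. This is NOT the authors' architecture; the layer shapes, the NumPy-only implementation, and the 5-waypoint output are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'valid' 2-D convolution: x is (H, W), w is (F, k, k) filters."""
    H, W = x.shape
    F, k, _ = w.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[f, i, j] = np.sum(x[i:i + k, j:j + k] * w[f])
    return out

def predict_trajectory(image, params, n_waypoints=5):
    """Map a grayscale image to an (n_waypoints, 3) Cartesian path.

    Hypothetical pipeline: conv + ReLU -> global average pool -> linear head.
    """
    feat = np.maximum(conv2d(image, params["conv_w"]), 0.0)  # conv + ReLU
    pooled = feat.mean(axis=(1, 2))                          # global average pool
    traj = pooled @ params["fc_w"] + params["fc_b"]          # linear regression head
    return traj.reshape(n_waypoints, 3)

# Randomly initialised weights stand in for a trained model.
params = {
    "conv_w": rng.standard_normal((8, 3, 3)) * 0.1,
    "fc_w": rng.standard_normal((8, 15)) * 0.1,
    "fc_b": np.zeros(15),
}
image = rng.standard_normal((32, 32))  # stand-in for a camera frame
traj = predict_trajectory(image, params)
print(traj.shape)  # (5, 3): five waypoints in 3-D space
```

In a real deep-LfD setup the weights would be fitted by regressing demonstrated trajectories against the recorded images, rather than sampled at random as here.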
Keywords: Deep learning, Robot Learning from Demonstration, deformable object manipulation, surgical robots
Subjects: H Engineering > H671 Robotics
Divisions: College of Science > Lincoln Institute for Agri-Food Technology
ID Code: 45212
Deposited On: 14 Jun 2021 11:26