DEEP REINFORCEMENT LEARNING METHODS FOR COLLISION AVOIDANCE OF AN INDUSTRIAL MANIPULATOR. In recent years, the role of Artificial Intelligence in robotics research has become increasingly important. In particular, Deep Reinforcement Learning has been widely applied to robotic systems with the aim of solving complex problems with satisfactory performance. The objective of this thesis is to present a solution to the real-time collision avoidance problem based on Deep Reinforcement Learning techniques. Two different algorithms (Deep Deterministic Policy Gradient and Twin Delayed Deep Deterministic Policy Gradient) are applied to a robotic system simulated in CoppeliaSim. The robotic system considered is an industrial manipulator (Epson VT6), and its workspace contains randomly moving obstacles. The manipulator's task is to follow a target that moves along a trajectory that is itself set at random. Several case studies are presented in the thesis. The reported simulation results illustrate the capabilities of the model-free algorithms.
DEEP REINFORCEMENT LEARNING FOR COLLISION AVOIDANCE OF AN INDUSTRIAL MANIPULATOR
MATIRA, KEVIN KLEN HERNANDEZ
2020/2021
Abstract
In recent years, Artificial Intelligence has played an increasingly important role in many aspects of robotics research and applications. In particular, deep artificial neural networks have been applied to robotic systems to solve complex problems and help robots perform demanding tasks. The aim of this thesis is to present a solution to the real-time collision avoidance problem by means of a Deep Reinforcement Learning approach. Specifically, two Deep Reinforcement Learning algorithms (Deep Deterministic Policy Gradient and Twin Delayed Deep Deterministic Policy Gradient) are applied to a robotic system, an Epson VT6 anthropomorphic manipulator, simulated in CoppeliaSim. The workspace of the manipulator contains randomly moving obstacles, and the manipulator's task is to follow a randomly moving target. Different workspaces can be used with the presented algorithms. Simulation results are reported to show the performance of the proposed model-free deep reinforcement learning algorithms.
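As a hedged illustration (not code from the thesis), the key difference between the two algorithms named above is that TD3 extends DDPG with a clipped double-Q target and target-policy smoothing. Assuming hypothetical target networks represented as plain callables, the TD3 target value can be sketched as:

```python
import numpy as np

def td3_target(q1_target, q2_target, policy_target, next_state,
               reward, done, gamma=0.99, noise_std=0.2, noise_clip=0.5,
               action_low=-1.0, action_high=1.0):
    """Clipped double-Q target used by TD3. DDPG differs in that it
    uses a single target critic and no target-policy smoothing.
    All networks here are hypothetical callables, not from the thesis."""
    # Target-policy smoothing: perturb the target action with clipped noise.
    noise = np.clip(np.random.normal(0.0, noise_std), -noise_clip, noise_clip)
    next_action = np.clip(policy_target(next_state) + noise,
                          action_low, action_high)
    # Clipped double-Q: take the minimum of the two target critics,
    # which reduces the overestimation bias of a single critic.
    q_min = min(q1_target(next_state, next_action),
                q2_target(next_state, next_action))
    # Bootstrapped target: no future value when the episode is done.
    return reward + gamma * (1.0 - done) * q_min
```

With dummy critics returning constant values, the minimum of the two critics is what enters the target, regardless of the smoothed action.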
https://hdl.handle.net/20.500.14239/13125