Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session
SES 7.1: Robots in AVM
Time:
Wednesday, 28/Jun/2017:
4:30pm - 5:50pm

Session Chair: Georgios Michalos
Location: Aula Convegni (first floor)

Presentations

255. Cognitive Robot Referencing System for High Accuracy Manufacturing Task

Cristina Cristalli, Luca Lattanzi, Daniele Massa, Giacomo Angione

AEA s.r.l. - Loccioni, Italy

Industrial robots are highly repeatable machines, but they usually lack absolute accuracy. However, high accuracy during task execution is becoming an increasingly critical factor in industrial manufacturing. For that reason, fully automating manufacturing processes with high-precision tasks usually requires the integration of additional sensors to improve robot accuracy. This paper proposes an embedded, cognitive and self-learning stereo-vision system that can be used to reference the robot position with respect to the work-piece, increasing robot accuracy locally. An industrial use-case is also proposed and experimental results are presented.


284. A machine learning approach for visual recognition of complex parts in robotic manipulation

Panagiotis Aivaliotis, Anastasios Zampetis, Georgios Michalos, Sotiris Makris

Laboratory for Manufacturing Systems and Automation (LMS), Greece

This research presents a method for visual recognition using machine learning services for complex part manipulation. The robotic manipulation of complex parts is an application with high uncertainty caused by the instability of the gripper's grasp. Accurate estimation of the part's position and orientation after grasping is needed in order to execute a manipulation task successfully. A visual recognition approach using classifiers is implemented for this estimation. Finally, a case study of the robotic manipulation of complex parts using the machine learning services for visual recognition is demonstrated and evaluated.
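As an illustration of classifier-based pose estimation after grasping, the sketch below classifies a grasped part's orientation from feature vectors using 1-nearest-neighbour. The abstract does not specify the features or classifier used, so the toy 2-D features and orientation classes here are purely illustrative assumptions:

```python
import numpy as np

# Toy sketch: classify a grasped part's orientation from feature
# vectors (e.g. image moments) with a 1-nearest-neighbour rule.
# Features and class labels are illustrative, not the authors' pipeline.

def train(features, labels):
    """Store the training feature vectors and their orientation labels."""
    return np.asarray(features, dtype=float), list(labels)

def predict(model, query):
    """Return the label of the training sample closest to `query`."""
    feats, labels = model
    d = np.linalg.norm(feats - np.asarray(query, dtype=float), axis=1)
    return labels[int(np.argmin(d))]

# Three orientation classes with toy 2-D feature vectors:
model = train([[0.0, 1.0], [1.0, 0.0], [0.7, 0.7]],
              ["0deg", "90deg", "45deg"])
print(predict(model, [0.9, 0.1]))  # → 90deg
```

In a real system the feature vectors would come from the vision pipeline, and the predicted class would feed the pose correction applied before executing the manipulation task.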


286. Flexible programming tool enabling synergy between human and robot

Stereos Alexandros Matthaiakis1, Konstantinos Dimoulas1, Athanasios Athanasatos1, Konstantinos Mparis1, George Dimitrakopoulos1, Christos Gkournelos1, Apostolis Papavasileiou1, Nikos Fousekis1, Stergios Papanastastiou1, Georgios Michalos1, Giacomo Angione2, Sotiris Makris1

1Laboratory for Manufacturing Systems and Automation (LMS), Greece; 2AEA s.r.l., Loccioni Group, Via Fiume 16, 60030, Angeli di Rosora (AN), Italy

This paper discusses a method for flexible robot programming and execution control that enables human-robot collaborative tasks. The method considers data that are initially generated through an offline programming tool and corrected online using feedback from force and vision sensors. A number of tasks are dynamically assigned to both human and robot resources. The user tests the programming result through an external control system, and the initially assigned tasks can be re-assigned online. The method has been implemented on top of ROS as an Android application and has been applied in the automotive and aeronautics industries for screwing and insertion operations.


28. Reinforcement Learning for Manipulators Without Direct Obstacle Perception in Physically Constrained Environments

Marie Ossenkopf1, Philipp Ennen2, Rene Vossen2, Sabina Jeschke2

1Universität Kassel, Germany; 2Institute of Information Management in Mechanical Engineering, RWTH Aachen University, Dennewartstr. 25, 52068 Aachen, Germany

Adapting a robotic assembly system to a new task requires manual setup. This represents a major bottleneck for the automated production of small batch sizes. One approach to reduce manual ramp-up time is to make the manufacturing system self-learning [Baroglio et al. 1996, Ennen et al. 2016].

Serial manipulators in industrial use for assembly or pick-and-place tasks pose a significant physical danger to themselves and their environment. This danger arises from potential collisions of the robot with obstacles in the environment. Without knowledge or sensing of the obstacles, a collision cannot be avoided in the first place. The robot is also endangered by movements that try to exceed its mechanical constraints with respect to maximum joint angles, velocities and accelerations. This becomes critical in the free exploration phase of a learning process. Learning algorithms for use on industrial serial manipulators therefore need to be adapted to address this problem.

We present two enhancements of the Relative Entropy Policy Search (REPS) algorithm [Peters 2010] that enable a robot to: (1) detect collisions during the learning process, (2) react to these collisions, (3) learn from these collisions, and (4) avoid planning movements outside the maximum joint angles. The enhancements utilize Dynamic Movement Primitives (DMPs) as the policy representation. DMPs are an established policy representation for serial manipulators [Schaal et al. 2005, Ijspeert et al. 2002, Deisenroth et al. 2013]. They map an acceleration onto a position and a velocity in state space by modeling every dimension as a spring-damper system. To be independent of the kinematic model, we use the dimensions of the robot's joint angle space instead of world coordinates.
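The spring-damper view of a single DMP dimension can be sketched with a minimal Euler integration. The gains, time step, and goal value below are illustrative choices, not the values used in the paper:

```python
def dmp_step(y, dy, goal, f, dt=0.01, alpha=25.0, beta=6.25):
    """One Euler step of a single DMP dimension (one joint angle).

    A spring-damper system pulls the joint angle y toward `goal`;
    the learned forcing term f shapes the trajectory along the way.
    With beta = alpha / 4 the system is critically damped.
    """
    ddy = alpha * (beta * (goal - y) - dy) + f
    dy = dy + ddy * dt
    y = y + dy * dt
    return y, dy

# With a zero forcing term the joint converges smoothly to the goal:
y, dy = 0.0, 0.0
for _ in range(2000):
    y, dy = dmp_step(y, dy, goal=1.2, f=0.0)
```

Running one such system per joint, with the forcing term as the learned policy parameters, gives the joint-space representation described above without requiring the robot's kinematic model.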

In particular, the two enhancements are: (1) We integrate potential fields into the DMPs to reduce the likelihood of exceeding the maximum joint angles. (2) We monitor the deviation between the planned trajectory and the current position to detect collisions and violations of the mechanical constraints. This enables us to interrupt a colliding movement. The deviation is also used to evaluate the policy.
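The deviation monitor in enhancement (2) could look roughly like the following. The threshold value and the max-over-joints distance measure are assumptions made for illustration:

```python
import numpy as np

def check_deviation(planned, actual, threshold=0.05):
    """Compare planned and measured joint angles (radians).

    A deviation above the threshold signals a collision or an
    attempted violation of the mechanical constraints, so the
    current movement can be interrupted. The deviation itself
    can also serve as a cost term when evaluating the policy.
    """
    deviation = float(np.max(np.abs(np.asarray(planned) - np.asarray(actual))))
    return deviation > threshold, deviation

# Joint 2 lags 0.2 rad behind the plan, e.g. blocked by an obstacle:
collided, dev = check_deviation([0.4, 1.1, -0.2], [0.4, 1.3, -0.2])
# collided is True, so the movement would be interrupted here
```

Because only the angular encoders are compared against the plan, this check needs no force sensing or environment model, matching the requirements stated below.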

The new features work on any serial robot with angular encoders. There is no need for additional sensors, an elaborate vision system, or a model of the environment. The approach does not require knowledge of the robot's kinematic and dynamic models, so no exact model has to be determined and the algorithm is unaffected by model errors. Obstacle avoidance can be learned without knowledge of the obstacles. Hence, these additional properties reduce the requirements imposed on the assembly system and the ramp-up time.

We tested the algorithm in a simulation of the ABB IRB120 robot, using a simple reaching task with obstacles for evaluation. We show that exceedances of the maximum joint angles can be significantly reduced, that collisions can be detected instantaneously, and that the algorithm learns to avoid experienced collisions.



 
Contact and Legal Notice · Contact Address:
Conference: FAIM 2017
Conference Software - ConfTool Pro 2.6.107
© 2001 - 2017 by H. Weinreich, Hamburg, Germany