A dominant trend in manufacturing is the move toward small production volumes and high product variability. It is thus anticipated that future manufacturing automation systems will be characterized by a high degree of autonomy, and will be able to learn new behaviors without explicit programming. Siemens Corporate Technology collaborates with leading researchers and academic institutions in this area.
In November 2017, 250 of the leading researchers working on robotics and machine learning met at the 1st Annual Conference on Robot Learning (CoRL 2017) in Mountain View, CA. Siemens proudly served as a Gold sponsor to support this community.
Robot Learning, and more generally, Autonomous Manufacturing, is an exciting research field at the intersection of Machine Learning and Automation. The combination of "traditional" control techniques with Artificial Intelligence holds the promise of allowing robots to learn new behaviors through experience. This has motivated many labs around the world to focus their attention on this area of research.
The question arises: how can we benchmark different machine learning algorithms and apply them to the challenges of industrial automation?
Researchers at Siemens Corporate Technology in Berkeley, CA, have developed a set of gears to test different robot learning approaches to assembly. Assembling these gears requires high precision and the ability to learn complex, changing contact dynamics.
If you want to benchmark your robot learning algorithms and apply them to a challenging problem, 3D print the gears and share your results with us!
How fast can your system learn? How much training data is required? What would you change in the design to make it even more challenging? These are all important questions that we want to put to the research community.
You can access the CAD files of the gears here.
Robot Assembly

Robot Learning covers the methodology, theory and art of enabling a robot, or any other automation system, to learn new skills and adapt to a flexible environment. Traditional control and Artificial Intelligence approaches are combined to increase automation flexibility in tasks such as locomotion, grasping or assembly.
Robotic assembly typically involves object manipulation tasks with substantial contact and friction, such as inserting or removing tight-fitting objects, or twisting a bolt into place. Designing robot controllers for such tasks is difficult, due to the complexity of modelling and estimating contact dynamics accurately. Consequently, nearly all real-world robotic assembly applications are implemented in repetitive scenarios, which can amortize the substantial engineering effort required. In addition, the implementations often rely on clever (special-purpose) fixtures to guide the assembly, and on part feeders to ensure repeatable initial conditions.
Prominent approaches to autonomous manipulation are based on either motion planning or reinforcement learning (RL). Recently, many promising results for autonomous control applications have emerged in Deep Reinforcement Learning (DRL), a synergy of RL and Deep Learning (DL). DRL algorithms have already been applied to a range of problems, from video games to robotics.
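To make the RL framing concrete, the following is a minimal sketch of how an insertion task can be cast as a learning problem. It is a deliberately simplified stand-in, not the Siemens gear benchmark: the "peg" occupies one of a few discrete lateral positions and must be moved over the "hole" before an insert action succeeds, and a tabular Q-learning agent (rather than a deep network) learns the policy from reward alone. All environment and function names here are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a contact-rich insertion task (illustrative only).
N_POS, HOLE = 5, 2            # discrete lateral positions; hole location
LEFT, RIGHT, INSERT = 0, 1, 2  # action set

def step(pos, action):
    """Return (next_pos, reward, done) for the toy environment."""
    if action == INSERT:
        # Insertion succeeds only when the peg is aligned with the hole.
        return pos, (1.0 if pos == HOLE else -1.0), True
    pos = max(0, pos - 1) if action == LEFT else min(N_POS - 1, pos + 1)
    return pos, -0.05, False   # small cost per move encourages short paths

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = np.random.default_rng(seed)
    q = np.zeros((N_POS, 3))
    for _ in range(episodes):
        pos, done = int(rng.integers(N_POS)), False
        while not done:
            a = int(rng.integers(3)) if rng.random() < eps else int(q[pos].argmax())
            nxt, r, done = step(pos, a)
            target = r if done else r + gamma * q[nxt].max()
            q[pos, a] += alpha * (target - q[pos, a])
            pos = nxt
    return q

q = train()

# Greedy rollout from the leftmost position: the learned policy should
# walk the peg over the hole and then insert.
pos, done, trace = 0, False, []
while not done and len(trace) < 20:
    a = int(q[pos].argmax())
    trace.append(a)
    pos, r, done = step(pos, a)
```

A DRL method such as DQN replaces the Q-table with a neural network, which is what makes the approach scale to the continuous, high-dimensional state spaces of real assembly; the reward-driven learning loop, however, has exactly this structure.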
The question, however, is how Siemens researchers can maximize the robustness and precision of DRL algorithms so that they can be applied with the highest level of confidence to the challenges of industrial applications.
See related work by our collaborators: