ACRV Picking Benchmark

We propose a physical benchmark for robotic picking, covering its overall design, object set, configuration, and guidance on appropriate technologies to solve it. Challenges are an important way to drive progress, but they occur only occasionally and their test conditions are difficult to replicate outside the event itself. This benchmark is motivated by our experience in the recent Amazon Picking Challenge and comprises a commonly available shelf, 42 objects, a set of stencils, and standardized task setups.
A major focus throughout the design of this benchmark was to maximise reproducibility: it defines a number of carefully chosen scenarios with precise instructions on how to place, orient, and align objects with the help of printable stencils. To make the benchmark as accessible as possible to the research community, a white IKEA shelf is used for all picking tasks. Furthermore, we carefully curated the set of 42 objects to ensure global availability and to reduce the chance of import restrictions.
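The benchmark itself specifies its standardized task setups on paper, via printable stencils and placement instructions. Purely as an illustration of the kind of information each setup pins down (which objects go in which shelf bin, how they are posed, and which items must be picked), here is a minimal hypothetical encoding; all class, field, and object names below are invented for this sketch and are not part of the benchmark.

```python
# Hypothetical sketch only: illustrates the information a standardized
# ACRV Picking Benchmark task setup fixes. Names are placeholders.
from dataclasses import dataclass, field

@dataclass
class ObjectPlacement:
    name: str          # object identifier from the 42-object set
    bin_id: str        # shelf bin the object is placed in
    stencil_pose: str  # stencil outline fixing position and orientation

@dataclass
class TaskSetup:
    task_id: str
    placements: list = field(default_factory=list)
    target_objects: list = field(default_factory=list)  # items the robot must pick

# Example instance (object names are placeholders, not the real benchmark items):
setup = TaskSetup(
    task_id="example_task_01",
    placements=[
        ObjectPlacement(name="toy_duck", bin_id="bin_A", stencil_pose="outline_1"),
        ObjectPlacement(name="cereal_box", bin_id="bin_B", stencil_pose="outline_2"),
    ],
    target_objects=["toy_duck"],
)
print(f"{setup.task_id}: pick {setup.target_objects} from the shelf")
```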



Author: Juxi Leitner

Research Fellow at the QUT node and Project Leader of the Vision and Action (VA) work on learning robotic hand-eye coordination. Juxi also leads the ACRV's Amazon Robotics Challenge team (and last year's Amazon Picking Challenge team). He received his PhD for work at the IDSIA Robotics Lab on the iCub humanoid. His background is in space robotics, and he previously worked in the Advanced Concepts Team of the European Space Agency.

Category: Datasets
Posted 24 April 2017
