Annotated RGB-D point clouds of everyday objects
from unstructured indoor scenes
Many robotic tasks in service and industrial robotics, such as pick-and-place in households or unloading goods from shipping containers, require visuoperceptual capabilities to detect objects with a wide variety of appearances. Objects can vary in shape, spatial dimensions, and texture content, and may be occluded. Moreover, in such tasks it may not be feasible to know a vast set of object models beforehand for detection or manipulation purposes. The discovery of unknown objects in unstructured scenes is therefore an interesting and challenging research topic in robotics. Approaches that address this detection problem require real, annotated sensor data to conduct adequate experiments and assess their performance.
The Object Discovery Dataset (ODD) is made available here for evaluation purposes. It consists of scenes captured with a Kinect-like RGB-D camera based on the structured-light principle at a resolution of 640x480 points. The scenes range from table-tops to unstructured scenes with a variety of simple- and complex-shaped objects that are in contact, stacked, or occluded, e.g. on shelves. The dataset contains simple-shaped objects such as boxes, cans, and cones, as well as complex-shaped objects such as teddy bears, cordless drills, electric irons, binoculars, and jars. In total, 296 human-annotated objects in 30 scenes were captured.
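As a minimal sketch of how such 640x480 organized RGB-D data is typically interpreted, the snippet below back-projects a depth image to an organized point cloud via the pinhole camera model. The intrinsics (FX, FY, CX, CY) are assumed placeholder values typical of Kinect-like sensors, not calibration data from this dataset.

```python
import numpy as np

# Assumed intrinsics typical of a Kinect-like 640x480 structured-light
# camera; the dataset may provide its own calibration parameters.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_points(depth):
    """Back-project a (480, 640) depth image in meters to an organized
    point cloud of shape (480, 640, 3) using the pinhole model."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack((x, y, z), axis=-1)

# Example: a flat surface 2 m in front of the camera.
depth = np.full((480, 640), 2.0)
cloud = depth_to_points(depth)
print(cloud.shape)  # (480, 640, 3)
```

Each point keeps its pixel neighborhood, which is what makes the cloud "organized" and allows image-grid-based processing alongside 3D geometry.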
Each scene consists of
Christian A. Mueller and Andreas Birk
"Hierarchical Graph-Based Discovery of Non-Primitive-Shaped Objects in Unstructured Environments"
In IEEE International Conference on Robotics and Automation (ICRA), May 2016.
Please cite the paper (BibTeX file) if you use the dataset.
Contributions to the dataset are welcome! Please contact us.
Contact: Christian A. Mueller [chr.mueller(at)jacobs-university.de]
Scene previews are displayed below (your browser requires WebGL support for proper rendering).