Below you can find information regarding data sets that our research group has released or can provide upon request:

Rutgers APC RGB-D Dataset

To help the research community evaluate and improve robotic perception for warehouse picking, the PRACSYS lab at Rutgers University provides a rich RGB-D dataset for warehouse picking and software for using it. The dataset contains 10,368 registered depth and RGB images, complete with hand-annotated 6-DOF poses for 24 of the 25 Amazon Picking Challenge (APC) objects (mead_index_cards is excluded). Also provided are 3D mesh models of all 25 APC objects, which may be used to train recognition algorithms.
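
Below is a minimal sketch of loading one registered RGB-D pair and its annotated pose with OpenCV and NumPy; the file names and the pose file layout are hypothetical placeholders, so adjust them to the dataset's actual structure:

```python
import cv2
import numpy as np

# Hypothetical file names; substitute the dataset's actual layout.
rgb = cv2.imread("scene_0001_rgb.png", cv2.IMREAD_COLOR)
# Depth is commonly stored as a 16-bit PNG in millimeters; IMREAD_UNCHANGED
# preserves the raw bit depth instead of converting to 8-bit.
depth = cv2.imread("scene_0001_depth.png", cv2.IMREAD_UNCHANGED)

# A 6-DOF pose is a rotation plus a translation; here we assume a 4x4
# homogeneous transform stored as a plain-text matrix.
pose = np.loadtxt("scene_0001_pose.txt").reshape(4, 4)
rotation, translation = pose[:3, :3], pose[:3, 3]
print(rgb.shape, depth.shape, translation)
```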

Rutgers Extended RGB-D Dataset

Dataset download link: download

For each scene in the dataset, we share the following; a loading sketch follows the list:

  • RGB Image
  • Depth Image
  • Segmentation mask
  • Parameters
    • camera_pose: pose of the camera in a global frame.
    • camera_intrinsics: intrinsic parameters of the camera.
    • rest_surface: pose of the resting surface such as a table or shelf bin.
    • dependency_order: the physical and visual dependencies of objects on one another.
    • pose: ground-truth object pose in a global frame.
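
To make the per-scene layout concrete, here is a hedged sketch of reading these parameters. It assumes one JSON file per scene whose keys mirror the list above; the actual file format and names may differ:

```python
import json

# Assumed layout: one JSON file per scene whose keys mirror the parameter
# list above. The file name and exact schema are illustrative only.
with open("scene_0001/params.json") as f:
    params = json.load(f)

camera_pose = params["camera_pose"]            # camera pose, global frame
intrinsics = params["camera_intrinsics"]       # e.g. fx, fy, cx, cy
rest_surface = params["rest_surface"]          # table / shelf-bin pose
dependency_order = params["dependency_order"]  # object support dependencies
object_poses = params["pose"]                  # ground-truth poses, global frame
print(len(object_poses), "annotated objects in this scene")
```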

[Figure: examples of scenes in the dataset and results of pose estimation with physics-based reasoning]

Autonomous data generation to train CNNs for object segmentation

The training dataset is generated by physical simulation of the setup in which the robot operates. The tool we developed for autonomous data generation, labeling and training is shared below.

Dataset Generation toolbox: https://github.com/cmitash/physim-dataset-generator
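
The general pattern behind such a toolbox is to drop object models in a physics simulator, let them settle, render the scene, and read back per-pixel labels. Below is a minimal PyBullet sketch of that idea (PyBullet is the simulator used elsewhere on this page); it illustrates the approach with a stock PyBullet asset and is not the toolbox's actual code:

```python
import numpy as np
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics and rendering
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")                               # resting surface
duck = p.loadURDF("duck_vhacd.urdf", basePosition=[0, 0, 0.5])

for _ in range(240):  # let the dropped object settle under gravity
    p.stepSimulation()

# getCameraImage returns RGB, depth, and a per-pixel segmentation mask;
# the mask is exactly the pixel-level labeling a segmentation CNN needs.
view = p.computeViewMatrix([0.5, 0.5, 0.5], [0, 0, 0], [0, 0, 1])
proj = p.computeProjectionMatrixFOV(60, 1.0, 0.01, 2.0)
width, height, rgb, depth, seg = p.getCameraImage(320, 320, view, proj)
print("pixels labeled as the object:", (np.asarray(seg) == duck).sum())
```

Repeating this loop with randomized object sets, drop poses, and camera viewpoints yields an arbitrarily large labeled training set without manual annotation.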

[Figure: examples of scenes generated by the toolbox]

Learned models for object segmentation

Faster-RCNN (VGG16), Physics Simulation + Self-Learning (Shelf): download
Faster-RCNN (VGG16), Physics Simulation + Self-Learning (Table-top): download
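
If the downloads follow the reference py-faster-rcnn (Caffe) format in which Faster-RCNN/VGG16 models of this era were commonly released (an assumption, not something stated above), inference would look roughly like this sketch; the prototxt and weight file names are placeholders:

```python
import caffe
import cv2
from fast_rcnn.test import im_detect  # from the py-faster-rcnn repository

# Placeholder paths; the actual prototxt must match the downloaded weights.
net = caffe.Net("vgg16_faster_rcnn.prototxt",
                "shelf_physim_selflearn.caffemodel", caffe.TEST)

im = cv2.imread("scene_0001_rgb.png")
scores, boxes = im_detect(net, im)  # per-class scores and box proposals
print(scores.shape, boxes.shape)
```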

6D pose hypotheses dataset for manipulation planning in a table-top setup

The 6D pose hypotheses dataset for manipulation planning is generated by:

(1) capturing images with an Azure Kinect camera,

(2) running a fully convolutional neural network that returns object classification and segmentation probability maps, and

(3) running a geometric model-matching process that returns pose hypotheses for each detected object (a registration sketch follows).
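
As a hedged illustration of step (3), one common way to turn a segmented depth region into a pose hypothesis is to register the object's 3D model against the segmented point cloud, for example with ICP in Open3D. This is a generic sketch, not necessarily the exact matching procedure used for this dataset, and the file names are placeholders:

```python
import open3d as o3d

# Placeholder inputs: the object's model point cloud and the points that
# the network's segmentation assigned to this object.
model = o3d.io.read_point_cloud("object_model.pcd")
segment = o3d.io.read_point_cloud("segment.pcd")

# Refine an initial guess (identity here) into one pose hypothesis;
# running this from several initial guesses yields a hypothesis set.
result = o3d.pipelines.registration.registration_icp(
    model, segment, 0.02,  # max correspondence distance in meters
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # 4x4 pose hypothesis for the object
```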

[Figure: example scenes; the first row shows the RGB images, the second row shows the pose hypotheses visualized in the PyBullet physics simulator]
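
As a hedged sketch of the kind of visualization shown in the second row: each hypothesis can be rendered in PyBullet by loading the object mesh and setting its base transform to the hypothesis pose. The mesh path and pose values below are placeholders:

```python
import time
import pybullet as p

p.connect(p.GUI)

# Placeholder mesh and pose; in practice, spawn one body per hypothesis
# returned by the model-matching step.
visual = p.createVisualShape(p.GEOM_MESH, fileName="object_model.obj")
p.createMultiBody(baseVisualShapeIndex=visual,
                  basePosition=[0.4, 0.0, 0.75],  # hypothesis translation
                  baseOrientation=p.getQuaternionFromEuler([0, 0, 1.57]))

while p.isConnected():  # keep the viewer open
    time.sleep(0.1)
```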

The dataset can be referenced here.