Towards Robust Product Packing
with a Minimalistic End-Effector

Rahul Shome*, Wei N. Tang*, Changkyu Song, Chaitanya Mitash, Chris Kourtev,
Jingjin Yu, Abdeslam Boularias, and Kostas E. Bekris

Finalist for Best Paper Award in Automation 
at the IEEE International Conference on Robotics and Automation (ICRA), 2019
arXiv

Abstract: Advances in sensor technologies, object detection algorithms, planning frameworks and hardware designs have motivated the deployment of robots in warehouse automation. A variety of such applications, like order fulfillment or packing tasks, require picking objects from unstructured piles and carefully arranging them in bins or containers. Solutions need to be low-cost, easily deployable and controllable, which makes minimalistic hardware choices desirable. The challenge in designing an effective solution lies in appropriately integrating multiple components so as to achieve a robust pipeline that minimizes failure conditions. The current work proposes a complete pipeline for solving such packing tasks, given access only to RGB-D data and a single robot arm with a minimalistic, vacuum-based end-effector. To achieve the desired level of robustness, three key manipulation primitives are identified, which take advantage of the environment and simple operations to successfully pack multiple cubic objects. The overall approach is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated by considering different versions of the proposed pipeline that incrementally introduce reasoning about object poses and corrective manipulation actions.

Fig 1: Left: Pipeline in terms of control, data flow (green lines) and failure handling (red lines). The blocks identify the modules of the system. The Sensing module receives an RGB-D image of the initial bin and the objects' CAD models, and returns a grasp point. Based on the picking surface, the object is either transferred to the target bin or handled by the Toppling module, which flips the object and places it back in the initial bin. When the object is transferred, a robust Placement module places it at the target pose. The Packing module then validates and corrects the placement to achieve tight packing. Right: (a) instance segmentation and (b) pose estimation with picking-point selection, both provided by sensing; (c) picking; (d) toppling; (e) placement; (f) packing.
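The control flow described above can be summarized in a short sketch. This is a minimal illustration assuming hypothetical module interfaces (capture_rgbd, select_grasp, flip_in_place, transfer_and_place, validate_and_correct); it is not the authors' implementation.

```python
# Minimal sketch of the Fig 1 control flow. Module names and method
# signatures are illustrative assumptions, not the authors' code.
def pack_next_object(sensing, toppling, placement, packing, initial_bin, target_bin):
    """Attempt to move one object from the initial bin to the target bin."""
    while True:
        # Sensing: instance segmentation, pose estimation, grasp-point selection.
        observation = sensing.capture_rgbd(initial_bin)
        pose, grasp_point, picking_surface = sensing.select_grasp(observation)
        if grasp_point is None:
            return False  # failure handling (red lines): re-sense or report

        # If the exposed surface does not allow the target placement,
        # topple the object inside the initial bin and sense again.
        if not placement.surface_allows_target(picking_surface):
            toppling.flip_in_place(grasp_point, initial_bin)
            continue

        # Transfer, place robustly (push-to-place), then let the Packing
        # module validate and correct the pose for a tight pack.
        placement.transfer_and_place(grasp_point, pose, target_bin)
        packing.validate_and_correct(target_bin)
        return True
```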

 

Experiments with links to Video Recordings

V1 - OUR APPROACH: The complete pipeline with all the primitives achieves the highest accuracy and success rate.

V2 - W/O CORRECTIVE ACTIONS: This version corresponds to V1 without the Packing module of Fig 1, which performs corrective actions.

V3 - W/O PUSH-TO-PLACE PRIMITIVE: This version is V2 without the robust Placement module (Fig 1), which performs push actions to achieve robust placements.

V4 - W/O TOPPLING ACTION: This version is V2 without toppling actions, which deal with objects that do not expose a valid surface allowing the target placement.

V5 - W/O POSE ESTIMATION: A naive baseline that solely uses a pose-unaware grasping module, which reports locally graspable points and drops the grasped object at an end-effector pose raised above the center of the desired object position, with no adjustment in orientation. The module composition of each version is summarized in the sketch after this list.
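For reference, the ablation versions can be written down as a small configuration table. The flag names below are assumptions made for exposition and do not correspond to the authors' code.

```python
# Illustrative mapping of the ablation versions V1-V5 to the pipeline
# modules they enable; flag names are hypothetical.
ABLATIONS = {
    "V1": {"pose_estimation": True,  "toppling": True,  "push_to_place": True,  "corrective_packing": True},
    "V2": {"pose_estimation": True,  "toppling": True,  "push_to_place": True,  "corrective_packing": False},
    "V3": {"pose_estimation": True,  "toppling": True,  "push_to_place": False, "corrective_packing": False},
    "V4": {"pose_estimation": True,  "toppling": False, "push_to_place": True,  "corrective_packing": False},
    "V5": {"pose_estimation": False, "toppling": False, "push_to_place": False, "corrective_packing": False},
}
```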

Fig 2: Left: The final object poses in the target bin at the end of each experiment. Each column corresponds to a different version (V1-V5). The top row is the best case, and the bottom row is the worst case. Right: the blue bars show the fraction of successful object transfers, and the orange bars show the percentage of unoccupied volume within the ideal target placement volume.
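The two metrics plotted in Fig 2 can be computed as follows; the function and argument names are illustrative assumptions, not the authors' evaluation code.

```python
# Hypothetical computation of the Fig 2 metrics.
def transfer_success_rate(num_transferred, num_attempted):
    """Fraction of objects successfully moved to the target bin (blue bars)."""
    return num_transferred / num_attempted

def unoccupied_volume_pct(placed_object_volumes, ideal_packing_volume):
    """Percentage of the ideal target placement volume left empty (orange bars)."""
    occupied = sum(placed_object_volumes)
    return 100.0 * (ideal_packing_volume - occupied) / ideal_packing_volume
```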

The authors are affiliated with the Department of Computer Science, Rutgers University, New Brunswick, NJ, USA. The authors would like to acknowledge the support of NSF IIS:1617744 and the JD-X Research and Development Center (RDC). Any opinions or findings expressed here do not reflect those of the sponsors.