Planning the motion of a robotic arm in a cluttered environment requires detecting the objects in the robot's vicinity and estimating their 6D pose (i.e., 3D location and 3D orientation). The goal of our work is to produce accurate pose estimates for objects in cluttered scenes. In particular, we have been working on 1) developing intelligent techniques to autonomously generate labeled datasets for training object recognition pipelines, and 2) developing search-based algorithms for scene estimation, given RGB-D data and 3D CAD models of the objects.
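To make the notion of a 6D pose concrete: it combines a 3D translation with a 3D rotation, and a common representation is a 4x4 homogeneous transformation matrix. The sketch below is purely illustrative (it is not the method described above); the function name and the choice of a z-axis rotation are assumptions for the example.

```python
import numpy as np

def pose_from_z_rotation(theta, translation):
    """Build a 4x4 homogeneous transform (a 6D pose) from a rotation
    of `theta` radians about the z-axis and a 3-vector translation.
    Illustrative helper, not part of the original pipeline."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = translation
    return T

# Transform a point from the object frame into the world frame:
# a 90-degree rotation about z maps (1, 0, 0) to (0, 1, 0), then the
# translation (0.5, 0, 0.2) is added.
pose = pose_from_z_rotation(np.pi / 2, [0.5, 0.0, 0.2])
p_object = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
p_world = pose @ p_object
print(np.round(p_world, 6))  # -> [0.5 1.  0.2 1. ]
```

Scene-estimation pipelines of the kind described above typically score hypothesized object poses in exactly this form, rendering the CAD model at a candidate transform and comparing against the observed RGB-D data.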