2022
@inproceedings{tensegrity_pose_tracking,
  title = {6N-DoF Pose Tracking for Tensegrity Robots},
  author = {S Lu and W Johnson and K Wang and X Huang and J Booth and R Kramer-Bottiglio and K E Bekris},
  url = {https://arxiv.org/abs/2205.14764},
  year = {2022},
  date = {2022-09-26},
  booktitle = {International Symposium on Robotics Research (ISRR)},
  abstract = {Tensegrity robots, which are composed of rigid compressive elements (rods) and flexible tensile elements (e.g., cables), have a variety of advantages, including flexibility, light weight, and resistance to mechanical impact. Nevertheless, the hybrid soft-rigid nature of these robots also complicates the ability to localize and track their state. This work addresses what has been recognized as a grand challenge in this domain: the pose tracking of tensegrity robots through a markerless, vision-based method, combined with novel, onboard sensors that can measure the length of the robot's cables. In particular, an iterative optimization process is proposed to estimate the 6-DoF pose of each rigid element of a tensegrity robot from an RGB-D video together with endcap distance measurements from the cable sensors. To ensure that the pose estimates of the rigid elements are physically feasible, i.e., that they do not result in collisions between rods or with the environment, physical constraints are introduced during the optimization. Real-world experiments are performed with a 3-bar tensegrity robot that executes locomotion gaits. Given ground-truth data from a motion capture system, the proposed method achieves less than 1 cm translation error and 3 degrees rotation error, significantly outperforming alternatives. At the same time, the approach provides pose estimates throughout the robot's motion, while motion capture often fails due to occlusions.},
  keywords = {Robot Perception, tensegrity},
  pubstate = {published},
  tppubtype = {inproceedings}
}
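The abstract above couples visual residuals with cable-length (endcap distance) constraints in one optimization. A minimal sketch of that idea, not the paper's actual 6-DoF solver: toy gradient descent over endcap positions, with the visual term and the cable-length term as soft penalties. All names and parameters here are hypothetical.

```python
import numpy as np

def track_endcaps(obs, cable_pairs, cable_lengths, iters=500, lr=0.05, w=1.0):
    """Estimate endcap positions by minimizing ||x_i - obs_i||^2 (visual
    residuals) plus w * (||x_a - x_b|| - L_ab)^2 (cable-length residuals)
    via plain gradient descent. Illustrative stand-in for the paper's
    constrained 6-DoF iterative optimizer."""
    x = obs.copy()
    for _ in range(iters):
        grad = 2.0 * (x - obs)                      # visual term
        for (a, b), L in zip(cable_pairs, cable_lengths):
            d = x[a] - x[b]
            n = np.linalg.norm(d)
            if n < 1e-9:
                continue
            g = 2.0 * w * (n - L) * d / n           # cable-length term
            grad[a] += g
            grad[b] -= g
        x -= lr * grad
    return x
```

With two endcaps observed 1.3 m apart but a measured cable length of 1.0 m, the optimizer pulls the estimates toward each other, trading off the visual evidence against the sensor measurement.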
@inproceedings{safe_reconstruction,
  title = {Safe, Occlusion-Aware Manipulation for Online Object Reconstruction in Confined Space},
  author = {Y Miao and R Wang and K E Bekris},
  url = {https://arxiv.org/abs/2205.11719},
  year = {2022},
  date = {2022-09-25},
  booktitle = {International Symposium on Robotics Research (ISRR)},
  abstract = {Recent work in robotic manipulation focuses on object retrieval in cluttered space under occlusion. Nevertheless, most efforts either lack an analysis of the conditions under which the approaches are complete, or apply only when objects can be removed from the workspace. This work formulates the general, occlusion-aware manipulation task and focuses on safe object reconstruction in a confined space with in-place relocation. A framework that ensures safety with completeness guarantees is proposed. Furthermore, an algorithm that instantiates this framework for monotone instances is developed and evaluated empirically against a random and a greedy baseline on randomly generated experiments in simulation. Even for cluttered scenes with realistic objects, the proposed algorithm significantly outperforms the baselines and maintains a high success rate across experimental conditions.},
  keywords = {Manipulation, Planning, Robot Perception},
  pubstate = {published},
  tppubtype = {inproceedings}
}
@inproceedings{yodo_rss22,
  title = {You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration},
  author = {B Wen and W Lian and K E Bekris and S Schaal},
  url = {https://arxiv.org/abs/2201.12716},
  year = {2022},
  date = {2022-06-29},
  booktitle = {Robotics: Science and Systems (RSS)},
  abstract = {Promising results have been achieved recently in category-level manipulation that generalizes across object instances. Nevertheless, it often requires expensive real-world data collection and manual specification of semantic keypoints for each object category and task. Additionally, coarse keypoint predictions and ignoring intermediate action sequences hinder adoption in complex manipulation tasks beyond pick-and-place. This work proposes a novel, category-level manipulation framework that leverages an object-centric, category-level representation and model-free 6-DoF motion tracking. The canonical object representation is learned solely in simulation and then used to parse a category-level task trajectory from a single demonstration video. The demonstration is reprojected to a target trajectory tailored to a novel object via the canonical representation. During execution, the manipulation horizon is decomposed into long-range, collision-free motion and last-inch manipulation. For the latter part, a category-level behavior cloning (CatBC) method leverages motion tracking to perform closed-loop control. CatBC follows the target trajectory, projected from the demonstration and anchored to a dynamically selected category-level coordinate frame. The frame is automatically selected along the manipulation horizon by a local attention mechanism. The framework makes it possible to teach different manipulation strategies by providing only a single demonstration, without complicated manual programming. Extensive experiments demonstrate its efficacy in a range of challenging industrial tasks in high-precision assembly, which involve learning complex, long-horizon policies. The process exhibits robustness against uncertainty due to dynamics as well as generalization across object instances and scene configurations.},
  note = {Nomination for Best Paper Award},
  keywords = {Learning, Manipulation, Robot Perception},
  pubstate = {published},
  tppubtype = {inproceedings}
}
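The reprojection step in the abstract above maps demonstration waypoints into a novel object's frame via the shared canonical category frame. A minimal sketch of that geometric step under the assumption that both transforms are known 4x4 homogeneous matrices; the function and argument names are hypothetical.

```python
import numpy as np

def reproject_waypoints(demo_pts, T_canon_from_demo, T_target_from_canon):
    """Map demonstration waypoints (N, 3) into the target object's frame
    by composing demo -> canonical -> target transforms (4x4 homogeneous).
    Illustrative of the reprojection idea only, not the paper's pipeline."""
    T = T_target_from_canon @ T_canon_from_demo
    h = np.hstack([demo_pts, np.ones((len(demo_pts), 1))])   # homogenize
    return (h @ T.T)[:, :3]
```

For pure translations the composition is just the summed offset, which makes the behavior easy to sanity-check before plugging in full rigid transforms.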
@inproceedings{morsegraphs_wafr22,
  title = {Morse Graphs: Topological Tools for Analyzing the Global Dynamics of Robot Controllers},
  author = {E Vieira and E Granados and A Sivaramakrishnan and M Gameiro and K Mischaikow and K E Bekris},
  url = {https://arxiv.org/abs/2202.08383},
  year = {2022},
  date = {2022-06-23},
  booktitle = {Workshop on the Algorithmic Foundations of Robotics (WAFR)},
  abstract = {Understanding the global dynamics of a robot controller, such as identifying attractors and their regions of attraction (RoA), is important for safe deployment and for synthesizing more effective hybrid controllers. This paper proposes a topological framework to analyze the global dynamics of robot controllers, even data-driven ones, in an effective and explainable way. It builds a combinatorial representation of the underlying system's state space and non-linear dynamics, which is summarized in a directed acyclic graph, the Morse graph. The approach probes the dynamics only locally, by forward propagating short trajectories over a state-space discretization; the dynamics need to be a Lipschitz-continuous function. The framework is evaluated given either numerical or data-driven controllers for classical robotic benchmarks. It is compared against established analytical and recent machine learning alternatives for estimating the RoAs of such controllers and is shown to outperform them in accuracy and efficiency. It also provides deeper insights, as it describes the global dynamics up to the discretization's resolution. This makes it possible to use the Morse graph to identify how to synthesize controllers that form improved hybrid solutions, or to identify the physical limitations of a robotic system.},
  keywords = {Dynamics},
  pubstate = {published},
  tppubtype = {inproceedings}
}
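The construction sketched in the abstract above (discretize the state space, propagate the dynamics one step per cell, summarize recurrent behavior and basins) can be illustrated on a 1D map where each cell has a single outgoing edge. This is a toy special case, not the paper's general SCC-based Morse-graph algorithm; all names are hypothetical.

```python
def morse_graph_1d(f, lo, hi, n, steps=100):
    """Toy Morse-graph construction for a 1D map f on [lo, hi]:
    discretize into n cells, map each cell center through f to obtain a
    combinatorial (functional) graph, mark cells lying on cycles as
    recurrent (Morse) sets, and assign every transient cell the recurrent
    cell it eventually reaches (its region of attraction)."""
    width = (hi - lo) / n

    def cell_of(x):
        return min(n - 1, max(0, int((x - lo) / width)))

    succ = [cell_of(f(lo + (i + 0.5) * width)) for i in range(n)]
    # A cell is recurrent iff it lies on a cycle of the cell map.
    recurrent = set()
    for c in range(n):
        x = c
        for _ in range(steps):        # walk until we are on a cycle
            x = succ[x]
        start, cyc = x, {x}
        x = succ[start]
        while x != start:             # collect the cycle
            cyc.add(x)
            x = succ[x]
        if c in cyc:
            recurrent.add(c)
    # Region of attraction: follow edges until a recurrent cell is hit.
    basin = {}
    for c in range(n):
        x = c
        while x not in recurrent:
            x = succ[x]
        basin[c] = x
    return recurrent, basin
```

For the contraction f(x) = 0.5x on [-2, 2] with 8 cells, the two cells straddling the fixed point at 0 come out recurrent, and every other cell's basin points at one of them.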
@inproceedings{lazy_confined_rearrangement,
  title = {Lazy Rearrangement Planning in Confined Spaces},
  author = {R Wang and K Gao and J Yu and K E Bekris},
  url = {https://arxiv.org/abs/2203.10379},
  year = {2022},
  date = {2022-06-20},
  booktitle = {International Conference on Automated Planning and Scheduling (ICAPS)},
  abstract = {Object rearrangement is important for many applications but remains challenging, especially in confined spaces, such as shelves, where objects cannot be accessed from above and block reachability to each other. Such constraints require many motion planning and collision checking calls, which are computationally expensive. In addition, the arrangement space grows exponentially with the number of objects. To address these issues, this work introduces a lazy evaluation framework with a local monotone solver and a global planner. Monotone instances are those that can be solved by moving each object at most once. A key insight is that reachability constraints at the grasps for objects' starts and goals can quickly reveal dependencies between objects without executing expensive motion planning queries. Given that, the local solver lazily builds a search tree that respects these reachability constraints without verifying that the arm paths are collision-free; it performs collision checking only when a promising solution is found. If a monotone solution is not found, the non-monotone planner loads the lazy search tree and explores ways to move objects to intermediate locations from which monotone solutions to the goal can be found. Results show that the proposed framework solves difficult instances in confined spaces with up to 16 objects, which state-of-the-art methods fail to solve. It also solves problems faster than alternatives, when the alternatives find a solution, and achieves high-quality solutions, i.e., only 1.8 additional actions on average are needed for non-monotone instances.},
  keywords = {Manipulation, Rearrangement},
  pubstate = {published},
  tppubtype = {inproceedings}
}
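The key insight in the abstract above is that grasp-reachability constraints induce a dependency graph between objects, and a monotone move order exists exactly when that graph is acyclic. A minimal sketch of that check using Kahn's topological sort; the paper's actual solver builds a lazy search tree over these constraints, and the names here are hypothetical.

```python
from collections import defaultdict, deque

def monotone_order(deps, objects):
    """deps: set of (a, b) pairs meaning object a must be moved before b
    (e.g., a's current placement blocks the grasp needed for b).
    Returns a monotone move order via Kahn's algorithm, or None if the
    dependency graph has a cycle (a non-monotone instance)."""
    indeg = {o: 0 for o in objects}
    out = defaultdict(list)
    for a, b in deps:
        out[a].append(b)
        indeg[b] += 1
    q = deque(o for o in objects if indeg[o] == 0)
    order = []
    while q:
        o = q.popleft()
        order.append(o)
        for b in out[o]:
            indeg[b] -= 1
            if indeg[b] == 0:
                q.append(b)
    return order if len(order) == len(objects) else None
```

A chain of blocking constraints yields a unique monotone order, while a mutual-blocking cycle correctly reports that some object must visit an intermediate location first.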
@article{survey_learning_planning,
  title = {A Survey on the Integration of Machine Learning with Sampling-based Motion Planning},
  author = {T McMahon and A Sivaramakrishnan and E Granados and K E Bekris},
  year = {2022},
  date = {2022-06-20},
  journal = {Foundations and Trends in Robotics},
  keywords = {Learning, Planning},
  pubstate = {forthcoming},
  tppubtype = {article}
}
@inproceedings{terrain_sampling_simulated,
  title = {Terrain-Aware Learned Controllers for Sampling-Based Kinodynamic Planning over Physically Simulated Terrains},
  author = {T McMahon and A Sivaramakrishnan and K Kedia and E Granados and K E Bekris},
  year = {2022},
  date = {2022-06-01},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  keywords = {Dynamics, Learning, Planning},
  pubstate = {published},
  tppubtype = {inproceedings}
}
@inproceedings{reconstruct_lifelong_manipulation,
  title = {Online Object Model Reconstruction and Reuse for Lifelong Improvement of Robot Manipulation},
  author = {S Lu and R Wang and Y Miao and C Mitash and K E Bekris},
  url = {https://arxiv.org/abs/2109.13910},
  year = {2022},
  date = {2022-05-28},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  abstract = {This work proposes a robotic pipeline for picking and constrained placement of objects without geometric shape priors. Compared to recent efforts developed for similar tasks, where every object was assumed to be novel, the proposed system recognizes previously manipulated objects and performs online model reconstruction and reuse. Over a lifelong manipulation process, the system keeps learning features of objects it has interacted with and updates their reconstructed models. Whenever an instance of a previously manipulated object reappears, the system aims to first recognize it and then register its previously reconstructed model against the current observation. This step greatly reduces object shape uncertainty, allowing the system to reason even about parts of objects that are not currently observable. It also improves manipulation efficiency, as it reduces the need for active perception of the target object during manipulation. To obtain a reusable reconstructed model, the proposed pipeline adopts: i) a TSDF for object representation, and ii) a variant of the standard particle filter algorithm for pose estimation and tracking of the partial object model. Furthermore, an effective way to construct and maintain a dataset of manipulated objects is presented. A sequence of real-world manipulation experiments shows how future manipulation tasks become more effective and efficient by reusing reconstructed models of previously manipulated objects, generated during their prior manipulation, instead of treating objects as novel every time.},
  note = {Nomination for Best Paper Award in Manipulation},
  keywords = {Manipulation, Robot Perception},
  pubstate = {published},
  tppubtype = {inproceedings}
}
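The abstract above relies on a particle filter variant for pose estimation and tracking of the partial object model. A minimal predict-update-resample sketch over a scalar pose, a toy stand-in for the paper's 6-DoF filter; the function name, noise parameters, and Gaussian observation model are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, observation,
                         motion_noise, obs_noise, rng):
    """One predict-update-resample step of a toy particle filter over a
    scalar pose. Gaussian motion and observation models are assumed."""
    n = len(particles)
    particles = particles + rng.normal(0.0, motion_noise, n)        # predict
    lik = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights = weights * lik
    weights = weights / weights.sum()                               # update
    idx = rng.choice(n, size=n, p=weights)                          # resample
    return particles[idx], np.full(n, 1.0 / n)
```

Starting from particles spread uniformly over a wide interval, repeated steps against a fixed observation concentrate the population near the observed pose.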
@inproceedings{catgrasp_icra22,
  title = {CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation},
  author = {B Wen and W Lian and K E Bekris and S Schaal},
  url = {https://arxiv.org/abs/2109.09163},
  year = {2022},
  date = {2022-05-25},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  abstract = {Task-relevant grasping is critical for industrial assembly, where downstream manipulation tasks constrain the set of valid grasps. Learning how to perform this task, however, is challenging, since task-relevant grasp labels are hard to define and annotate. There is also no consensus yet on proper representations for modeling or off-the-shelf tools for performing task-relevant grasps. This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation. To achieve this, the entire framework is trained solely in simulation, including supervised training with synthetic label generation and self-supervised hand-object interaction. In the context of this framework, this paper proposes a novel, object-centric canonical representation at the category level, which allows establishing dense correspondence across object instances and transferring task-relevant grasps to novel instances. Extensive experiments on task-relevant grasping of densely-cluttered industrial objects are conducted in both simulation and real-world setups, demonstrating the effectiveness of the proposed framework. Code and data are released at https://sites.google.com/view/catgrasp.},
  keywords = {Manipulation, Robot Perception},
  pubstate = {published},
  tppubtype = {inproceedings}
}
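The dense-correspondence transfer mentioned in the abstract above can be reduced, in its simplest form, to nearest-neighbor matching in a shared descriptor space. A sketch under that assumption; the descriptors, names, and matching rule here are hypothetical and far simpler than the paper's learned representation.

```python
import numpy as np

def transfer_grasp_point(grasp_feat, inst_feats, inst_pts):
    """Transfer a grasp annotated at a canonical-space descriptor to a
    novel instance: pick the instance surface point whose descriptor is
    nearest (Euclidean) to the grasp's canonical descriptor.
    inst_feats: (N, d) descriptors, inst_pts: (N, 3) surface points."""
    d = np.linalg.norm(inst_feats - grasp_feat, axis=1)
    return inst_pts[int(np.argmin(d))]
```

Even this crude rule shows the mechanism: as long as descriptors are consistent across instances, a grasp defined once in the canonical space lands on the corresponding part of a new object.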
@article{inhand_gaiting_multimodal,
  title = {Complex In-Hand Manipulation via Compliance-Enabled Finger Gaiting and Multi-Modal Planning},
  author = {A Morgan and K Hang and B Wen and K E Bekris and A Dollar},
  url = {https://arxiv.org/abs/2201.07928},
  year = {2022},
  date = {2022-05-24},
  journal = {IEEE Robotics and Automation Letters (also at ICRA)},
  abstract = {Constraining contacts to remain fixed on an object during manipulation limits the potential workspace size, as motion is subject to the hand's kinematic topology. Finger gaiting is one way to alleviate such restraints. It allows contacts to be freely broken and remade so as to operate on different manipulation manifolds. This capability, however, has traditionally been difficult or impossible to practically realize. A finger gaiting system must simultaneously plan for and control forces on the object while maintaining stability during contact switching. This work alleviates the traditional requirement by taking advantage of system compliance, allowing the hand to more easily switch contacts while maintaining a stable grasp. Our method achieves complete SO(3) finger gaiting control of grasped objects against gravity by developing a manipulation planner that operates via orthogonal safe modes of a compliant, underactuated hand absent of tactile sensors or joint encoders. During manipulation, a low-latency 6D pose object tracker provides feedback via vision, allowing the planner to update its plan online so as to adaptively recover from trajectory deviations. The efficacy of this method is showcased by manipulating both convex and non-convex objects on a real robot. Its robustness is evaluated via perturbation rejection and long trajectory goals. To the best of the authors' knowledge, this is the first work that has autonomously achieved full SO(3) control of objects within-hand via finger gaiting and without a support surface, elucidating a valuable step towards realizing true robot in-hand manipulation capabilities.},
  keywords = {Manipulation},
  pubstate = {published},
  tppubtype = {article}
}
Wang, K; Aanjaneya, M; Bekris, K E A Recurrent Differentiable Engine for Modeling Tensegrity Robots Trainable with Low-Frequency Data Inproceedings IEEE International Conference on Robotics and Automation (ICRA), 2022. Abstract | Links | BibTeX | Tags: Dynamics, tensegrity @inproceedings{diff_engine_tensegrity_low_freq, title = {A Recurrent Differentiable Engine for Modeling Tensegrity Robots Trainable with Low-Frequency Data}, author = {K Wang and M Aanjaneya and K E Bekris}, url = {https://arxiv.org/abs/2203.00041}, year = {2022}, date = {2022-05-24}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {Tensegrity robots, composed of rigid rods and flexible cables, are difficult to accurately model and control given their complex dynamics and high number of DoFs. Differentiable physics engines have recently been proposed as a data-driven approach for model identification of such complex robotic systems. These engines are often executed at a high frequency to achieve accurate simulation. Ground truth trajectories for training differentiable engines, however, are not typically available at such high frequencies due to limitations of real-world sensors. The present work focuses on this frequency mismatch, which impacts modeling accuracy. We propose a recurrent structure for a differentiable physics engine of tensegrity robots, which can be trained effectively even with low-frequency trajectories. To train this new recurrent engine in a robust way, this work introduces, relative to prior work: (i) a new implicit integration scheme, (ii) a progressive training pipeline, and (iii) a differentiable collision checker. A model of NASA's icosahedron SUPERballBot on MuJoCo is used as the ground truth system to collect training data. Simulated experiments show that once the recurrent differentiable engine has been trained on the low-frequency trajectories from MuJoCo, it is able to match the behavior of MuJoCo's system. The criterion for success is whether a locomotion strategy learned using the differentiable engine can be transferred back to the ground-truth system and result in a similar motion. Notably, the amount of ground truth data needed to train the differentiable engine, such that the policy is transferable to the ground-truth system, is 1% of the data needed to train the policy directly on the ground-truth system.}, keywords = {Dynamics, tensegrity}, pubstate = {published}, tppubtype = {inproceedings} } |
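The stability benefit of an implicit integration scheme, item (i) in the abstract above, can be illustrated in isolation. The sketch below is not the paper's engine: it integrates a generic damped spring (an assumed stand-in for a single cable element, with made-up constants) and compares explicit and implicit Euler at a coarse 10 Hz timestep, the regime where low-frequency training data lives.

```python
def implicit_euler_step(x, v, dt, k=100.0, c=0.5, m=1.0):
    """One implicit-Euler step for a damped spring m*a = -k*x - c*v.

    The implicit update is linear in (x+, v+), so it can be solved in
    closed form; this keeps the step stable even at large dt."""
    v_new = (v - dt * k * x / m) / (1.0 + dt * c / m + dt**2 * k / m)
    x_new = x + dt * v_new
    return x_new, v_new

def explicit_euler_step(x, v, dt, k=100.0, c=0.5, m=1.0):
    """Standard explicit Euler on the same system, for comparison."""
    v_new = v + dt * (-k * x - c * v) / m
    x_new = x + dt * v  # uses the old velocity
    return x_new, v_new

# Integrate a stiff spring at a coarse 10 Hz step: the explicit rollout
# diverges, while the implicit rollout stays bounded and decays.
x_i, v_i = 1.0, 0.0
x_e, v_e = 1.0, 0.0
max_explicit = 0.0
for _ in range(50):
    x_i, v_i = implicit_euler_step(x_i, v_i, dt=0.1)
    x_e, v_e = explicit_euler_step(x_e, v_e, dt=0.1)
    max_explicit = max(max_explicit, abs(x_e))
```

With these constants the explicit trajectory grows by orders of magnitude within 50 steps, while the implicit one remains a damped oscillation, which is why implicit steps are attractive when the engine must take timesteps as large as the sensing rate.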
Wang, R; Miao, Y; Bekris, K E Efficient and High-Quality Prehensile Rearrangement in Cluttered and Confined Spaces Inproceedings IEEE International Conference on Robotics and Automation (ICRA), 2022. Abstract | Links | BibTeX | Tags: Manipulation, Rearrangement @inproceedings{highqual_prehensile_rearrangement, title = {Efficient and High-Quality Prehensile Rearrangement in Cluttered and Confined Spaces}, author = {R Wang and Y Miao and K E Bekris}, url = {https://arxiv.org/abs/2110.02814}, year = {2022}, date = {2022-05-24}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {Prehensile object rearrangement in cluttered and confined spaces has broad applications but is also challenging. For instance, rearranging products on a grocery shelf means that the robot cannot directly access all objects and has limited free space. This is harder than tabletop rearrangement, where objects are easily accessible with top-down grasps, which simplifies robot-object interactions. This work focuses on problems where such interactions are critical for completing tasks. It proposes a new efficient and complete solver under general constraints for monotone instances, which can be solved by moving each object at most once. The monotone solver reasons about robot-object constraints and uses them to effectively prune the search space. The new monotone solver is integrated with a global planner to quickly solve non-monotone instances with high-quality solutions. Furthermore, this work contributes an effective pre-processing tool to significantly speed up online motion planning queries for rearrangement in confined spaces. Experiments further demonstrate that the proposed monotone solver, equipped with the pre-processing tool, results in 57.3% faster computation and a 3 times higher success rate than state-of-the-art methods. Similarly, the resulting global planner is computationally more efficient and has a higher success rate, while producing high-quality solutions for non-monotone instances (i.e., only 1.3 additional actions are needed on average).}, keywords = {Manipulation, Rearrangement}, pubstate = {published}, tppubtype = {inproceedings} } |
Vieira, E; Nakhimovich, D; Gao, K; Wang, R; Yu, J; Bekris, K E Persistent Homology for Effective Non-Prehensile Manipulation Inproceedings IEEE International Conference on Robotics and Automation (ICRA), 2022. Abstract | Links | BibTeX | Tags: Manipulation @inproceedings{homology_nonprehensile, title = {Persistent Homology for Effective Non-Prehensile Manipulation}, author = {E Vieira and D Nakhimovich and K Gao and R Wang and J Yu and K E Bekris}, url = {https://arxiv.org/abs/2202.02937}, year = {2022}, date = {2022-05-24}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {This work explores the use of topological tools for achieving effective non-prehensile manipulation in cluttered, constrained workspaces. In particular, it proposes the use of persistent homology as a guiding principle in identifying the appropriate non-prehensile actions, such as pushing, to clean a cluttered space with a robotic arm so as to allow the retrieval of a target object. Persistent homology enables the automatic identification of connected components of blocking objects in the space without the need for manual input or tuning of parameters. The proposed algorithm uses this information to push groups of cylindrical objects together and aims to minimize the number of pushing actions needed to reach the target. Simulated experiments in a physics engine using a model of the Baxter robot show that the proposed topology-driven solution achieves a significantly higher success rate in solving such constrained problems relative to state-of-the-art alternatives from the literature. It keeps the number of pushing actions low, is computationally efficient, and the resulting decisions and motions appear natural for effectively solving such tasks.}, keywords = {Manipulation}, pubstate = {published}, tppubtype = {inproceedings} } |
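The 0-dimensional part of persistent homology mentioned in the abstract above amounts to tracking connected components as a distance threshold grows. The sketch below is a minimal illustration of that H0 computation, not the paper's implementation: a union-find over pairwise distances of object centers, with the function name, disc geometry, and fixed radius all being illustrative assumptions.

```python
from itertools import combinations
import math

def h0_components(centers, radius):
    """Connected components of the graph joining centers closer than
    `radius` apart: this mirrors 0-dimensional persistent homology,
    where every point is born as its own component at filtration value
    zero and components merge (die) as the radius threshold grows."""
    parent = list(range(len(centers)))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(centers)), 2):
        if math.dist(centers[i], centers[j]) <= radius:
            parent[find(i)] = find(j)  # union the two components

    comps = {}
    for i in range(len(centers)):
        comps.setdefault(find(i), []).append(i)
    return sorted(comps.values())
```

At a radius comparable to an object diameter, each returned component is a cluster of mutually blocking objects that could, in the spirit of the paper, be pushed together as a group rather than one at a time.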
Granados, E; Boularias, A; Bekris, K E; Aanjaneya, M Model Identification and Control of a Mobile Robot with Omnidirectional Wheels Using Differentiable Physics Inproceedings IEEE International Conference on Robotics and Automation (ICRA), 2022. Abstract | Links | BibTeX | Tags: Dynamics, Learning @inproceedings{model_identification_omnidirectional, title = {Model Identification and Control of a Mobile Robot with Omnidirectional Wheels Using Differentiable Physics}, author = {E Granados and A Boularias and K E Bekris and M Aanjaneya}, url = {https://orionquest.github.io/papers/MICLCMR/paper.html}, year = {2022}, date = {2022-05-24}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {We present a new data-driven technique for predicting the motion of a low-cost omnidirectional mobile robot under the influence of motor torques and friction forces. Our method utilizes a novel differentiable physics engine for analytically computing the gradient of the deviation between predicted motion trajectories and real-world trajectories. This makes it possible to automatically learn and fine-tune the unknown friction coefficients on the fly, by minimizing a carefully designed loss function using gradient descent. Experiments show that the predicted trajectories are in excellent agreement with their real-world counterparts. Our proposed approach is computationally superior to existing black-box optimization methods, requiring very few real-world samples for accurate trajectory prediction compared to physics-agnostic techniques, such as neural networks. Experiments also demonstrate that the proposed method allows the robot to quickly adapt to changes in the terrain. Our proposed approach combines the data-efficiency of classical analytical models that are derived from first principles with the flexibility of data-driven methods, which makes it appropriate for low-cost mobile robots.}, keywords = {Dynamics, Learning}, pubstate = {published}, tppubtype = {inproceedings} } |
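The identification loop described above can be reduced to a toy example: fit a single friction coefficient of a 1D pushed block by differentiating the trajectory error through the rollout. The sketch below hand-derives the gradient that an autodiff-based differentiable engine would compute automatically; all constants and names are illustrative assumptions, not the paper's model.

```python
def rollout_with_grad(mu, force=5.0, mass=1.0, g=9.81, dt=0.1, steps=20):
    """Simulate a 1D block pushed with constant force under Coulomb
    friction mu, accumulating d(position)/d(mu) alongside the state --
    the same chain rule an autodiff engine applies through a rollout.
    Assumes the block keeps sliding forward (force > mu*mass*g)."""
    x = v = 0.0
    dx_dmu = dv_dmu = 0.0
    traj, grads = [], []
    for _ in range(steps):
        a = (force - mu * mass * g) / mass
        v += a * dt
        dv_dmu += -g * dt          # da/dmu = -g, integrated into v
        x += v * dt
        dx_dmu += dv_dmu * dt      # and then into x
        traj.append(x)
        grads.append(dx_dmu)
    return traj, grads

# "Observed" trajectory generated with the true (unknown) coefficient.
true_mu = 0.3
observed, _ = rollout_with_grad(true_mu)

# Gradient descent on the squared trajectory error identifies mu.
mu = 0.0
for _ in range(100):
    traj, grads = rollout_with_grad(mu)
    grad = sum(2.0 * (xp - xo) * g_x
               for xp, xo, g_x in zip(traj, observed, grads))
    mu -= 1e-4 * grad
```

Because the analytic gradient is exact, the estimate converges in a handful of iterations from a single short trajectory, which is the data-efficiency argument the abstract makes against black-box and purely neural alternatives.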
Liang, J; Wen, B; Bekris, K E; Boularias, A Learning Sensorimotor Primitives of Sequential Manipulation Tasks from Visual Demonstrations Inproceedings IEEE International Conference on Robotics and Automation (ICRA), 2022. Abstract | Links | BibTeX | Tags: Learning, Manipulation @inproceedings{learning_sequential_manipulation, title = {Learning Sensorimotor Primitives of Sequential Manipulation Tasks from Visual Demonstrations}, author = {J Liang and B Wen and K E Bekris and A Boularias}, url = {https://arxiv.org/abs/2203.03797}, year = {2022}, date = {2022-05-24}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {This work aims to learn how to perform complex robot manipulation tasks that are composed of several consecutively executed low-level sub-tasks, given as input a few visual demonstrations of the tasks performed by a person. The sub-tasks consist of moving the robot's end-effector until it reaches a sub-goal region in the task space, performing an action, and triggering the next sub-task when a pre-condition is met. Most prior work in this domain has been concerned with learning only low-level tasks, such as hitting a ball or reaching an object and grasping it. This paper describes a new neural network-based framework for simultaneously learning low-level policies as well as high-level policies, such as deciding which object to pick next or where to place it relative to other objects in the scene. A key feature of the proposed approach is that the policies are learned directly from raw videos of task demonstrations, without any manual annotation or post-processing of the data. Empirical results on object manipulation tasks with a robotic arm show that the proposed network can efficiently learn from real visual demonstrations to perform the tasks, and outperforms popular imitation learning algorithms.}, keywords = {Learning, Manipulation}, pubstate = {published}, tppubtype = {inproceedings} } |
Gao, K; Lau, D; Huang, B; Bekris, K E; Yu, J Fast High-Quality Tabletop Rearrangement in Bounded Workspace Inproceedings IEEE International Conference on Robotics and Automation (ICRA), 2022. Abstract | Links | BibTeX | Tags: Manipulation, Rearrangement @inproceedings{fast_tabletop_rearrangement, title = {Fast High-Quality Tabletop Rearrangement in Bounded Workspace}, author = {K Gao and D Lau and B Huang and K E Bekris and J Yu}, url = {https://arxiv.org/abs/2110.12325}, year = {2022}, date = {2022-05-23}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {In this paper, we examine the problem of rearranging many objects on a tabletop in a cluttered setting using overhand grasps. Efficient solutions for the problem, which capture a common task that we solve on a daily basis, are essential in enabling truly intelligent robotic manipulation. In a given instance, objects may need to be placed at temporary positions ("buffers") to complete the rearrangement, but allocating these buffer locations can be highly challenging in a cluttered environment. To tackle the challenge, a two-step baseline planner is first developed, which generates a primitive plan based on inherent combinatorial constraints induced by start and goal poses of the objects and then selects buffer locations assisted by the primitive plan. We then employ this "lazy" planner in a tree search framework, which is further sped up by a novel preprocessing routine. Simulation experiments show our methods can quickly generate high-quality solutions and are more robust in solving large-scale instances than existing state-of-the-art approaches.}, keywords = {Manipulation, Rearrangement}, pubstate = {published}, tppubtype = {inproceedings} } |
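The "inherent combinatorial constraints induced by start and goal poses" can be made concrete with a toy dependency check: in a simplified disc-world, object i cannot be placed while another object j still occupies i's goal, so j must move first. When this blocking relation is acyclic, a topological order rearranges everything with one move per object; a cycle signals that a buffer location is needed. This is only a hedged sketch under those assumptions, not the paper's planner.

```python
import math

def monotone_order(starts, goals, diameter):
    """Return an order moving each disc-shaped object exactly once from
    starts[i] to goals[i], or None when a dependency cycle forces the
    use of a buffer location. Overlap is a simple center-distance test."""
    n = len(starts)
    succ = [set() for _ in range(n)]
    indeg = [0] * n
    for i in range(n):
        for j in range(n):
            # Object j still sits on i's goal: j must move before i.
            if i != j and math.dist(goals[i], starts[j]) < diameter:
                if i not in succ[j]:
                    succ[j].add(i)
                    indeg[i] += 1
    # Kahn's algorithm; ties broken by smallest index for determinism.
    order = []
    ready = sorted(k for k in range(n) if indeg[k] == 0)
    while ready:
        j = ready.pop(0)
        order.append(j)
        for i in sorted(succ[j]):
            indeg[i] -= 1
            if indeg[i] == 0:
                ready.append(i)
    return order if len(order) == n else None
```

For a chain (object 0's goal is object 1's start) the function orders the moves; for two objects swapping positions it returns None, the minimal case where a temporary buffer placement becomes unavoidable.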
2021 |
Shah, D S; Booth, J W; Baines, R L; Wang, K; Vespignani, M; Bekris, K E; Kramer-Bottiglio, R Tensegrity Robotics Journal Article Soft Robotics, 2021. Abstract | Links | BibTeX | Tags: tensegrity @article{tensegrity_survey, title = {Tensegrity Robotics}, author = {D S Shah and J W Booth and R L Baines and K Wang and M Vespignani and K E Bekris and R Kramer-Bottiglio}, url = {https://www.liebertpub.com/doi/10.1089/soro.2020.0170}, year = {2021}, date = {2021-12-01}, journal = {Soft Robotics}, abstract = {Numerous recent advances in robotics have been inspired by the biological principle of tensile integrity, or “tensegrity,” to achieve remarkable feats of dexterity and resilience. Tensegrity robots contain compliant networks of rigid struts and soft cables, allowing them to change their shape by adjusting their internal tension. Local rigidity along the struts provides support to carry electronics and scientific payloads, while global compliance enabled by the flexible interconnections of struts and cables allows a tensegrity to distribute impacts and prevent damage. Numerous techniques have been proposed for designing and simulating tensegrity robots, giving rise to a wide range of locomotion modes including rolling, vibrating, hopping, and crawling. Here, we review progress in the burgeoning field of tensegrity robotics, highlighting several emerging challenges, including automated design, state sensing, and kinodynamic motion planning.}, keywords = {tensegrity}, pubstate = {published}, tppubtype = {article} } |
Meng, P; Wang, W; Balkcom, D; Bekris, K E Proof-of-Concept Designs for the Assembly of Modular, Dynamic Tensegrities into Easily Deployable Structures Conference ASCE Earth and Space Conference 2021, Seattle, WA, 2021. @conference{228, title = {Proof-of-Concept Designs for the Assembly of Modular, Dynamic Tensegrities into Easily Deployable Structures}, author = {P Meng and W Wang and D Balkcom and K E Bekris}, year = {2021}, date = {2021-10-06}, booktitle = {ASCE Earth and Space Conference 2021}, address = {Seattle, WA}, abstract = {Dynamic tensegrity robots are inspired by tensegrity structures in architecture; arrangements of rigid rods and flexible elements allow the robots to deform. This work proposes the use of multiple, modular tensegrity robots that can move and compliantly connect to assemble larger, compliant, lightweight, strong structures and scaffolding. The focus is on proof-of-concept designs for the modular robots themselves and their docking mechanisms, which can allow the easy deployment of structures in unstructured environments. These mechanisms include (electro)magnets that allow each individual robot to connect and disconnect on cue. An exciting direction is designing specific modules and structures to fit the mission at hand. For example, this work highlights how the considered three-bar structures could stack to form a column or deform on one side to create an arch. A critical component of future work will involve the development of algorithms for automatic design and layout of modules in structures.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
Wang, K; Aanjaneya, M; Bekris, K E Sim2Sim Evaluation of a Novel Data-Efficient Differentiable Physics Engine for Tensegrity Robots Inproceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. @inproceedings{sim2sim_tensegrity, title = {Sim2Sim Evaluation of a Novel Data-Efficient Differentiable Physics Engine for Tensegrity Robots}, author = {K Wang and M Aanjaneya and K E Bekris}, year = {2021}, date = {2021-09-27}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, abstract = {Learning policies in simulation is promising for reducing human effort when training robot controllers. This is especially true for soft robots that are more adaptive and safe but also more difficult to accurately model and control. The sim2real gap is the main barrier to successfully transferring policies from simulation to a real robot. System identification can be applied to reduce this gap, but traditional identification methods require a lot of manual tuning. Data-driven alternatives can tune dynamical models directly from data but are often data hungry, which again requires human effort for data collection. This work proposes a data-driven, end-to-end differentiable simulator focused on the exciting but challenging domain of tensegrity robots. To the best of the authors' knowledge, this is the first differentiable physics engine for tensegrity robots that supports cable, contact, and actuation modeling. The aim is to develop a reasonably simplified, data-driven simulation, which can learn approximate dynamics with limited ground truth data. The dynamics must be accurate enough to generate policies that can be transferred back to the ground-truth system. As a first step in this direction, the current work demonstrates sim2sim transfer, where the unknown physical model of MuJoCo acts as the ground truth system. Two different tensegrity robots are used for evaluation and learning of locomotion policies, a 6-bar and a 3-bar tensegrity. The results indicate that only 0.25% of the ground truth data is needed to train a policy that works on the ground-truth system when the differentiable engine is used for training, compared to training the policy directly on the ground-truth system.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Sivaramakrishnan, A; Granados, E; Karten, S; McMahon, T; Bekris, K E Improving Kinodynamic Planners for Vehicular Navigation with Learned Goal-Reaching Controllers Inproceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. BibTeX | Tags: @inproceedings{learned_goal_reaching_controllers, title = {Improving Kinodynamic Planners for Vehicular Navigation with Learned Goal-Reaching Controllers }, author = {A Sivaramakrishnan and E Granados and S Karten and T McMahon and K E Bekris}, year = {2021}, date = {2021-09-27}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } |
Wen, B; Bekris, K E BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models Inproceedings IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. Abstract | Links | BibTeX | Tags: Pose Estimation, Robot Perception @inproceedings{bundletrack, title = {BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models}, author = {B Wen and K E Bekris}, url = {https://arxiv.org/abs/2108.00516}, year = {2021}, date = {2021-09-27}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, abstract = {Tracking the 6D pose of objects in video sequences is important for robot manipulation. Prior efforts, however, often assume that the target object's CAD model, at least at a category level, is available for offline training or during online template matching. This work proposes BundleTrack, a general framework for 6D pose tracking of novel objects, which does not depend upon instance- or category-level 3D models. It leverages the complementary attributes of recent advances in deep learning for segmentation and robust feature extraction, as well as memory-augmented pose-graph optimization for achieving spatiotemporal consistency. This enables long-term, low-drift tracking under various challenging scenarios, including significant occlusions and object motions. Comprehensive experiments on two public benchmarks demonstrate that the proposed approach significantly outperforms state-of-the-art category-level 6D tracking or dynamic-SLAM methods. When compared against state-of-the-art methods that rely on an object-instance CAD model, comparable performance is achieved, despite the proposed method's reduced information requirements. An efficient implementation in CUDA provides real-time performance of 10 Hz for the entire framework.}, keywords = {Pose Estimation, Robot Perception}, pubstate = {published}, tppubtype = {inproceedings} } |
Feng, S; Guo, T; Bekris, K E; Yu, J Team RuBot’s experiences and lessons from the ARIAC Journal Article Robotics and Computer-Integrated Manufacturing, 70 , 2021. Abstract | Links | BibTeX | Tags: Rearrangement @article{siwei_ariac_21, title = {Team RuBot’s experiences and lessons from the ARIAC}, author = {S Feng and T Guo and K E Bekris and J Yu}, editor = {Craig Schlenoff, Zeid Kootbally, Erez Karpas (Special Issue: Agile Robotics for Industrial Applications)}, url = {https://www.sciencedirect.com/science/article/abs/pii/S0736584521000120}, year = {2021}, date = {2021-08-01}, journal = {Robotics and Computer-Integrated Manufacturing}, volume = {70}, abstract = {We share experiences and lessons learned from participating in the annual Agile Robotics for Industrial Automation Competition (ARIAC). ARIAC is a simulation-based competition focused on pushing the agility of robotic systems for handling industrial pick-and-place challenges. Team RuBot has competed since 2019, placing 2nd in ARIAC 2019 and 3rd in ARIAC 2020. The article also discusses the difficulties we faced during the contest and our strategies for tackling them.}, keywords = {Rearrangement}, pubstate = {published}, tppubtype = {article} } |
Morgan, A; Wen, B; Liang, J; Boularias, A; Dollar, A; Bekris, K E Vision-driven Compliant Manipulation for Reliable, High-Precision Assembly Tasks Conference Robotics: Science and Systems, 2021. Abstract | Links | BibTeX | Tags: Manipulation, Robot Perception @conference{MorWen2021, title = {Vision-driven Compliant Manipulation for Reliable, High-Precision Assembly Tasks}, author = {A Morgan and B Wen and J Liang and A Boularias and A Dollar and K E Bekris}, url = {https://arxiv.org/abs/2106.14070}, year = {2021}, date = {2021-07-12}, booktitle = {Robotics: Science and Systems}, abstract = {Highly constrained manipulation tasks continue to be challenging for autonomous robots as they require high levels of precision, typically less than 1 mm, which is often incompatible with what can be achieved by traditional perception systems. This paper demonstrates that the combination of state-of-the-art object tracking with passively adaptive mechanical hardware can be leveraged to complete precision manipulation tasks with tight, industrially-relevant tolerances (0.25 mm). The proposed control method closes the loop through vision by tracking the relative 6D pose of objects in the relevant workspace. It adjusts the control reference of both the compliant manipulator and the hand to complete object insertion tasks via within-hand manipulation. Contrary to previous efforts for insertion, our method does not require expensive force sensors, precision manipulators, or time-consuming online learning, which is data-hungry. Instead, this effort leverages mechanical compliance and utilizes an object-agnostic manipulation model of the hand learned offline, off-the-shelf motion planning, and an RGBD-based object tracker trained solely with synthetic data. These features allow the proposed system to easily generalize and transfer to new tasks and environments. 
This paper describes in detail the system components and showcases its efficacy with extensive experiments involving tight-tolerance peg-in-hole insertion tasks of various geometries as well as open-world constrained placement tasks.}, keywords = {Manipulation, Robot Perception}, pubstate = {published}, tppubtype = {conference} } Highly constrained manipulation tasks continue to be challenging for autonomous robots as they require high levels of precision, typically less than 1 mm, which is often incompatible with what can be achieved by traditional perception systems. This paper demonstrates that the combination of state-of-the-art object tracking with passively adaptive mechanical hardware can be leveraged to complete precision manipulation tasks with tight, industrially-relevant tolerances (0.25 mm). The proposed control method closes the loop through vision by tracking the relative 6D pose of objects in the relevant workspace. It adjusts the control reference of both the compliant manipulator and the hand to complete object insertion tasks via within-hand manipulation. Contrary to previous efforts for insertion, our method does not require expensive force sensors, precision manipulators, or time-consuming online learning, which is data-hungry. Instead, this effort leverages mechanical compliance and utilizes an object-agnostic manipulation model of the hand learned offline, off-the-shelf motion planning, and an RGBD-based object tracker trained solely with synthetic data. These features allow the proposed system to easily generalize and transfer to new tasks and environments. This paper describes in detail the system components and showcases its efficacy with extensive experiments involving tight-tolerance peg-in-hole insertion tasks of various geometries as well as open-world constrained placement tasks. |
Wang, R; Gao, K; Nakhimovich, D; Yu, J; Bekris, K E Uniform Object Rearrangement: From Complete Monotone Primitives to Efficient Non-Monotone Informed Search Inproceedings International Conference on Robotics and Automation (ICRA) 2021, 2021. BibTeX | Tags: Rearrangement @inproceedings{WangGNYB21_uniform_rearrangement, title = {Uniform Object Rearrangement: From Complete Monotone Primitives to Efficient Non-Monotone Informed Search}, author = {R Wang and K Gao and D Nakhimovich and J Yu and K E Bekris}, year = {2021}, date = {2021-05-30}, booktitle = {International Conference on Robotics and Automation (ICRA) 2021}, keywords = {Rearrangement}, pubstate = {published}, tppubtype = {inproceedings} } |
Wang, R; Nakhimovich, D; Roberts, F; Bekris, K E Robotics as an Enabler of Resiliency to Disasters: Promises and Pitfalls Book Chapter Roberts, Fred S; Sheremet, Igor A (Ed.): 12660 , pp. 75-101, Springer, 2021. Abstract | Links | BibTeX | Tags: Sociotechnological Systems @inbook{WangNRB21_Resiliency, title = {Robotics as an Enabler of Resiliency to Disasters: Promises and Pitfalls}, author = {R Wang and D Nakhimovich and F Roberts and K E Bekris}, editor = {Fred S. Roberts and Igor A. Sheremet}, url = {http://www.cs.rutgers.edu/~kb572/pubs/Robotics_Enabler_Resiliency_Disasters.pdf}, year = {2021}, date = {2021-03-03}, volume = {12660}, pages = {75-101}, publisher = {Springer}, series = { Lecture Notes in Computer Science}, abstract = {The Covid-19 pandemic is a reminder that modern society is still susceptible to multiple types of natural or man-made disasters, which motivates the need to improve resiliency through technological advancement. This article focuses on robotics and the role it can play towards providing resiliency to disasters. The progress in this domain brings the promise of effectively deploying robots in response to life-threatening disasters, which includes highly unstructured setups and hazardous spaces inaccessible or harmful to humans. This article discusses the maturity of robotics technology and explores the needed advances that will allow robots to become more capable and robust in disaster response measures. It also explores how robots can help in making human and natural environments preemptively more resilient without compromising long-term prospects for economic development. Despite its promise, there are also concerns that arise from the deployment of robots. 
Those discussed relate to safety considerations, privacy infringement, cyber-security, and financial aspects, such as the cost of development and maintenance as well as impact on employment.}, keywords = {Sociotechnological Systems}, pubstate = {published}, tppubtype = {inbook} } The Covid-19 pandemic is a reminder that modern society is still susceptible to multiple types of natural or man-made disasters, which motivates the need to improve resiliency through technological advancement. This article focuses on robotics and the role it can play towards providing resiliency to disasters. The progress in this domain brings the promise of effectively deploying robots in response to life-threatening disasters, which includes highly unstructured setups and hazardous spaces inaccessible or harmful to humans. This article discusses the maturity of robotics technology and explores the needed advances that will allow robots to become more capable and robust in disaster response measures. It also explores how robots can help in making human and natural environments preemptively more resilient without compromising long-term prospects for economic development. Despite its promise, there are also concerns that arise from the deployment of robots. Those discussed relate to safety considerations, privacy infringement, cyber-security, and financial aspects, such as the cost of development and maintenance as well as impact on employment. |
Shome, R; Solovey, K; Yu, J; Bekris, K E; Halperin, D Fast, High-Quality Two-Arm Rearrangement in Synchronous, Monotone Tabletop Setups Journal Article IEEE Transactions on Automation Science and Engineering, 2021. Abstract | BibTeX | Tags: Rearrangement @article{shome_tase_two_arm_rearrange, title = {Fast, High-Quality Two-Arm Rearrangement in Synchronous, Monotone Tabletop Setups}, author = {R Shome and K Solovey and J Yu and K E Bekris and D Halperin}, year = {2021}, date = {2021-03-01}, journal = {IEEE Transactions on Automation Science and Engineering}, abstract = {Rearranging objects on a planar surface arises in a variety of robotic applications, such as product packaging. Using two arms can improve efficiency but introduces new computational challenges. This paper studies the problem structure of object rearrangement using two arms in synchronous, monotone tabletop setups and develops an optimal mixed integer model. It then describes an efficient and scalable algorithm, which first minimizes the cost of object transfers and then of moves between objects. This is motivated by the fact that, asymptotically, object transfers dominate the cost of solutions. Moreover, a lazy strategy minimizes the number of motion planning calls and results in significant speedups. Theoretical arguments support the benefits of using two arms and indicate that synchronous execution, in which the two arms perform together either transfers or moves, introduces only a small overhead. Experiments support these claims and show that the scalable method can quickly compute solutions close to the optimal for the considered setup.}, keywords = {Rearrangement}, pubstate = {published}, tppubtype = {article} } Rearranging objects on a planar surface arises in a variety of robotic applications, such as product packaging. Using two arms can improve efficiency but introduces new computational challenges. 
This paper studies the problem structure of object rearrangement using two arms in synchronous, monotone tabletop setups and develops an optimal mixed integer model. It then describes an efficient and scalable algorithm, which first minimizes the cost of object transfers and then of moves between objects. This is motivated by the fact that, asymptotically, object transfers dominate the cost of solutions. Moreover, a lazy strategy minimizes the number of motion planning calls and results in significant speedups. Theoretical arguments support the benefits of using two arms and indicate that synchronous execution, in which the two arms perform together either transfers or moves, introduces only a small overhead. Experiments support these claims and show that the scalable method can quickly compute solutions close to the optimal for the considered setup. |
2020 |
Mitash, C; Shome, R; Wen, B; Boularias, A; Bekris, K E Task-driven Perception and Manipulation for Constrained Placement of Unknown Objects Journal Article IEEE Robotics and Automation Letters (RA-L) (also appearing at IEEE/RSJ IROS 2020), 2020. Abstract | Links | BibTeX | Tags: @article{231, title = {Task-driven Perception and Manipulation for Constrained Placement of Unknown Objects}, author = {C Mitash and R Shome and B Wen and A Boularias and K E Bekris}, url = {https://robotics.cs.rutgers.edu/task-driven-perception/}, year = {2020}, date = {2020-10-27}, journal = {IEEE Robotics and Automation Letters (RA-L) (also appearing at IEEE/RSJ IROS 2020)}, abstract = {Recent progress in robotic manipulation has dealt with the case of no prior object models in the context of relatively simple tasks, such as bin-picking. Existing methods for more constrained problems, however, such as deliberate placement in a tight region, depend more critically on shape information to achieve safe execution. This work introduces a possibilistic object representation for solving constrained placement tasks without shape priors. A perception method is proposed to track and update the object representation during motion execution, which respects physical and geometric constraints. The method operates directly over sensor data, modeling the seen and unseen parts of the object given observations. It results in a dynamically updated conservative representation, which can be used to plan safe manipulation actions. This task-driven perception process is integrated with a manipulation task planning architecture for a dual-arm manipulator to discover efficient solutions for the constrained placement task with minimal sensing. The planning process can make use of handoff operations when necessary for safe placement given the conservative representation. 
The pipeline is evaluated with data from over 240 real-world experiments involving constrained placement of various unknown objects using a dual-arm manipulator. While straightforward pick-sense-and-place architectures frequently fail to solve these problems, the proposed integrated pipeline achieves more than 95% success and faster execution times.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Recent progress in robotic manipulation has dealt with the case of no prior object models in the context of relatively simple tasks, such as bin-picking. Existing methods for more constrained problems, however, such as deliberate placement in a tight region, depend more critically on shape information to achieve safe execution. This work introduces a possibilistic object representation for solving constrained placement tasks without shape priors. A perception method is proposed to track and update the object representation during motion execution, which respects physical and geometric constraints. The method operates directly over sensor data, modeling the seen and unseen parts of the object given observations. It results in a dynamically updated conservative representation, which can be used to plan safe manipulation actions. This task-driven perception process is integrated with a manipulation task planning architecture for a dual-arm manipulator to discover efficient solutions for the constrained placement task with minimal sensing. The planning process can make use of handoff operations when necessary for safe placement given the conservative representation. The pipeline is evaluated with data from over 240 real-world experiments involving constrained placement of various unknown objects using a dual-arm manipulator. While straightforward pick-sense-and-place architectures frequently fail to solve these problems, the proposed integrated pipeline achieves more than 95% success and faster execution times. |
Wen, B; Mitash, C; Ren, B; Bekris, K E se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains Conference IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, 2020. Abstract | Links | BibTeX | Tags: @conference{232, title = {se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains}, author = {B Wen and C Mitash and B Ren and K E Bekris}, url = {http://arxiv.org/abs/2007.13866}, year = {2020}, date = {2020-10-26}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, address = {Las Vegas, NV}, abstract = {Tracking the 6D pose of objects in video sequences is important for robot manipulation. This task, however, introduces multiple challenges: (i) robot manipulation involves significant occlusions; (ii) data and annotations are troublesome and difficult to collect for 6D poses, which complicates machine learning solutions, and (iii) incremental error drift often accumulates in long-term tracking to necessitate re-initialization of the object’s pose. This work proposes a data-driven optimization approach for long-term, 6D pose tracking. It aims to identify the optimal relative pose given the current RGB-D observation and a synthetic image conditioned on the previous best estimate and the object’s model. The key contribution in this context is a novel neural network architecture, which appropriately disentangles the feature encoding to help reduce domain shift, and an effective 3D orientation representation via Lie Algebra. Consequently, even when the network is trained only with synthetic data, it can work effectively over real images. 
Comprehensive experiments over benchmarks - existing ones as well as a new dataset with significant occlusions related to object manipulation - show that the proposed approach achieves consistently robust estimates and outperforms alternatives, even though they have been trained with real images. The approach is also the most computationally efficient among the alternatives and achieves a tracking frequency of 90.9Hz.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Tracking the 6D pose of objects in video sequences is important for robot manipulation. This task, however, introduces multiple challenges: (i) robot manipulation involves significant occlusions; (ii) data and annotations are troublesome and difficult to collect for 6D poses, which complicates machine learning solutions, and (iii) incremental error drift often accumulates in long-term tracking to necessitate re-initialization of the object’s pose. This work proposes a data-driven optimization approach for long-term, 6D pose tracking. It aims to identify the optimal relative pose given the current RGB-D observation and a synthetic image conditioned on the previous best estimate and the object’s model. The key contribution in this context is a novel neural network architecture, which appropriately disentangles the feature encoding to help reduce domain shift, and an effective 3D orientation representation via Lie Algebra. Consequently, even when the network is trained only with synthetic data, it can work effectively over real images. Comprehensive experiments over benchmarks - existing ones as well as a new dataset with significant occlusions related to object manipulation - show that the proposed approach achieves consistently robust estimates and outperforms alternatives, even though they have been trained with real images. The approach is also the most computationally efficient among the alternatives and achieves a tracking frequency of 90.9Hz. |
Wang, R; Mitash, C; Lu, S; Boehm, D; Bekris, K E Safe and Effective Picking Paths in Clutter given Discrete Distributions of Object Poses Conference IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, 2020. Abstract | Links | BibTeX | Tags: @conference{233, title = {Safe and Effective Picking Paths in Clutter given Discrete Distributions of Object Poses}, author = {R Wang and C Mitash and S Lu and D Boehm and K E Bekris}, url = {https://arxiv.org/abs/2008.04465}, year = {2020}, date = {2020-10-25}, booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, address = {Las Vegas, NV}, abstract = {Picking an item in the presence of other objects can be challenging as it involves occlusions and partial views. Given object models, one approach is to perform object pose estimation and use the most likely candidate pose per object to pick the target without collisions. This approach, however, ignores the uncertainty of the perception process both regarding the target’s and the surrounding objects’ poses. This work first proposes a perception process for 6D pose estimation, which returns a discrete distribution of object poses in a scene. Then, an open-loop planning pipeline is proposed to return safe and effective solutions for moving a robotic arm to pick, which (a) minimizes the probability of collision with the obstructing objects; and (b) maximizes the probability of reaching the target item. The planning framework models the challenge as a stochastic variant of the Minimum Constraint Removal (MCR) problem. The effectiveness of the methodology is verified given both simulated and real data in different scenarios. The experiments demonstrate the importance of considering the uncertainty of the perception process in terms of safe execution. 
The results also show that the methodology is more effective than conservative MCR approaches, which avoid all possible object poses regardless of the reported uncertainty.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Picking an item in the presence of other objects can be challenging as it involves occlusions and partial views. Given object models, one approach is to perform object pose estimation and use the most likely candidate pose per object to pick the target without collisions. This approach, however, ignores the uncertainty of the perception process both regarding the target’s and the surrounding objects’ poses. This work first proposes a perception process for 6D pose estimation, which returns a discrete distribution of object poses in a scene. Then, an open-loop planning pipeline is proposed to return safe and effective solutions for moving a robotic arm to pick, which (a) minimizes the probability of collision with the obstructing objects; and (b) maximizes the probability of reaching the target item. The planning framework models the challenge as a stochastic variant of the Minimum Constraint Removal (MCR) problem. The effectiveness of the methodology is verified given both simulated and real data in different scenarios. The experiments demonstrate the importance of considering the uncertainty of the perception process in terms of safe execution. The results also show that the methodology is more effective than conservative MCR approaches, which avoid all possible object poses regardless of the reported uncertainty. |
Mitash, C Scalable, Physics-aware 6D Pose Estimation for Robot Manipulation PhD Thesis Rutgers University, 2020. Abstract | Links | BibTeX | Tags: Pose Estimation @phdthesis{Mitash:2020aa, title = {Scalable, Physics-aware 6D Pose Estimation for Robot Manipulation}, author = {C Mitash}, url = {http://www.cs.rutgers.edu/~kb572/pubs/thesis_chaitanya_mitash_final.pdf}, year = {2020}, date = {2020-09-30}, school = {Rutgers University}, abstract = {Robot manipulation often depends on some form of pose estimation to represent the state of the world and allow decision making both at the task-level and for motion or grasp planning. Recent progress in deep learning gives hope for a pose estimation solution that could generalize over textured and texture-less objects, objects with or without distinctive shape properties, and under different lighting conditions and clutter scenarios. Nevertheless, it gives rise to a new set of challenges such as the painful task of acquiring large-scale labeled training datasets and of dealing with their stochastic output over unforeseen scenarios that are not captured by the training. This restricts the scalability of such pose estimation solutions in robot manipulation tasks that often deal with a variety of objects and changing environments. The thesis first describes an automatic data generation and learning framework to address the scalability challenge. Learning is bootstrapped by generating labeled data via physics simulation and rendering. Then it self-improves over time by acquiring and labeling real-world images via a search-based pose estimation process. The thesis proposes algorithms to generate and validate object poses online based on the objects’ geometry and based on the physical consistency of their scene-level interactions. These algorithms provide robustness even when there exists a domain gap between the synthetic training and the real test scenarios. 
Finally, the thesis proposes a manipulation planning framework that goes beyond model-based pose estimation. By utilizing a dynamic object representation, this integrated perception and manipulation framework can efficiently solve the task of picking unknown objects and placing them in a constrained space. The algorithms are evaluated over real-world robot manipulation experiments and over large-scale public datasets. The results indicate the usefulness of physical constraints in both the training and the online estimation phase. Moreover, the proposed framework, while utilizing only simulated data, can obtain robust estimation in challenging scenarios such as densely-packed bins and clutter where other approaches suffer as a result of large occlusion and ambiguities due to similar-looking texture-less surfaces.}, keywords = {Pose Estimation}, pubstate = {published}, tppubtype = {phdthesis} } Robot manipulation often depends on some form of pose estimation to represent the state of the world and allow decision making both at the task-level and for motion or grasp planning. Recent progress in deep learning gives hope for a pose estimation solution that could generalize over textured and texture-less objects, objects with or without distinctive shape properties, and under different lighting conditions and clutter scenarios. Nevertheless, it gives rise to a new set of challenges such as the painful task of acquiring large-scale labeled training datasets and of dealing with their stochastic output over unforeseen scenarios that are not captured by the training. This restricts the scalability of such pose estimation solutions in robot manipulation tasks that often deal with a variety of objects and changing environments. The thesis first describes an automatic data generation and learning framework to address the scalability challenge. Learning is bootstrapped by generating labeled data via physics simulation and rendering. 
Then it self-improves over time by acquiring and labeling real-world images via a search-based pose estimation process. The thesis proposes algorithms to generate and validate object poses online based on the objects’ geometry and based on the physical consistency of their scene-level interactions. These algorithms provide robustness even when there exists a domain gap between the synthetic training and the real test scenarios. Finally, the thesis proposes a manipulation planning framework that goes beyond model-based pose estimation. By utilizing a dynamic object representation, this integrated perception and manipulation framework can efficiently solve the task of picking unknown objects and placing them in a constrained space. The algorithms are evaluated over real-world robot manipulation experiments and over large-scale public datasets. The results indicate the usefulness of physical constraints in both the training and the online estimation phase. Moreover, the proposed framework, while utilizing only simulated data, can obtain robust estimation in challenging scenarios such as densely-packed bins and clutter where other approaches suffer as a result of large occlusion and ambiguities due to similar-looking texture-less surfaces. |
Shome, R; Bekris, K E Synchronized Multi-Arm Rearrangement Guided by Mode Graphs with Capacity Constraints Conference Workshop on the Algorithmic Foundations of Robotics (WAFR), Oulu, Finland, 2020. Abstract | Links | BibTeX | Tags: @conference{229, title = {Synchronized Multi-Arm Rearrangement Guided by Mode Graphs with Capacity Constraints}, author = {R Shome and K E Bekris}, url = {http://www.cs.rutgers.edu/~kb572/pubs/multi_arm_rearrangement_capacity_constraints.pdf}, year = {2020}, date = {2020-06-16}, booktitle = {Workshop on the Algorithmic Foundations of Robotics (WAFR)}, address = {Oulu, Finland}, abstract = {Solving task planning problems involving multiple objects and multiple robotic arms poses scalability challenges. Such problems involve not only coordinating multiple high-DoF arms, but also searching through possible sequences of actions including object placements, and handoffs. The current work identifies a useful connection between multi-arm rearrangement and recent results in multi-body path planning on graphs with vertex capacity constraints. Solving a synchronized multi-arm rearrangement at a high-level involves reasoning over a modal graph, where nodes correspond to stable object placements and object transfer states by the arms. Edges of this graph correspond to pick, placement and handoff operations. The objects can be viewed as pebbles moving over this graph, which has capacity constraints. For instance, each arm can carry a single object but placement locations can accumulate many objects. Efficient integer linear programming-based solvers have been proposed for the corresponding pebble problem. The current work proposes a heuristic to guide the task planning process for synchronized multi-arm rearrangement. 
Results indicate good scalability to multiple arms and objects, and an algorithm that can find high-quality solutions fast while exhibiting desirable anytime behavior.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Solving task planning problems involving multiple objects and multiple robotic arms poses scalability challenges. Such problems involve not only coordinating multiple high-DoF arms, but also searching through possible sequences of actions including object placements, and handoffs. The current work identifies a useful connection between multi-arm rearrangement and recent results in multi-body path planning on graphs with vertex capacity constraints. Solving a synchronized multi-arm rearrangement at a high-level involves reasoning over a modal graph, where nodes correspond to stable object placements and object transfer states by the arms. Edges of this graph correspond to pick, placement and handoff operations. The objects can be viewed as pebbles moving over this graph, which has capacity constraints. For instance, each arm can carry a single object but placement locations can accumulate many objects. Efficient integer linear programming-based solvers have been proposed for the corresponding pebble problem. The current work proposes a heuristic to guide the task planning process for synchronized multi-arm rearrangement. Results indicate good scalability to multiple arms and objects, and an algorithm that can find high-quality solutions fast while exhibiting desirable anytime behavior. |
Shome, R; Nakhimovich, D; Bekris, K E Pushing the Boundaries of Asymptotic Optimality in Integrated Task and Motion Planning Conference Workshop on the Algorithmic Foundations of Robotics (WAFR), Oulu, Finland, 2020. Abstract | Links | BibTeX | Tags: @conference{225, title = {Pushing the Boundaries of Asymptotic Optimality in Integrated Task and Motion Planning}, author = {R Shome and D Nakhimovich and K E Bekris}, url = {http://www.cs.rutgers.edu/~kb572/pubs/asymptotic_optimality_task_motion_planning.pdf}, year = {2020}, date = {2020-06-15}, booktitle = {Workshop on the Algorithmic Foundations of Robotics (WAFR)}, address = {Oulu, Finland}, abstract = {Integrated task and motion planning problems describe a multi-modal state space, which is often abstracted as a set of smooth manifolds that are connected via sets of transition states. One approach to solving such problems is to sample reachable states in each of the manifolds, while simultaneously sampling transition states. Prior work has shown that in order to achieve asymptotically optimal (AO) solutions for such piecewise-smooth task planning problems, it is sufficient to double the connection radius required for AO sampling-based motion planning. This was shown under the assumption that the transition sets themselves are smooth. The current work builds upon this result and demonstrates that it is sufficient to use the same connection radius as for standard AO motion planning. Furthermore, the current work studies the case that the transition sets are non-smooth boundary points of the valid state space, which is frequently the case in practice, such as when a gripper grasps an object. This paper generalizes the notion of clearance that is typically assumed in motion and task planning to include such individual, potentially non-smooth transition states. 
It is shown that asymptotic optimality is retained under this generalized regime.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Integrated task and motion planning problems describe a multi-modal state space, which is often abstracted as a set of smooth manifolds that are connected via sets of transition states. One approach to solving such problems is to sample reachable states in each of the manifolds, while simultaneously sampling transition states. Prior work has shown that in order to achieve asymptotically optimal (AO) solutions for such piecewise-smooth task planning problems, it is sufficient to double the connection radius required for AO sampling-based motion planning. This was shown under the assumption that the transition sets themselves are smooth. The current work builds upon this result and demonstrates that it is sufficient to use the same connection radius as for standard AO motion planning. Furthermore, the current work studies the case that the transition sets are non-smooth boundary points of the valid state space, which is frequently the case in practice, such as when a gripper grasps an object. This paper generalizes the notion of clearance that is typically assumed in motion and task planning to include such individual, potentially non-smooth transition states. It is shown that asymptotic optimality is retained under this generalized regime. |
Sintov, A; Kimmel, A; Wen, B; Boularias, A; Bekris, K E Tools for Data-driven Modeling of Within-Hand Manipulation with Underactuated Adaptive Hands Conference Learning for Dynamics & Control (L4DC), Berkeley, CA, 2020. BibTeX | Tags: @conference{224, title = {Tools for Data-driven Modeling of Within-Hand Manipulation with Underactuated Adaptive Hands}, author = {A Sintov and A Kimmel and B Wen and A Boularias and K E Bekris}, year = {2020}, date = {2020-06-12}, booktitle = {Learning for Dynamics & Control (L4DC)}, address = {Berkeley, CA}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
Wang, K; Aanjaneya, M; Bekris, K E A First Principles Approach for Data-Efficient System Identification of Spring-Rod Systems via Differentiable Physics Engines Conference Learning for Dynamics & Control (L4DC), Berkeley, CA, 2020. Links | BibTeX | Tags: @conference{223, title = {A First Principles Approach for Data-Efficient System Identification of Spring-Rod Systems via Differentiable Physics Engines}, author = {K Wang and M Aanjaneya and K E Bekris}, url = {https://arxiv.org/abs/2004.13859}, year = {2020}, date = {2020-06-11}, booktitle = {Learning for Dynamics & Control (L4DC)}, address = {Berkeley, CA}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
Sintov, A; Kimmel, A; Bekris, K E; Boularias, A Motion Planning with Competency-Aware Transition Models for Underactuated Adaptive Hands Conference IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020. BibTeX | Tags: @conference{221, title = {Motion Planning with Competency-Aware Transition Models for Underactuated Adaptive Hands}, author = {A Sintov and A Kimmel and K E Bekris and A Boularias}, year = {2020}, date = {2020-06-01}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, address = {Paris, France}, keywords = {}, pubstate = {published}, tppubtype = {conference} } |
Kleinbort, M; Solovey, K; Bonalli, R; Granados, E; Bekris, K E; Halperin, D Refined Analysis of Asymptotically-Optimal Kinodynamic Planning in the State-Cost Space Conference IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020. Abstract | Links | BibTeX | Tags: @conference{222, title = {Refined Analysis of Asymptotically-Optimal Kinodynamic Planning in the State-Cost Space}, author = {M Kleinbort and K Solovey and R Bonalli and E Granados and K E Bekris and D Halperin}, url = {https://arxiv.org/abs/1909.05569}, year = {2020}, date = {2020-06-01}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, address = {Paris, France}, abstract = {We present a novel analysis of AO-RRT: a tree-based planner for motion planning with kinodynamic constraints, originally described by Hauser and Zhou (AO-X, 2016). AO-RRT explores the state-cost space and has been shown to efficiently obtain high-quality solutions in practice without relying on the availability of a computationally-intensive two-point boundary-value solver. Our main contribution is an optimality proof for the single-tree version of the algorithm---a variant that was not analyzed before. Our proof only requires a mild and easily-verifiable set of assumptions on the problem and system: Lipschitz-continuity of the cost function and the dynamics. In particular, we prove that for any system satisfying these assumptions, any trajectory having a piecewise-constant control function and positive clearance from the obstacles can be approximated arbitrarily well by a trajectory found by AO-RRT. We also discuss practical aspects of AO-RRT and present experimental comparisons of variants of the algorithm. }, keywords = {}, pubstate = {published}, tppubtype = {conference} } We present a novel analysis of AO-RRT: a tree-based planner for motion planning with kinodynamic constraints, originally described by Hauser and Zhou (AO-X, 2016). 
AO-RRT explores the state-cost space and has been shown to efficiently obtain high-quality solutions in practice without relying on the availability of a computationally-intensive two-point boundary-value solver. Our main contribution is an optimality proof for the single-tree version of the algorithm---a variant that was not analyzed before. Our proof only requires a mild and easily-verifiable set of assumptions on the problem and system: Lipschitz-continuity of the cost function and the dynamics. In particular, we prove that for any system satisfying these assumptions, any trajectory having a piecewise-constant control function and positive clearance from the obstacles can be approximated arbitrarily well by a trajectory found by AO-RRT. We also discuss practical aspects of AO-RRT and present experimental comparisons of variants of the algorithm. |
Wen, B; Mitash, C; Soorian, S; Kimmel, A; Sintov, A; Bekris, K E Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands Conference IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 2020. Abstract | Links | BibTeX | Tags: @conference{220, title = {Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands}, author = {B Wen and C Mitash and S Soorian and A Kimmel and A Sintov and K E Bekris}, url = {https://arxiv.org/abs/2003.03518}, year = {2020}, date = {2020-06-01}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, address = {Paris, France}, abstract = {Many manipulation tasks, such as placement or within-hand manipulation, require the object’s pose relative to a robot hand. The task is difficult when the hand significantly occludes the object. It is especially hard for adaptive hands, for which it is not easy to detect the finger’s configuration. In addition, RGB-only approaches face issues with texture-less objects or when the hand and the object look similar. This paper presents a depth-based framework, which aims for robust pose estimation and short response times. The approach detects the adaptive hand’s state via efficient parallel search given the highest overlap between the hand’s model and the point cloud. The hand’s point cloud is pruned and robust global registration is performed to generate object pose hypotheses, which are clustered. False hypotheses are pruned via physical reasoning. The remaining poses’ quality is evaluated given agreement with observed data. Extensive evaluation on synthetic and real data demonstrates the accuracy and computational efficiency of the framework when applied on challenging, highly-occluded scenarios for different object types. An ablation study identifies how the framework’s components help in performance. This work also provides a dataset for in-hand 6D object pose estimation. Code and dataset are available at: https://github.com/wenbowen123/icra20-hand-object-pose}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Many manipulation tasks, such as placement or within-hand manipulation, require the object’s pose relative to a robot hand. The task is difficult when the hand significantly occludes the object. It is especially hard for adaptive hands, for which it is not easy to detect the finger’s configuration. In addition, RGB-only approaches face issues with texture-less objects or when the hand and the object look similar. This paper presents a depth-based framework, which aims for robust pose estimation and short response times. The approach detects the adaptive hand’s state via efficient parallel search given the highest overlap between the hand’s model and the point cloud. The hand’s point cloud is pruned and robust global registration is performed to generate object pose hypotheses, which are clustered. False hypotheses are pruned via physical reasoning. The remaining poses’ quality is evaluated given agreement with observed data. Extensive evaluation on synthetic and real data demonstrates the accuracy and computational efficiency of the framework when applied on challenging, highly-occluded scenarios for different object types. An ablation study identifies how the framework’s components help in performance. This work also provides a dataset for in-hand 6D object pose estimation. Code and dataset are available at: https://github.com/wenbowen123/icra20-hand-object-pose |
Littlefield, Z Efficient and Asymptotically Optimal Kinodynamic Motion Planning PhD Thesis Rutgers, the State University of New Jersey, 2020. Abstract | Links | BibTeX | Tags: @phdthesis{230, title = {Efficient and Asymptotically Optimal Kinodynamic Motion Planning}, author = {Z Littlefield}, url = {http://www.cs.rutgers.edu/~kb572/pubs/LittlefieldThesisMay2020.pdf}, year = {2020}, date = {2020-05-01}, volume = {PhD}, school = {Rutgers, the State University of New Jersey}, abstract = {This dissertation explores properties of motion planners that build tree data structures in a robot's state space. Sampling-based tree planners are especially useful for planning for systems with significant dynamics, due to the inherent forward search that is performed. This is in contrast to roadmap planners that require a steering local planner in order to make a graph containing multiple possible paths. This dissertation explores a family of motion planners for systems with significant dynamics, where a steering local planner may be computationally expensive or may not exist. These planners focus on providing practical path quality guarantees without prohibitive computational costs. These planners can be considered successors of each other, in that each subsequent algorithm addresses some drawback of its predecessor. The first algorithm, Sparse-RRT, addresses a drawback of the RRT method by considering path quality during the tree construction process. Sparse-RRT is proven to be probabilistically complete under mild conditions for the first time here, albeit with a poor convergence rate. The second algorithm presented, SST, provides probabilistic completeness and asymptotic near-optimality properties that are provable, but at the cost of additional algorithmic overhead. SST is shown to improve the convergence rate compared to Sparse-RRT. The third algorithm, DIRT, incorporates lessons learned from these two algorithms and their shortcomings, adds task space heuristics to further improve runtime performance, and simplifies the parameters to more user-friendly ones. DIRT is also shown to be probabilistically complete and asymptotically near-optimal. Application areas explored using this family of algorithms include evaluation of distance functions for planning in belief space, manipulation in cluttered environments, and locomotion planning for an icosahedral tensegrity-based rover prototype that requires a physics engine to simulate its motions.}, keywords = {}, pubstate = {published}, tppubtype = {phdthesis} } This dissertation explores properties of motion planners that build tree data structures in a robot's state space. Sampling-based tree planners are especially useful for planning for systems with significant dynamics, due to the inherent forward search that is performed. This is in contrast to roadmap planners that require a steering local planner in order to make a graph containing multiple possible paths. This dissertation explores a family of motion planners for systems with significant dynamics, where a steering local planner may be computationally expensive or may not exist. These planners focus on providing practical path quality guarantees without prohibitive computational costs. These planners can be considered successors of each other, in that each subsequent algorithm addresses some drawback of its predecessor. The first algorithm, Sparse-RRT, addresses a drawback of the RRT method by considering path quality during the tree construction process. Sparse-RRT is proven to be probabilistically complete under mild conditions for the first time here, albeit with a poor convergence rate. The second algorithm presented, SST, provides probabilistic completeness and asymptotic near-optimality properties that are provable, but at the cost of additional algorithmic overhead. SST is shown to improve the convergence rate compared to Sparse-RRT. The third algorithm, DIRT, incorporates lessons learned from these two algorithms and their shortcomings, adds task space heuristics to further improve runtime performance, and simplifies the parameters to more user-friendly ones. DIRT is also shown to be probabilistically complete and asymptotically near-optimal. Application areas explored using this family of algorithms include evaluation of distance functions for planning in belief space, manipulation in cluttered environments, and locomotion planning for an icosahedral tensegrity-based rover prototype that requires a physics engine to simulate its motions.
Shome, R The Problem of Many: Efficient Multi-arm, Multi-Object Task and Motion Planning with Optimality Guarantees PhD Thesis Rutgers University, 2020. Links | BibTeX | Tags: @phdthesis{226, title = {The Problem of Many: Efficient Multi-arm, Multi-Object Task and Motion Planning with Optimality Guarantees}, author = {R Shome}, url = {http://www.cs.rutgers.edu/~kb572/pubs/rahul_shome_thesis.pdf}, year = {2020}, date = {2020-04-01}, volume = {PhD}, address = {New Brunswick, NJ}, school = {Rutgers University}, keywords = {}, pubstate = {published}, tppubtype = {phdthesis} } |
Alikhani, M; Khalid, B; Shome, R; Mitash, C; Bekris, K E; Stone, M That and There: Judging the Intent of Pointing Actions with Robotic Arms Conference Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), New York, NY, 2020. Abstract | Links | BibTeX | Tags: @conference{218, title = {That and There: Judging the Intent of Pointing Actions with Robotic Arms}, author = {M Alikhani and B Khalid and R Shome and C Mitash and K E Bekris and M Stone}, url = {https://arxiv.org/abs/1912.06602}, year = {2020}, date = {2020-02-01}, booktitle = {Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)}, journal = {Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)}, address = {New York, NY}, abstract = {Collaborative robotics requires effective communication between a robot and a human partner. This work proposes a set of interpretive principles for how a robotic arm can use pointing actions to communicate task information to people by extending existing models from the related literature. These principles are evaluated through studies where English-speaking human subjects view animations of simulated robots instructing pick-and-place tasks. The evaluation distinguishes two classes of pointing actions that arise in pick-and-place tasks: referential pointing (identifying objects) and spatial pointing (identifying locations). The study indicates that human subjects show greater flexibility in interpreting the intent of referential pointing compared to spatial pointing, which needs to be more deliberate. The results also demonstrate the effects of variation in the environment and task context on the interpretation of pointing. 
The corpus and the experiments described in this work can impact models of context and coordination as well as the effect of common sense reasoning in human-robot interactions.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Collaborative robotics requires effective communication between a robot and a human partner. This work proposes a set of interpretive principles for how a robotic arm can use pointing actions to communicate task information to people by extending existing models from the related literature. These principles are evaluated through studies where English-speaking human subjects view animations of simulated robots instructing pick-and-place tasks. The evaluation distinguishes two classes of pointing actions that arise in pick-and-place tasks: referential pointing (identifying objects) and spatial pointing (identifying locations). The study indicates that human subjects show greater flexibility in interpreting the intent of referential pointing compared to spatial pointing, which needs to be more deliberate. The results also demonstrate the effects of variation in the environment and task context on the interpretation of pointing. The corpus and the experiments described in this work can impact models of context and coordination as well as the effect of common sense reasoning in human-robot interactions. |
Shome, R; Solovey, K; Dobson, A; Halperin, D; Bekris, K E dRRT*: Scalable and Informed Asymptotically-Optimal Multi-Robot Motion Planning Journal Article Autonomous Robots, 2020. Abstract | Links | BibTeX | Tags: Multi-Robot, Planning @article{204, title = {dRRT*: Scalable and Informed Asymptotically-Optimal Multi-Robot Motion Planning}, author = {R Shome and K Solovey and A Dobson and D Halperin and K E Bekris}, url = {https://www.cs.rutgers.edu/~kb572/pubs/drrt_star_auro.pdf}, year = {2020}, date = {2020-01-24}, journal = {Autonomous Robots}, abstract = {Many exciting robotic applications require multiple robots with many degrees of freedom, such as manipulators, to coordinate their motion in a shared workspace. Discovering high-quality paths in such scenarios can be achieved, in principle, by exploring the composite space of all robots. Sampling-based planners do so by building a roadmap or a tree data structure in the corresponding configuration space and can achieve asymptotic optimality. The hardness of motion planning, however, renders the explicit construction of such structures in the composite space of multiple robots impractical. This work proposes a scalable solution for such coupled multi-robot problems, which provides desirable path-quality guarantees and is also computationally efficient. In particular, the proposed dRRT* is an informed, asymptotically-optimal extension of a prior sampling-based multi-robot motion planner, dRRT. The prior approach introduced the idea of building roadmaps for each robot and implicitly searching the tensor product of these structures in the composite space. 
This work identifies the conditions for convergence to optimal paths in multi-robot problems, which the prior method did not achieve.}, keywords = {Multi-Robot, Planning}, pubstate = {published}, tppubtype = {article} } Many exciting robotic applications require multiple robots with many degrees of freedom, such as manipulators, to coordinate their motion in a shared workspace. Discovering high-quality paths in such scenarios can be achieved, in principle, by exploring the composite space of all robots. Sampling-based planners do so by building a roadmap or a tree data structure in the corresponding configuration space and can achieve asymptotic optimality. The hardness of motion planning, however, renders the explicit construction of such structures in the composite space of multiple robots impractical. This work proposes a scalable solution for such coupled multi-robot problems, which provides desirable path-quality guarantees and is also computationally efficient. In particular, the proposed dRRT* is an informed, asymptotically-optimal extension of a prior sampling-based multi-robot motion planner, dRRT. The prior approach introduced the idea of building roadmaps for each robot and implicitly searching the tensor product of these structures in the composite space. This work identifies the conditions for convergence to optimal paths in multi-robot problems, which the prior method did not achieve. |
Goldberg, K; Abbeel, P; Bekris, K E; Miller, L Algorithmic Foundations of Robotics XII Book Springer, 2020, ISBN: 978-3-030-43089-4. Abstract | Links | BibTeX | Tags: @book{227, title = {Algorithmic Foundations of Robotics XII}, author = {K Goldberg and P Abbeel and K E Bekris and L Miller}, url = {https://link.springer.com/book/10.1007/978-3-030-43089-4}, isbn = {978-3-030-43089-4}, year = {2020}, date = {2020-01-01}, volume = {13}, publisher = {Springer}, organization = {Springer}, series = {Springer Proceedings in Advanced Robotics}, abstract = {Robotics is reaching an elevated level of maturity and continues to benefit from the advances and innovations in its enabling technologies. These all are contributing to an unprecedented effort to bring robots to the human environment in hospitals and homes, factories, and schools, in the field for robots fighting fires, making goods and products, picking fruits and watering the farmland, saving time and lives. Robots today hold the promise for making a considerable impact in a wide range of real-world applications from industrial manufacturing to health care, transportation, and exploration of the deep space and sea. Tomorrow, robots will become pervasive and touch upon many aspects of modern life. The Springer Tracts in Advanced Robotics (STAR) was launched in 2002 with the goal of bringing to the research community the latest advances in the robotics field based on their significance and quality. Over the past fifteen years, the STAR series has featured the publication of both monographs and edited collections. Among the latter, the proceedings of thematic symposia devoted to excellence in robotics research, such as ISRR, ISER, FSR, and WAFR, have been regularly included in STAR. The expansion of our field as well as the emergence of new research areas has motivated us to enlarge the pool of proceedings in the STAR series in the past few years. 
This has ultimately led to launching a sister series in parallel to STAR. The Springer Proceedings in Advanced Robotics (SPAR) is dedicated to the timely dissemination of the latest research results presented in selected symposia and workshops. This volume of the SPAR series brings the proceedings of the twelfth edition of the Workshop Algorithmic Foundations of Robotics (WAFR). WAFR went back to its roots and was held from December 18 to 20, 2016, in San Francisco, California, the same city in which the very first WAFR was held in 1994. The volume edited by Ken Goldberg, Pieter Abbeel, Kostas Bekris, and Lauren Miller is a collection of 58 contributions spanning a wide range of applications in manufacturing, medicine, distributed robotics, human–robot interaction, intelligent prosthetics, computer animation, computational biology, and many other areas. Validation of algorithms, design concepts, or techniques is the common thread running through this focused collection. Rich in topics and authoritative contributors, WAFR culminates with this unique reference on the current developments and new directions in the field of algorithmic foundations. A very fine addition to the series! More information about the conference, including videos of the presentations, can be found on the conference's website: http://wafr2016.berkeley.edu}, keywords = {}, pubstate = {published}, tppubtype = {book} } Robotics is reaching an elevated level of maturity and continues to benefit from the advances and innovations in its enabling technologies. These all are contributing to an unprecedented effort to bring robots to the human environment in hospitals and homes, factories, and schools, in the field for robots fighting fires, making goods and products, picking fruits and watering the farmland, saving time and lives. 
Robots today hold the promise for making a considerable impact in a wide range of real-world applications from industrial manufacturing to health care, transportation, and exploration of the deep space and sea. Tomorrow, robots will become pervasive and touch upon many aspects of modern life. The Springer Tracts in Advanced Robotics (STAR) was launched in 2002 with the goal of bringing to the research community the latest advances in the robotics field based on their significance and quality. Over the past fifteen years, the STAR series has featured the publication of both monographs and edited collections. Among the latter, the proceedings of thematic symposia devoted to excellence in robotics research, such as ISRR, ISER, FSR, and WAFR, have been regularly included in STAR. The expansion of our field as well as the emergence of new research areas has motivated us to enlarge the pool of proceedings in the STAR series in the past few years. This has ultimately led to launching a sister series in parallel to STAR. The Springer Proceedings in Advanced Robotics (SPAR) is dedicated to the timely dissemination of the latest research results presented in selected symposia and workshops. This volume of the SPAR series brings the proceedings of the twelfth edition of the Workshop Algorithmic Foundations of Robotics (WAFR). WAFR went back to its roots and was held from December 18 to 20, 2016, in San Francisco, California, the same city in which the very first WAFR was held in 1994. The volume edited by Ken Goldberg, Pieter Abbeel, Kostas Bekris, and Lauren Miller is a collection of 58 contributions spanning a wide range of applications in manufacturing, medicine, distributed robotics, human–robot interaction, intelligent prosthetics, computer animation, computational biology, and many other areas. Validation of algorithms, design concepts, or techniques is the common thread running through this focused collection. 
Rich in topics and authoritative contributors, WAFR culminates with this unique reference on the current developments and new directions in the field of algorithmic foundations. A very fine addition to the series! More information about the conference, including videos of the presentations, can be found on the conference's website: http://wafr2016.berkeley.edu |
Bekris, K E; Shome, R Asymptotically Optimal Sampling-based Planners Book Chapter Encyclopedia of Robotics, 2020. Abstract | Links | BibTeX | Tags: @inbook{219, title = {Asymptotically Optimal Sampling-based Planners}, author = {K E Bekris and R Shome}, url = {https://arxiv.org/abs/1911.04044}, year = {2020}, date = {2020-01-01}, booktitle = {Encyclopedia of Robotics}, abstract = {An asymptotically optimal sampling-based planner employs sampling to solve robot motion planning problems and returns paths with a cost that converges to the optimal solution cost, as the number of samples approaches infinity. This comprehensive article covers the theoretical characteristics of asymptotic optimality of motion planning algorithms, and traces its origins, analysis models, practical performance, extensions, and applications. }, keywords = {}, pubstate = {published}, tppubtype = {inbook} } An asymptotically optimal sampling-based planner employs sampling to solve robot motion planning problems and returns paths with a cost that converges to the optimal solution cost, as the number of samples approaches infinity. This comprehensive article covers the theoretical characteristics of asymptotic optimality of motion planning algorithms, and traces its origins, analysis models, practical performance, extensions, and applications. |
2019 |
Kimmel, A; Shome, R; Bekris, K E Anytime Motion Planning for Prehensile Manipulation in Dense Clutter Journal Article Advanced Robotics, 2019. Abstract | Links | BibTeX | Tags: @article{217, title = {Anytime Motion Planning for Prehensile Manipulation in Dense Clutter}, author = {A Kimmel and R Shome and K E Bekris}, url = {https://www.rahulsho.me/papers/ar_gmp.pdf}, year = {2019}, date = {2019-11-17}, journal = {Advanced Robotics}, abstract = {Many methods have been developed for planning the motion of robotic arms for picking and placing, ranging from local optimization to global search techniques, which are effective for sparsely placed objects. Dense clutter, however, still adversely affects the success rate, computation times, and quality of solutions in many real-world setups. The proposed method achieves a high success rate in clutter with anytime performance by returning solutions quickly and improving their quality over time. The method first explores the lower dimensional end effector’s task space efficiently by ignoring the arm, and builds a discrete approximation of a navigation function. This is performed online, without prior knowledge of the scene. Then, an informed sampling-based planner for the entire arm uses Jacobian-based steering to reach promising end effector poses given the task space guidance. This process is also comprehensive and allows the exploration of alternative paths over time if the task space guidance is misleading. This paper evaluates the proposed method against alternatives in picking or placing tasks among varying amounts of clutter for a variety of robotic manipulators with different end-effectors. The results suggest that the method reliably provides higher-quality solution paths more quickly, with a higher success rate relative to alternatives. 
}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many methods have been developed for planning the motion of robotic arms for picking and placing, ranging from local optimization to global search techniques, which are effective for sparsely placed objects. Dense clutter, however, still adversely affects the success rate, computation times, and quality of solutions in many real-world setups. The proposed method achieves a high success rate in clutter with anytime performance by returning solutions quickly and improving their quality over time. The method first explores the lower dimensional end effector’s task space efficiently by ignoring the arm, and builds a discrete approximation of a navigation function. This is performed online, without prior knowledge of the scene. Then, an informed sampling-based planner for the entire arm uses Jacobian-based steering to reach promising end effector poses given the task space guidance. This process is also comprehensive and allows the exploration of alternative paths over time if the task space guidance is misleading. This paper evaluates the proposed method against alternatives in picking or placing tasks among varying amounts of clutter for a variety of robotic manipulators with different end-effectors. The results suggest that the method reliably provides higher-quality solution paths more quickly, with a higher success rate relative to alternatives. |
Kimmel, A; Sintov, A; Tan, J; Wen, B; Boularias, A; Bekris, K E Belief-Space Planning using Learned Models with Application to Underactuated Hands Conference International Symposium on Robotics Research (ISRR), Hanoi, Vietnam, 2019. Abstract | Links | BibTeX | Tags: @conference{213, title = {Belief-Space Planning using Learned Models with Application to Underactuated Hands}, author = {A Kimmel and A Sintov and J Tan and B Wen and A Boularias and K E Bekris}, url = {http://www.cs.rutgers.edu/~kb572/pubs/belief_space_learned_models_adaptive_hands.pdf}, year = {2019}, date = {2019-10-01}, booktitle = {International Symposium on Robotics Research (ISRR)}, address = {Hanoi, Vietnam}, abstract = {Acquiring a precise model is challenging for many important robotic tasks and systems - including in-hand manipulation using underactuated, adaptive hands. Learning stochastic, data-driven models is a promising alternative, as such models provide not only a way to propagate forward the system dynamics, but also express the uncertainty present in the collected data. Therefore, such models enable planning in the space of state distributions, i.e., in the belief space. This paper proposes a planning framework that employs stochastic, learned models, which express a distribution of states as a set of particles. The integration achieves anytime behavior in terms of returning paths of increasing quality under constraints for the probability of success to achieve a goal. The focus of this effort is on pushing the efficiency of the overall methodology despite the notorious computational hardness of belief-space planning. Experiments show that the proposed framework enables reaching a desired goal with higher success rate compared to alternatives in simple benchmarks. 
This work also provides an application to the motivating domain of in-hand manipulation with underactuated, adaptive hands, both in physically-simulated experiments and in demonstrations with a real hand.}, keywords = {}, pubstate = {published}, tppubtype = {conference} } Acquiring a precise model is challenging for many important robotic tasks and systems - including in-hand manipulation using underactuated, adaptive hands. Learning stochastic, data-driven models is a promising alternative, as such models provide not only a way to propagate forward the system dynamics, but also express the uncertainty present in the collected data. Therefore, such models enable planning in the space of state distributions, i.e., in the belief space. This paper proposes a planning framework that employs stochastic, learned models, which express a distribution of states as a set of particles. The integration achieves anytime behavior in terms of returning paths of increasing quality under constraints for the probability of success to achieve a goal. The focus of this effort is on pushing the efficiency of the overall methodology despite the notorious computational hardness of belief-space planning. Experiments show that the proposed framework enables reaching a desired goal with higher success rate compared to alternatives in simple benchmarks. This work also provides an application to the motivating domain of in-hand manipulation with underactuated, adaptive hands, both in physically-simulated experiments and in demonstrations with a real hand.
Mitash, C; Wen, B; Bekris, K E; Boularias, A Scene-level Pose Estimation for Multiple Instances of Densely Packed Objects Conference Conference on Robot Learning (CoRL), Osaka, Japan, 2019. @conference{214, title = {Scene-level Pose Estimation for Multiple Instances of Densely Packed Objects}, author = {C Mitash and B Wen and K E Bekris and A Boularias}, url = {https://arxiv.org/pdf/1910.04953.pdf}, year = {2019}, date = {2019-10-01}, booktitle = {Conference on Robot Learning (CoRL)}, address = {Osaka, Japan}, abstract = {This paper introduces key machine learning operations that allow the realization of robust, joint 6D pose estimation of multiple instances of objects, either densely packed or in unstructured piles, from RGB-D data. The first objective is to learn semantic and instance-boundary detectors without manual labeling. An adversarial training framework in conjunction with physics-based simulation is used to achieve detectors that behave similarly in synthetic and real data. Given the stochastic output of such detectors, candidates for object poses are sampled. The second objective is to automatically learn a single score for each pose candidate that represents its quality in terms of explaining the entire scene, via a gradient boosted tree. The proposed method uses features derived from surface and boundary alignment between the observed scene and the object model placed at hypothesized poses. Scene-level, multi-instance pose estimation is then achieved by an integer linear programming process that selects hypotheses that maximize the sum of the learned individual scores, while respecting constraints, such as avoiding collisions. To evaluate this method, a dataset of densely packed objects with challenging setups for state-of-the-art approaches is collected. Experiments on this dataset and a public one show that the method significantly outperforms alternatives in terms of 6D pose accuracy while trained only with synthetic datasets.}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
Sivaramakrishnan, A; Littlefield, Z; Bekris, K E Towards Learning Efficient Maneuver Sets for Kinodynamic Motion Planning Technical Report PlanRob 2019 Workshop of ICAPS 2019, 2019. @techreport{211, title = {Towards Learning Efficient Maneuver Sets for Kinodynamic Motion Planning}, author = {A Sivaramakrishnan and Z Littlefield and K E Bekris}, url = {https://www.cs.rutgers.edu/~kb572/pubs/Learning_Maneuver_Sets_PlanRob2019.pdf}, year = {2019}, date = {2019-07-01}, institution = {PlanRob 2019 Workshop of ICAPS 2019}, abstract = {Planning for systems with dynamics is challenging as often there is no local planner available and the only primitive to explore the state space is forward propagation of controls. In this context, tree sampling-based planners have been developed, some of which achieve asymptotic optimality by propagating random controls during each iteration. While desirable for the analysis, random controls result in slow convergence to high-quality trajectories in practice. This short position statement first argues that if a kinodynamic planner has access to local maneuvers that appropriately balance an exploitation-exploration trade-off, the planner's per-iteration performance is significantly improved. Furthermore, this work argues for the integration of modern machine learning frameworks with state-of-the-art, informed and asymptotically optimal kinodynamic planners. The proposed approach involves using neural networks to infer local maneuvers for a robotic system with dynamics, which properly balance the above exploitation-exploration trade-off. Preliminary indications in simulated environments and systems are promising but also point to certain challenges that motivate further research in this direction.}, keywords = {}, pubstate = {published}, tppubtype = {techreport} }
Shome, R; Bekris, K E Anytime Multi-arm Task and Motion Planning for Pick-and-Place of Individual Objects via Handoffs Conference IEEE International Conference on Multi-Robot and Multi-Agent Systems (MRS), New Brunswick, NJ, 2019. @conference{212, title = {Anytime Multi-arm Task and Motion Planning for Pick-and-Place of Individual Objects via Handoffs}, author = {R Shome and K E Bekris}, url = {https://arxiv.org/abs/1905.03179}, year = {2019}, date = {2019-06-01}, booktitle = {IEEE International Conference on Multi-Robot and Multi-Agent Systems (MRS)}, address = {New Brunswick, NJ}, abstract = {Automation applications are pushing the deployment of many high-DoF manipulators in warehouse and manufacturing environments. This has motivated many efforts on optimizing manipulation tasks involving a single arm. Coordinating multiple arms for manipulation, however, introduces additional computational challenges arising from the increased DoFs, as well as the combinatorial increase in the available operations that many manipulators can perform, including handoffs between arms. The focus here is on the case of pick-and-place tasks, which require a sequence of handoffs to be executed, so as to achieve computational efficiency, asymptotic optimality and practical anytime performance. The paper leverages recent advances in multi-robot motion planning for high-DoF systems to propose a novel multi-modal extension of the dRRT* algorithm. The key insight is that, instead of naively solving a sequence of motion planning problems, it is computationally advantageous to directly explore the composite space of the integrated multi-arm task and motion planning problem, given input sets of possible pick and handoff configurations. Asymptotic optimality guarantees are possible by sampling additional picks and handoffs over time. The evaluation shows that the approach finds initial solutions fast and improves their quality over time. It also succeeds in finding solutions to harder problem instances relative to alternatives and can scale effectively as the number of robots increases.}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
Sintov, A; Morgan, A; Kimmel, A; Dollar, A; Bekris, K E; Boularias, A Learning a State Transition Model of an Underactuated Adaptive Hand Journal Article IEEE Robotics and Automation Letters (RA-L) (also appearing at IEEE ICRA 2019), 2019. @article{205, title = {Learning a State Transition Model of an Underactuated Adaptive Hand}, author = {A Sintov and A Morgan and A Kimmel and A Dollar and K E Bekris and A Boularias}, url = {http://www.cs.rutgers.edu/~kb572/pubs/Learning_a_State_Transition_Model.pdf}, year = {2019}, date = {2019-05-01}, journal = {IEEE Robotics and Automation Letters (RA-L) (also appearing at IEEE ICRA 2019)}, abstract = {Fully-actuated, multi-fingered robotic hands are often expensive and fragile. Low-cost, underactuated hands are appealing but present challenges due to the lack of analytical models. This paper aims to learn a stochastic version of such models automatically from data with minimum user effort. The focus is on identifying the dominant, sensible features required to express hand state transitions given quasi-static motions, thereby enabling the learning of a probabilistic transition model from recorded trajectories. Experiments with both Gaussian Process (GP) and Neural Network models are included for analysis and evaluation. The metric for local GP regression is obtained with a manifold learning approach, known as "Diffusion Maps", to uncover the lower-dimensional subspace in which the data lies and provide a geodesic metric. Results show that using Diffusion Maps with a feature space composed of the object position, actuator angles, and actuator loads sufficiently expresses the hand-object system configuration and can provide accurate enough predictions for a relatively long horizon. To the best of the authors' knowledge, this is the first learned transition model for such underactuated hands that achieves this level of predictability. Notably, the same feature space implicitly embeds the size of the manipulated object and can generalize to new objects of varying sizes. Furthermore, the learned model can identify states that are on the verge of failure and should be avoided during manipulation. The usefulness of the model is also demonstrated by integrating it with closed-loop control to successfully and safely complete manipulation tasks.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Shome, R; Tang, W N; Song, C; Mitash, C; Kourtev, C; Yu, J; Boularias, A; Bekris, K E Towards Robust Product Packing with a Minimalistic End-Effector Conference IEEE International Conference on Robotics and Automation (ICRA), 2019, (Nomination for Best Paper Award in Automation). @conference{207, title = {Towards Robust Product Packing with a Minimalistic End-Effector}, author = {R Shome and W N Tang and C Song and C Mitash and C Kourtev and J Yu and A Boularias and K E Bekris}, url = {http://robotpacking.org/}, year = {2019}, date = {2019-05-01}, booktitle = {IEEE International Conference on Robotics and Automation (ICRA)}, abstract = {Advances in sensor technologies, object detection algorithms, planning frameworks and hardware designs have motivated the deployment of robots in warehouse automation. A variety of such applications, like order fulfillment or packing tasks, require picking objects from unstructured piles and carefully arranging them in bins or containers. Desirable solutions need to be low-cost, easily deployable and controllable, making minimalistic hardware choices attractive. The challenge in designing an effective solution to this problem relates to appropriately integrating multiple components, so as to achieve a robust pipeline that minimizes failure conditions. The current work proposes a complete pipeline for solving such packing tasks, given access only to RGB-D data and a single robot arm with a minimalistic, vacuum-based end-effector. To achieve the desired level of robustness, three key manipulation primitives are identified, which take advantage of the environment and simple operations to successfully pack multiple cubic objects. The overall approach is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated by considering different versions of the proposed pipeline that incrementally introduce reasoning about object poses and corrective manipulation actions.}, note = {Nomination for Best Paper Award in Automation}, keywords = {Manipulation, Robot Perception}, pubstate = {published}, tppubtype = {conference} }