Simultaneous View and Feature Selection for Collaborative Multi-Robot Perception (Submitted to ICRA 2021)

[arXiv] [pdf]

Overview Figure

Collaborative multi-robot perception provides multiple views of an environment, offering varying perspectives that allow a team to understand the environment collaboratively even when individual robots have poor points of view or are occluded by obstacles. These multiple observations must be intelligently fused for accurate recognition, and the most relevant observations must be selected so that robots whose views are redundant can be freed to observe other targets. This problem has received little attention in the literature. In this paper, we propose a novel approach to collaborative multi-robot perception that integrates view selection, feature selection, and object recognition into a unified regularized optimization formulation, using sparsity-inducing norms to identify the robots with the most representative views and the modalities with the most discriminative features. Because the introduced non-smooth norms make this formulation difficult to solve, we develop a new iterative optimization algorithm that is guaranteed to converge to the optimal solution. We evaluate our approach on multi-view benchmark datasets, in a simulated case study, and on a physical multi-robot system. Experimental results demonstrate that our approach enables accurate object recognition and effective view selection, as measured by mutual information.
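To make the idea of selection via sparsity-inducing norms concrete, below is a minimal sketch, not the paper's exact formulation: a least-squares recognition loss regularized by an L2,1 norm, solved with proximal gradient descent. Rows of the weight matrix correspond to candidate views or features; rows whose norm is shrunk to zero are deselected. The function names and the choice of the L2,1 norm are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: row-wise soft-thresholding.
    Rows with small L2 norm are zeroed out, inducing row sparsity."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

def group_sparse_selection(X, Y, lam=0.1, n_iter=1000):
    """Sketch: solve min_W 0.5*||X W - Y||_F^2 + lam*||W||_{2,1}
    by proximal gradient descent (ISTA). Rows of W that survive the
    shrinkage mark the selected views/features."""
    d = X.shape[1]
    W = np.zeros((d, Y.shape[1]))
    L = np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the smooth part
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)    # gradient of the least-squares loss
        W = prox_l21(W - step * grad, step * lam)
    return W
```

Inspecting the row norms of the returned `W` then ranks the candidate views: rows driven exactly to zero correspond to observations that can be dropped, which is the mechanism behind freeing redundant robots for other targets.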

Citation: Brian Reily and Hao Zhang. "Simultaneous View and Feature Selection for Collaborative Multi-Robot Perception." International Conference on Robotics and Automation (ICRA), submitted 2021.