Representation Learning for Human-Robot Teaming with Multi-Robot Systems

[paper] [bibtex]

Human-robot teaming is a critical capability that enables humans and robots to work alongside each other as teammates. Robots perform a variety of tasks alongside humans, and seamless collaboration enables robots to increase the efficiency, productivity, and safety of humans across a wide spectrum of jobs and lifestyles, while allowing humans to rely on robots to augment their work and improve their lives. Due to the complexities of multi-robot systems and the difficulty of identifying human intentions, effective and natural human-robot teaming is a challenging problem. When deploying a multi-robot system to explore and monitor a complex environment, a human operator may struggle to properly account for the various communication links, capabilities, and current spatial distribution of the multi-robot system. When a robot attempts to aid its human teammates with a task, it may struggle to properly account for the context given by the environment and other teammates. To address scenarios like these, representations are needed that allow humans to understand their robot teammates and robots to understand their human teammates.

This research addresses this challenge by learning representations of humans and multi-robot systems, primarily through regularized optimization. The introduced representations allow humans to both understand and effectively control multi-robot systems, while also enabling robots to understand and interact with their human teammates. First, I introduce approaches to learn representations of multi-robot structure that incorporate multiple relationships within the system, allowing humans to divide or distribute the multi-robot system in an environment. Next, I propose multiple representation learning approaches to enable control of multi-robot systems. These representations, such as weighted utilities or strategic games, enable multi-robot systems to lead followers, navigate to goals, and collaboratively perceive objects, without detailed human intervention. Finally, I introduce representations of individual human activities or team intents, enabling robots to incorporate context from the environment and the entire team to interact more effectively with human teammates. These proposed representation learning approaches not only address specific tasks such as sensor coverage and human behavior understanding, and application scenarios such as search and rescue, disaster response, and homeland security, but also conclusively show the value of representations that can encode complicated team structures and multisensory observations.
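As a general sketch of what a regularized optimization formulation for representation learning looks like (the symbols below are illustrative placeholders, not the specific objectives or notation used in the dissertation), a representation W of a multi-robot team can be obtained by minimizing a data-fitting loss together with a structured regularizer:

```latex
\min_{\mathbf{W}} \; \mathcal{L}(\mathbf{W}; \mathcal{X}) \;+\; \lambda \, \Omega(\mathbf{W})
```

Here X generically denotes the team's multisensory observations or inter-robot relationships, the loss L measures how well W explains them, the regularizer Ω encodes structural assumptions (for example, a sparsity-inducing norm over relationships), and λ balances the two terms.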

Citation: Representation Learning for Human-Robot Teaming with Multi-Robot Systems. Brian Reily. PhD Dissertation, Colorado School of Mines, 2021.