Research
My research focuses on learning representations that enable effective human-robot teaming. Specifically, I’m interested in learning useful representations of the complicated structure of multi-robot systems, representations that enable more effective control of those systems, and representations that enable robots to work directly alongside humans.
See my submitted and accepted publications below.
Representations for Multi-Robot Structure
Multi-robot systems are complex, involving many different forms of relationships connecting individual robots. As these systems grow in size, it becomes increasingly difficult to display their state to a human teammate, or for that teammate to understand the roles of individual robots. I’m interested in learning representations that make it easier for a teammate to understand the structure of a multi-robot system: for example, embedding multiple relationship graphs into a unified vector representation of each robot, suitable as input for machine learning algorithms like clustering, or using graph representation learning to approximate the known relationships.
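As an illustrative sketch of this idea (not the method from any particular paper below), each relationship graph can be embedded separately and the per-graph embeddings concatenated into one vector per robot, which a clustering algorithm can then group into teams. The graph types, embedding dimension, and use of spectral embedding with k-means here are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_embedding(adj, dim):
    """Embed one relationship graph via eigenvectors of its normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    laplacian = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    # Eigenvectors of the smallest non-trivial eigenvalues capture group structure.
    _, vecs = np.linalg.eigh(laplacian)
    return vecs[:, 1:dim + 1]

def embed_and_cluster(adjacency_matrices, dim=2, n_teams=2):
    """Concatenate per-graph embeddings into one vector per robot, then cluster."""
    features = np.hstack([spectral_embedding(a, dim) for a in adjacency_matrices])
    return KMeans(n_clusters=n_teams, n_init=10, random_state=0).fit_predict(features)

# Two hypothetical relationship graphs (e.g., communication and proximity) over 4 robots:
comm = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
prox = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]], float)
teams = embed_and_cluster([comm, prox])  # one team label per robot
```

The unified per-robot feature vector is the key point: any downstream learner sees a single representation that reflects all of the relationship graphs at once.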
Representations for Multi-Robot Control
The complexities that make multi-robot systems difficult to understand also make them difficult for human operators to control. My research develops representations that make multi-robot systems easier to control, such as learning, without human input, the optimal weights that determine how leader robots should behave to effectively lead followers, or how robots should adapt to changes in team composition. Representations for control can also take the form of modeling interactions between robots as games to enable communication-free navigation, or fusing sensor observations from multiple robots to identify the most informative views.
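A minimal sketch of the leader-follower idea, under assumed dynamics: followers move toward the mean of their graph neighbors, while a leader blends that consensus term with attraction to a goal. The goal-attraction weight `w_leader` is fixed here for illustration; in the research described above, such weights would be learned rather than hand-set.

```python
import numpy as np

def step(positions, adj, leader_ids, goal, w_leader=0.8, dt=0.1):
    """One control update: followers move toward the mean of their neighbors;
    leaders blend neighbor consensus with attraction to the goal."""
    new = positions.copy()
    for i in range(len(positions)):
        nbrs = np.nonzero(adj[i])[0]
        consensus = positions[nbrs].mean(axis=0) - positions[i] if len(nbrs) else 0.0
        pull = w_leader * (goal - positions[i]) if i in leader_ids else 0.0
        new[i] = positions[i] + dt * (consensus + pull)
    return new

# Three robots on a line graph; robot 0 is the leader heading to the goal.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
goal = np.array([5.0, 5.0])
for _ in range(500):
    pos = step(pos, adj, leader_ids={0}, goal=goal)
# The followers, coupled only through the graph, are drawn to the goal by the leader.
```

Because only the leader senses the goal, the choice of `w_leader` trades off how fast the leader moves against how tightly the followers can track it, which is exactly the kind of behavior a learned weighting must balance.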
Representations for Human-Robot Teaming
Finally, I’m interested in ultimately enabling robots to work more capably as teammates with humans. This begins with representing, recognizing, and analyzing human activities. I’ve also worked on enabling a robot teammate to recognize the intent of a team of humans, and hope to extend this work to human teammates operating alongside multiple robots.
Submitted Papers
- Team Assignment for Heterogeneous Multi-Robot Sensor Coverage through Graph Representation Learning. Brian Reily and Hao Zhang. International Conference on Robotics and Automation (ICRA), Submitted 2020. [overview] [arXiv] [pdf]
- Game Theoretic Decentralized and Communication-Free Multi-Robot Navigation. Brian Reily, Terran Mott and Hao Zhang. International Conference on Robotics and Automation (ICRA), Submitted 2020. [overview] [arXiv] [pdf]
- Adaptation to Team Composition Changes for Heterogeneous Multi-Robot Sensor Coverage. Brian Reily, Terran Mott and Hao Zhang. International Conference on Robotics and Automation (ICRA), Submitted 2020. [overview] [arXiv] [pdf]
- Simultaneous View and Feature Selection for Collaborative Multi-Robot Perception. Brian Reily and Hao Zhang. International Conference on Robotics and Automation (ICRA), Submitted 2020. [overview] [arXiv] [pdf]
- Robust Real-Time Behavior Recognition of Robot Teams. Lyujian Lu, Hua Wang, Brian Reily, Hao Zhang, and Feiping Nie. Robotics and Automation Letters (RA-L), Submitted 2020.
- Real-Time Recognition of Team Behaviors by Multisensory Graph-Embedded Robot Learning. Brian Reily, Peng Gao, Fei Han, Hua Wang, and Hao Zhang. International Journal of Robotics Research (IJRR), Submitted 2018. [overview]
Accepted Papers
- Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication. Brian Reily, Christopher Reardon, and Hao Zhang. Robotics: Science and Systems (RSS), 2020. [overview] [paper] [pdf] [video] [slides] [blogpost] [bibtex]
- Representing Multi-Robot Structure through Multimodal Graph Embedding for the Selection of Robot Teams. Brian Reily, Christopher Reardon, and Hao Zhang. International Conference on Robotics and Automation (ICRA), 2020. [overview] [pdf] [video] [slides] [bibtex]
- Simultaneous Learning from Human Pose and Object Cues for Real-Time Activity Recognition. Brian Reily, Qingzhao Zhu, Christopher Reardon, and Hao Zhang. International Conference on Robotics and Automation (ICRA), 2020. [overview] [pdf] [video] [slides] [bibtex]
- Visual Reference of Ambiguous Objects for Augmented Reality-Powered Human-Robot Communication in a Shared Workspace. Peng Gao, Brian Reily, Savannah Paul, and Hao Zhang. International Conference on Virtual, Augmented, and Mixed Reality (VAMR), 2020. [overview] [paper] [bibtex]
- Graph Embedding for the Division of Robotic Swarms. Brian Reily, Christopher Reardon, and Hao Zhang. 2nd Robot Teammates Operating in Dynamic, Unstructured Environments (RT-DUNE) Workshop, International Conference on Robotics and Automation (ICRA), 2019. [pdf]
- Activity Recognition by Learning from Human and Object Attributes. Brian Reily, Qingzhao Zhu, and Hao Zhang. 2nd Robot Teammates Operating in Dynamic, Unstructured Environments (RT-DUNE) Workshop, International Conference on Robotics and Automation (ICRA), 2019. [pdf]
- Skeleton-Based Bio-Inspired Human Activity Prediction for Real-Time Human–Robot Interaction. Brian Reily, Fei Han, Lynne Parker, and Hao Zhang. Autonomous Robots (AuRo) vol. 42, pp. 1281-1298, 2018. [overview] [paper] [bibtex]
- Space-time Representation of People Based on 3D Skeletal Data: A Review. Fei Han*, Brian Reily*, William Hoff, and Hao Zhang. Computer Vision and Image Understanding (CVIU) vol. 158, pp. 85-105, 2017. *Equal Contribution. [overview] [paper] [bibtex]
- Real-Time Gymnast Detection and Performance Analysis with a Portable 3D Camera. Brian Reily, Hao Zhang, and William Hoff. Computer Vision and Image Understanding (CVIU) vol. 159, pp. 154-163, 2017. [overview] [paper] [bibtex]
- Human Activity Recognition and Gymnastics Analysis through Depth Imagery. Brian Reily. Thesis, Colorado School of Mines, 2016. [overview] [pdf] [bibtex]