Research
My research focuses on learning representations that enable effective human-robot teaming. Specifically, I’m interested in learning representations that capture the complex structure of multi-robot systems, representations that enable more effective control of those systems, and representations that allow robots to work directly alongside humans.
See my submitted and accepted publications below.
Representations for Multi-Robot Structure
Multi-robot systems are complex, involving many different kinds of relationships between individual robots. As these systems grow in size, it becomes extremely difficult to display their state to a human teammate, or for that teammate to understand the roles of individual robots. I’m interested in learning representations that make the structure of a multi-robot system easier for a teammate to understand: for example, embedding multiple relationship graphs into a unified vector representation of each robot for use as input to machine learning algorithms such as clustering, or using graph representation learning to approximate the known relationships.
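As a rough illustration of the first idea, the sketch below fuses several relationship graphs (e.g., communication, proximity, task assignment) into a single spectral embedding per robot and clusters the result. It is a minimal stand-in, assuming a simple unweighted average rather than a learned fusion, and the function and parameter names are mine, not from the papers below.

```python
# Illustrative sketch (not the published method): fuse several relationship
# graphs over the same set of robots into one vector per robot, then cluster.
# Assumes each graph is a symmetric adjacency matrix over the same N robots.
import numpy as np
from sklearn.cluster import KMeans


def embed_and_cluster(adjacency_matrices, embed_dim=2, n_clusters=3):
    """Average multiple relationship graphs, spectrally embed each robot,
    and group robots by k-means on the resulting vectors."""
    # Fuse the layers; a plain unweighted average stands in for a learned fusion.
    A = np.mean(np.stack(adjacency_matrices), axis=0)

    # Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = 1.0 / np.sqrt(d[d > 0])
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    # Eigenvectors of the smallest nontrivial eigenvalues give each robot a
    # low-dimensional coordinate describing its position in the fused structure.
    _, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, 1:embed_dim + 1]

    # Cluster robots that occupy similar structural roles.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)
    return embedding, labels


if __name__ == "__main__":
    # Two toy relationship layers (e.g., communication and proximity) over 6 robots.
    rng = np.random.default_rng(0)
    layers = [(rng.random((6, 6)) > 0.5).astype(float) for _ in range(2)]
    layers = [np.triu(a, 1) + np.triu(a, 1).T for a in layers]  # symmetrize
    print(embed_and_cluster(layers, n_clusters=2)[1])
```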
Representations for Multi-Robot Control
The complexities that make multi-robot systems difficult to understand also make them difficult for human operators to control. My research develops representations that make multi-robot systems easier to control, such as learning optimal weights that determine, without constant input from a human operator, how leader robots should behave to effectively lead followers, or how robots should adapt to changes in team composition. Representations for control can also take the form of modeling interactions between robots as games to enable communication-free navigation, or of fusing sensor observations from multiple robots to identify the most informative views.
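To make the game-based idea concrete, here is a minimal, hypothetical sketch of one pairwise interaction: two robots approaching the same passage each choose between "go" and "yield", and each enumerates the pure-strategy equilibria of a small payoff matrix locally. The payoff values and helper names are invented for illustration; in practice some shared convention (e.g., priority by robot ID) would be needed to select between the two symmetric equilibria without exchanging messages.

```python
# Illustrative sketch (not the published formulation): model a pairwise
# robot-robot interaction as a two-player game and let each robot compute
# the equilibria locally, with no communication. Payoff values are made up.
import itertools

ACTIONS = ("go", "yield")

# PAYOFF[(a1, a2)] = (reward to robot 1, reward to robot 2)
PAYOFF = {
    ("go", "go"):       (-10.0, -10.0),  # potential collision: both penalized
    ("go", "yield"):    (  2.0,  -1.0),  # robot 1 proceeds, robot 2 waits
    ("yield", "go"):    ( -1.0,   2.0),
    ("yield", "yield"): ( -2.0,  -2.0),  # deadlock: both lose a little
}


def pure_nash_equilibria(payoff):
    """Enumerate action profiles where neither robot can gain by
    unilaterally switching its own action."""
    equilibria = []
    for a1, a2 in itertools.product(ACTIONS, repeat=2):
        u1, u2 = payoff[(a1, a2)]
        best_for_1 = all(u1 >= payoff[(alt, a2)][0] for alt in ACTIONS)
        best_for_2 = all(u2 >= payoff[(a1, alt)][1] for alt in ACTIONS)
        if best_for_1 and best_for_2:
            equilibria.append((a1, a2))
    return equilibria


print(pure_nash_equilibria(PAYOFF))  # [('go', 'yield'), ('yield', 'go')]
```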
Representations for Human-Robot Teaming
Finally, I’m interested in ultimately enabling robots to work more capably as teammates with humans. This begins with representing, recognizing, and analyzing the activities of humans. I’ve also worked on enabling a robot teammate to recognize the intent of a team of humans, and in the future I hope to study human teammates working alongside multiple robots.
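As a toy example of the activity recognition step, the sketch below turns a single frame of 3D skeleton joints into a person-centered feature vector and classifies it with an off-the-shelf classifier. The joint layout, the random "training" data, and the activity labels are placeholders, not the models or datasets used in the papers below.

```python
# Illustrative sketch (not the published approach): recognize a human
# activity from 3D skeleton joints by building a simple pose feature and
# classifying it. Joint count, feature choice, and labels are assumptions.
import numpy as np
from sklearn.svm import SVC


def pose_features(joints):
    """joints: (num_joints, 3) array of 3D positions for one frame.
    Express all joints relative to the first joint (e.g., the torso) and
    flatten, so the feature is invariant to where the person stands."""
    centered = joints - joints[0]
    return centered.flatten()


# Toy training data: random 15-joint skeletons standing in for real frames.
rng = np.random.default_rng(0)
X = np.array([pose_features(rng.normal(size=(15, 3))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # e.g., 0 = "waving", 1 = "pointing"

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(pose_features(rng.normal(size=(15, 3)))[None, :]))
```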
Submitted Papers
- Decentralized and Communication-Free Multi-Robot Navigation through Distributed Games. Brian Reily, Terran Mott, and Hao Zhang. International Conference on Robotics and Automation (ICRA), Submitted 2021. [overview] [arXiv] [pdf]
- Asynchronous Collaborative Localization by Integrating Spatiotemporal Graph Learning with Model-Based Estimation. Peng Gao, Brian Reily, Rui Guo, Hongsheng Lu, Qingzhao Zhu, and Hao Zhang. International Conference on Robotics and Automation (ICRA), Submitted 2021. [arXiv] [pdf]
- Enabling Simultaneous View and Feature Selection for Collaborative Multi-Robot Perception. Brian Reily and Hao Zhang. International Journal of Advanced Robotic Systems (IJARS), Submitted 2021. [overview]
- Fusion and Clustering of Multilayer Graphs through Regularized Bistochastic Approximation. Brian Reily and Hao Zhang. Information Fusion, Submitted 2021. [overview]
Accepted Papers
- Role Discovery in Observed Multi-Agent Systems Over Time through Matrix Factorization. Brian Reily, Michael Don, John G. Rogers, and Christopher Reardon. International Symposium on Multi-Robot and Multi-Agent Systems (MRS), 2021. *Best Paper Finalist. [overview] [pdf]
- Balancing Mission and Comprehensibility in Multi-Robot Systems for Disaster Response. Brian Reily, John G. Rogers, and Christopher Reardon. International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2021. [overview] [pdf] [paper]
- Real-Time Recognition of Team Behaviors by Multisensory Graph-Embedded Robot Learning. Brian Reily, Peng Gao, Fei Han, Hua Wang, and Hao Zhang. International Journal of Robotics Research (IJRR), 2021. [overview] [paper]
- Adaptation to Team Composition Changes for Heterogeneous Multi-Robot Sensor Coverage. Brian Reily, Terran Mott, and Hao Zhang. International Conference on Robotics and Automation (ICRA), 2021. [overview] [arXiv] [pdf] [bibtex]
- Team Assignment for Heterogeneous Multi-Robot Sensor Coverage through Graph Representation Learning. Brian Reily and Hao Zhang. International Conference on Robotics and Automation (ICRA), 2021. [overview] [arXiv] [pdf] [bibtex]
- Robust Real-Time Group Activity Recognition of Robot Teams. Lyujian Lu, Hua Wang, Brian Reily, and Hao Zhang. Robotics and Automation Letters (RA-L), 2021. [overview] [paper] [pdf] [bibtex]
- Representation Learning for Human-Robot Teaming with Multi-Robot Systems. Brian Reily. PhD Dissertation, Colorado School of Mines, 2021. [overview] [paper] [bibtex]
- Multi-Modal Sensor Fusion and Selection for Enhanced Situational Awareness. Brian Reily, Christopher Reardon, and Hao Zhang. SPIE Digital Forum on Virtual, Augmented, and Mixed Reality Technology for Multi-Domain Operations, 2021. [paper] [bibtex]
- Leading Multi-Agent Teams to Multiple Goals While Maintaining Communication. Brian Reily, Christopher Reardon, and Hao Zhang. Robotics: Science and Systems (RSS), 2020. [overview] [paper] [pdf] [video] [slides] [blogpost] [bibtex]
- Representing Multi-Robot Structure through Multimodal Graph Embedding for the Selection of Robot Teams. Brian Reily, Christopher Reardon, and Hao Zhang. International Conference on Robotics and Automation (ICRA), 2020. [overview] [pdf] [video] [slides] [bibtex]
- Simultaneous Learning from Human Pose and Object Cues for Real-Time Activity Recognition. Brian Reily, Qingzhao Zhu, Christopher Reardon, and Hao Zhang. International Conference on Robotics and Automation (ICRA), 2020. [overview] [pdf] [video] [slides] [bibtex]
- Visual Reference of Ambiguous Objects for Augmented Reality-Powered Human-Robot Communication in a Shared Workspace. Peng Gao, Brian Reily, Savannah Paul, and Hao Zhang. International Conference on Virtual, Augmented, and Mixed Reality (VAMR), 2020. [overview] [paper] [bibtex]
- Graph Embedding for the Division of Robotic Swarms. Brian Reily, Christopher Reardon, and Hao Zhang. 2nd Robot Teammates Operating in Dynamic, Unstructured Environments (RT-DUNE) Workshop, International Conference on Robotics and Automation (ICRA), 2019. [pdf]
- Activity Recognition by Learning from Human and Object Attributes. Brian Reily, Qingzhao Zhu, and Hao Zhang. 2nd Robot Teammates Operating in Dynamic, Unstructured Environments (RT-DUNE) Workshop, International Conference on Robotics and Automation (ICRA), 2019. [pdf]
- Skeleton-Based Bio-Inspired Human Activity Prediction for Real-Time Human–Robot Interaction. Brian Reily, Fei Han, Lynne Parker, and Hao Zhang. Autonomous Robots (AuRo) vol. 42, pp. 1281-1298, 2018. [overview] [paper] [bibtex]
- Space-time Representation of People Based on 3D Skeletal Data: A Review. Fei Han*, Brian Reily*, William Hoff, and Hao Zhang. Computer Vision and Image Understanding (CVIU) vol. 158, pp. 85-105, 2017. *Equal Contribution. [overview] [arXiv] [pdf] [bibtex]
- Real-Time Gymnast Detection and Performance Analysis with a Portable 3D Camera. Brian Reily, Hao Zhang, and William Hoff. Computer Vision and Image Understanding (CVIU) vol. 159, pp. 154-163, 2017. [overview] [paper] [bibtex]
- Human Activity Recognition and Gymnastics Analysis through Depth Imagery. Brian Reily. Thesis, Colorado School of Mines, 2016. [overview] [pdf] [bibtex]