As robots become more capable, human-robot collaboration becomes increasingly feasible. This workshop focuses on planning techniques for collaboration. We will discuss collaborations in which the human and the robot share a workspace and work side by side, as well as collaborations based on shared autonomy. Challenges include reasoning about the collaborator's intentions and about the effects the robot's own actions may have on future human behavior. The goal is for the robot to execute actions that optimize team performance, human satisfaction, and trust. The workshop will foster discussion of computational and experimental challenges in planning robot actions and communications, spanning both motion- and task-level methods and both theoretical and experimental approaches.
• Shared control in rehabilitation robotics, telepresence robots, EOD, etc.
• Algorithms for arbitrating between robot autonomy and human input
• Algorithms for planning in a shared-autonomy framework: motion planning and task planning
• Interfaces for human control of robots (e.g., BCI, eye gaze, joystick)
• Recognizing and predicting human intent and goals
• Modeling human adaptability and preferences
• Collaborative task planning
• Techniques for safety in shared-workspace collaboration: sensing and planning
• Task recognition and modeling
• Machine learning methods for human-robot interaction
8:30 AM - 10:00 AM
Morning Session 1:
Planning for Shared Autonomy
|8:30 - 8:40||Welcome and Introduction|
|8:40 - 9:05||Control methods for shared use of an electrically-stimulated human muscle and a robot in a hybrid neuroprosthetic||Nitin Sharma|
|9:05 - 9:30||Performance bounds for human machine teaming||Peter Trautman|
|9:30 - 9:45||Postdoc spotlight: Understanding Human Intent for Shared Autonomy||Henny Admoni|
|9:45 - 10:00||AIJ Student Award: Coordination Dynamics in Human Robot Teams||Tariq Iqbal|
10:00 AM - 10:30 AM
10:30 AM - 12:00 PM
Morning Session 2:
Planning for Shared Workspace 1
|10:30 - 10:55||Closed-Loop Motion Planning for Reactive Execution of Learned Tasks||Ron Alterovitz|
|10:55 - 11:20||Duality of Robot Actions in a Collaborative Context||Ross Knepper|
|11:20 - 11:35||Student spotlight: Human-Robot Mutual Adaptation: Models and Experiments||Stefanos Nikolaidis|
|11:35 - 11:50||Student spotlight: Learning How To Plan In Shared Autonomy For Dexterous Manipulation||Claudia Perez D'Arpino|
12:00 PM - 2:00 PM
2:00 PM - 3:30 PM
Afternoon Session 1:
Planning for Shared Workspace 2
|2:00 - 2:25||Language Learning for Flexible Human-Robot Tasking||Cynthia Matuszek|
|2:25 - 2:50||Cooperative motion planning for human-operated robots||Kris Hauser|
|2:50 - 3:15||AIJ Student Award: Generation of Explicable Plans for Robot Task Planning||Anagha Kulkarni|
|3:15 - 3:30||TBD||TBD|
3:30 PM - 4:00 PM
4:00 PM - 5:30 PM
Afternoon Session 2:
|4:00 - 4:35||2-min Abstract Spotlight Talks|
|4:35 - 5:30||Poster Session|
• Ron Alterovitz, University of North Carolina at Chapel Hill: Closed-Loop Motion Planning for Reactive Execution of Learned Tasks
• Kris Hauser, Duke University: Cooperative motion planning for human-operated robots
• Ross Knepper, Cornell University: Functioning Human-Robot Joint Action
• Cynthia Matuszek, University of Maryland, Baltimore County (UMBC): Language Learning for Flexible Human-Robot Tasking
• Nitin Sharma, University of Pittsburgh: Control methods for shared use of an electrically-stimulated human muscle and a robot in a hybrid neuroprosthetic
• Peter Trautman, Galois: Probabilistic Shared Control
Claudia Perez-D'Arpino has been a PhD student in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology since 2012 and is a member of the Interactive Robotics Group at CSAIL, advised by Prof. Julie A. Shah. She received her degree in Electronics Engineering (2008) and her Masters in Mechatronics (2010) from the Simon Bolivar University in Venezuela, where she held an Assistant Professorship in the Electronics and Circuits Department and the Mechatronics Research Group (2010-2012). Her research focuses on statistical models and algorithms for prediction and motion planning that improve the efficiency of human-robot collaboration, mainly for manipulation tasks in shared autonomy and shared-workspace collaborative robotics.
Stefanos Nikolaidis is a PhD student at the Personal Robotics Lab in Carnegie Mellon's Robotics Institute, working with Prof. Siddhartha Srinivasa. His research focuses on algorithms for improving the performance of human-robot teams in real-world collaborative tasks. Stefanos holds an MS from MIT, an MEng from the University of Tokyo, and a BS from the National Technical University of Athens. He has additionally worked as a research specialist at MIT and as a researcher at the video game company Square Enix in Tokyo. He has received a Best Enabling Technologies Award from the IEEE/ACM International Conference on Human-Robot Interaction and was a Best Paper Award finalist at the International Symposium on Robotics.
Henny Admoni is a postdoctoral fellow at the Personal Robotics Lab in the Robotics Institute at Carnegie Mellon University. Her areas of interest include assistive robotics, human-robot interaction, and collaborative manipulation. Henny completed her PhD in Computer Science at Yale University, where she worked with Brian Scassellati and the Social Robotics Lab on modeling the complex and dynamic structures of nonverbal behavior in human-robot collaboration.
Anca Dragan is an Assistant Professor in the Electrical Engineering and Computer Sciences Department at UC Berkeley, where she leads the InterACT Lab. She obtained her PhD at Carnegie Mellon's Robotics Institute, where she was a member of the Personal Robotics Lab. She was born in Romania and received her B.Sc. in Computer Science from Jacobs University in Germany in 2009. Her research lies at the intersection of robotics, machine learning, and human-robot interaction: she works on algorithms that enable robots to seamlessly work with, around, and in support of people. She has received awards from Siebel, Dan David, Intel, and Google, and her publications have been best paper finalists at Robotics: Science and Systems, the International Conference on Robotics and Automation, and the International Symposium on Robot and Human Interactive Communication.
Julie A. Shah is an Associate Professor in the Department of Aeronautics and Astronautics at MIT and leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Shah was awarded an NSF CAREER Award in 2014 and has received best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, and the International Symposium on Robotics.
Dr. Schwartz received his Ph.D. in Physiology from the University of Minnesota in 1984. He then went on to a postdoctoral fellowship with Dr. Apostolos Georgopoulos, who was developing the concept of directional tuning and population-based movement representation in the motor cortex. After working in Phoenix and San Diego, he moved to the University of Pittsburgh in 2002. Through his research, Schwartz developed paradigms to explore cortical signals generated during volitional arm and hand movements. This effort has shown that a high-fidelity representation of movement intention can be decoded from the motor cortex, and it enabled technology now being used by paralyzed subjects to operate a high-performance prosthetic arm and hand.
Zhi-Hong Mao is an Associate Professor and William Kepler Whiteford Faculty Fellow in the Departments of Electrical and Computer Engineering and Bioengineering at the University of Pittsburgh, Pittsburgh, PA. His research interests include human-in-the-loop control systems, networked control systems, and neural control and learning. He received dual Bachelor's degrees in automatic control and mathematics from Tsinghua University, Beijing, China, in 1995, an M.Eng. degree in intelligent control and pattern recognition from Tsinghua University in 1998, an S.M. degree in aeronautics and astronautics from the Massachusetts Institute of Technology, Cambridge, in 2000, and a Ph.D. degree in medical engineering and medical physics from the Harvard-MIT Division of Health Sciences and Technology, Cambridge, in 2006. He joined the University of Pittsburgh as an Assistant Professor in 2005 and became an Associate Professor in 2011. He received the Outstanding Educator Award of the Pitt Swanson School of Engineering in 2009, an NSF CAREER Award in 2010, the Andrew P. Sage Best Transactions Paper Award of the IEEE Systems, Man and Cybernetics Society in 2010, and an Outstanding Service Award as Associate Editor of IEEE Transactions on Intelligent Transportation Systems in 2013.
Siddhartha Srinivasa is an Associate Professor at the Robotics Institute at Carnegie Mellon University. He founded the Personal Robotics Lab, leads the Mobility and Manipulation Thrust at the Quality of Life NSF ERC, and co-directs the Manipulation Lab at CMU. Dr. Srinivasa's research focus is on developing perception, planning, and learning algorithms that enable robots to accomplish useful manipulation tasks in dynamic and cluttered indoor environments.
The workshop aims to bring together researchers in the field of collaborative robotics and shared autonomy to discuss planning approaches that explicitly consider interaction with humans. This topic joins the HRI and robotics communities, including researchers working in task planning, motion planning, assistance, risk-aware motion planning, physical interaction, safety, user interfaces, task and motion prediction, and balancing autonomy with human input, among other areas.