The problem of enabling a robot to autonomously interact with an outdoor, semi-structured, human-occupied environment poses interesting challenges in perception, control, motion planning, and human-robot interaction. In this talk, I discuss these challenges as they pertain to an autonomous lift truck (forklift) that operates among people in a semi-structured, outdoor warehouse. Taking only high-level directives from a human supervisor, the vehicle safely executes tasks that require interaction with people and objects in an unknown environment. These include detecting and manipulating loaded pallets, both on the ground and on a truck bed, and navigating over uneven terrain amidst static and dynamic obstacles.
This talk focuses on the fundamental aspects of our approach, which include novel insights into human-robot interaction, situational awareness, perception, and planning. I discuss a multi-modal interface through which a supervisor uses a combination of speech and pen-based gestures to communicate with the robot. Because the interface is grounded in a shared model of the world, the system offers a natural means of conveying high-level commands. I then focus on the specific task of pallet engagement, whereby the vehicle detects and manipulates an arbitrary pallet. I describe a LIDAR-based detection strategy that we use to estimate both the geometry of the pallet and that of its supporting structure. I present recent results that demonstrate the end-to-end system operating for several hours on rough terrain, and conclude by discussing our current and future research directions.