"A Voice-Commandable Autonomous Forklift for Warehouse Operations in Semi-Structured Environments"

Matthew Walter
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology

In July 2009, I visited the Distributed Intelligent Systems and Algorithms Laboratory at the Ecole Polytechnique Fédérale de Lausanne. I presented my recent work on the development of a commandable autonomous forklift as part of the MIT Agile Robotics effort. The talk specifically focused on our approaches to the perception problems related to pallet manipulation, namely the methods by which we detect and estimate the pose of unknown pallets and trucks.

Abstract

The problem of enabling a robot to autonomously interact with an outdoor, semi-structured, human-occupied environment poses interesting challenges in the areas of perception, control, motion planning, and human-robot interaction. In this talk, I discuss these challenges as they pertain to an autonomous lift truck (forklift) that operates among people in a semi-structured, outdoor warehouse. Taking only high-level directives from a human supervisor, the vehicle safely executes tasks that require interaction with people and objects in an unknown environment. These include detecting and manipulating loaded pallets, both on the ground and on a truck bed, as well as navigating over uneven terrain amidst static and dynamic obstacles.

This talk focuses on the fundamental aspects of our approach, which include novel insights into human-robot interaction, situational awareness, perception, and planning. I discuss a multi-modal interface through which a supervisor uses a combination of speech and pen-based gestures to communicate with the robot. Because the interface is grounded in a shared model of the world, the supervisor can convey high-level commands naturally and the system can understand them. I then focus on the specific task of pallet engagement, whereby the vehicle detects and manipulates an arbitrary pallet. I describe a LIDAR-based detection strategy that we use to estimate both the geometry of the pallet and that of its supporting structure. I present recent results that demonstrate the end-to-end system operating for several hours on rough terrain and conclude by discussing our current and future research directions.
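
As a rough, hypothetical illustration of what grounding a spoken directive in a shared world model can look like, the Python sketch below maps a recognized utterance to a task whose target is a named location. The names, structure, and matching logic are illustrative assumptions, not our implementation.

# Hypothetical sketch: grounding an utterance such as "Bot, go to Issue"
# against a shared world model of named locations (names are illustrative).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    action: str   # e.g. "goto"
    target: str   # a named location in the shared world model

# Shared world model: zones the supervisor can refer to by name.
WORLD_MODEL = {"receiving": (10.0, 4.0), "issue": (42.0, -7.0), "storage": (25.0, 18.0)}

def ground_command(utterance: str) -> Optional[Task]:
    """Map a spoken directive to a task grounded in the shared world model."""
    words = utterance.lower().replace(",", "").split()
    if "go" in words:
        for name in WORLD_MODEL:
            if name in words:
                return Task(action="goto", target=name)
    return None

print(ground_command("Bot, go to Issue"))  # Task(action='goto', target='issue')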

Slides

Presentation slides (see below for links to referenced movies)

pdf (14MB)


Creative Commons License

Media

(2009_06_11_pallet_dropoff_truck)

[mp4 (h264, 22MB)]

This video demonstrates the process of picking up a pallet from the ground and placing it on a truck. The supervisor commands the forklift to pick up the pallet by circling it in an image taken from the bot's forward-facing camera. The supervisor then directs the bot to bring the pallet to Issue, where a truck is waiting, via a spoken command ("Bot, go to Issue"). He then commands the forklift to place the pallet on the truck bed by circling the desired placement location in a forward-facing camera image.
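
One way to picture how a circling gesture can seed detection is to bound the stroke's pixel coordinates into a region of interest in the camera image. The function and coordinate conventions below are assumptions for illustration only, not the system's actual interface code.

import numpy as np

def stroke_to_roi(stroke_px, image_shape):
    """Convert a circling pen gesture (pixel coordinates drawn on the
    forward-facing camera image) into an axis-aligned region of interest
    that could seed a pallet detector. Illustrative only."""
    pts = np.asarray(stroke_px, dtype=float)          # N x 2 array of (u, v)
    h, w = image_shape
    u_min, v_min = np.clip(pts.min(axis=0), 0, [w - 1, h - 1])
    u_max, v_max = np.clip(pts.max(axis=0), 0, [w - 1, h - 1])
    return int(u_min), int(v_min), int(u_max), int(v_max)

# A rough circle drawn around a pallet in a 480x640 camera image.
stroke = [(300, 250), (360, 240), (390, 280), (350, 320), (295, 300)]
print(stroke_to_roi(stroke, (480, 640)))   # (295, 240, 390, 320)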

(2009_06_10_truck_pickup_viewer)

[mp4 (h264, 24MB)]

A visualization of the pallet detection process, showing the different steps of the hierarchical classifier. Having been commanded to pick up an a priori unknown pallet from a truck, the bot searches for the pallet by scanning with a tine-mounted LIDAR. Once the pallet has been found, the bot drives toward it and then reacquires the pallet prior to insertion.
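
As a loose intuition for the kind of cue the tine-mounted LIDAR provides, the face of a pallet appears as a roughly constant range profile interrupted by two deeper runs at the insertion slots. The sketch below only illustrates that intuition; the thresholds and structure are assumptions, not the hierarchical classifier described in the talk.

import numpy as np

def find_slot_gaps(ranges, jump=0.15, min_width=3):
    """Find candidate pallet slots in a single horizontal LIDAR scan:
    contiguous runs of returns noticeably deeper than the median face range.
    Thresholds are illustrative. Returns (start_index, end_index) pairs."""
    ranges = np.asarray(ranges, dtype=float)
    face_range = np.median(ranges)
    deeper = ranges > face_range + jump        # returns behind the face plane
    gaps, start = [], None
    for i, d in enumerate(deeper):
        if d and start is None:
            start = i
        elif not d and start is not None:
            if i - start >= min_width:
                gaps.append((start, i - 1))
            start = None
    if start is not None and len(deeper) - start >= min_width:
        gaps.append((start, len(deeper) - 1))
    return gaps

# Synthetic face at ~2.0 m with two slots ~0.4 m deeper.
scan = np.full(40, 2.0)
scan[8:14] = 2.4
scan[26:32] = 2.4
print(find_slot_gaps(scan))   # [(8, 13), (26, 31)]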


(2009_06_10_truck_dropoff_viewer)

[mp4 (h264, 17MB)]

A visualization of the truck detection process, showing the outcome of the hierarchical truck classifier. The system uses scans from the two vertically-scanning LIDARs mounted to the sides of the carriage to infer the pose of the a priori unknown truck. Consistent scans yield an estimate of the height and yaw of the truck as well as its distance from the forklift.
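
To illustrate the geometry only: once each vertically-scanning LIDAR yields a bed-edge point in the vehicle frame, the height, yaw, and distance follow directly. This is not the hierarchical classifier itself, and the edge-extraction step is assumed.

import math

def truck_pose_from_edges(left_edge, right_edge):
    """Estimate bed height, yaw, and distance from the two bed-edge points
    recovered by the vertically-scanning LIDARs on the carriage. Points are
    (x, y, z) in the vehicle frame: x forward, y left, z up. A minimal
    geometric sketch, not the deployed estimator."""
    (xl, yl, zl), (xr, yr, zr) = left_edge, right_edge
    height = 0.5 * (zl + zr)              # bed height above the ground plane
    distance = 0.5 * (xl + xr)            # range to the bed edge
    yaw = math.atan2(xl - xr, yl - yr)    # edge orientation w.r.t. the carriage
    return height, yaw, distance

# Edge points from the left and right scans (metres) -> (1.24, ~0.095 rad, 3.1)
print(truck_pose_from_edges((3.2, 1.1, 1.25), (3.0, -1.0, 1.23)))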

(2009_06_10_truck_pickup)

[mp4 (h264, 17MB)]

This video shows the forklift picking up a pallet from a flatbed truck. As in the ground pickup case, the supervisor commands the bot by circling the pallet in an image from the bot's forward-facing camera. The forklift then autonomously engages the pallet, detecting and localizing the pallet and truck, both of which were unknown a priori.


(2009_06_09_summon_truck_placement)

[mp4 (h264, 20MB)]

This video demonstrates an "end-to-end" scenario in which the bot, carrying a loaded pallet, is directed to Issue via a spoken command and subsequently requested to place the pallet on a flatbed truck. A standard forklift operator commands the bot.

Last Modified: February 21, 2013
