"Closed-Loop Pallet Engagement in an Unstructured Environment"

Matthew Walter
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology

The following documents our presentation of pallet engagement research at the 2010 Workshop on Mobile Manipulation, held at ICRA in Anchorage, Alaska. The talk described our algorithms for pallet detection and manipulation, developed as part of our robotic forklift project.

Karaman, S., Walter, M.R., Frazzoli, E., and Teller, S., Closed-loop Pallet Engagement in an Unstructured Environment. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) Workshop on Mobile Manipulation, Anchorage, Alaska, May 2010.
[bibtex] [pdf]

Abstract

In this talk, I consider the problem of autonomous manipulation of a priori unknown palletized cargo with a robotic lift truck. More specifically, I describe coupled perception and control algorithms that enable the vehicle to engage and drop off loaded pallets relative to locations on the ground or arbitrary truck beds. I present an estimation framework that, despite little prior knowledge of the objects with which the vehicle is to interact, utilizes a series of classifiers to infer the objects' structure and pose from individual LIDAR scans. The different classifiers share a low-level shape estimation algorithm that uses a linear program to robustly segment input data and generate a set of weak candidate features. I present and analyze the performance of the segmentation algorithm and subsequently describe its role in our estimation framework. I then evaluate the performance of the motion controller that, given an estimate of a pallet's pose, we employ to safely engage the pallet. I conclude with a validation of our algorithms on a set of real-world pallet and truck interactions.
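The abstract's segmentation LP is not reproduced here, but the general idea of robust fitting via linear programming can be illustrated with a minimal sketch: least-absolute-deviation (L1) line fitting, which is expressible as an LP and is far less sensitive to outlier LIDAR returns than least squares. The function name `l1_line_fit` and the use of SciPy's `linprog` are my own illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_line_fit(x, y):
    """Fit y ~ a*x + b by minimizing the sum of absolute residuals.

    This is cast as a linear program over variables [a, b, t_1..t_n],
    where each slack t_i bounds |y_i - a*x_i - b| from above.
    """
    n = len(x)
    # Objective: minimize sum of slacks t_i (a and b have zero cost).
    c = np.concatenate([[0.0, 0.0], np.ones(n)])
    A_ub = np.zeros((2 * n, 2 + n))
    b_ub = np.zeros(2 * n)
    for i in range(n):
        # (y_i - a*x_i - b) <= t_i  =>  -a*x_i - b - t_i <= -y_i
        A_ub[2 * i, 0] = -x[i]
        A_ub[2 * i, 1] = -1.0
        A_ub[2 * i, 2 + i] = -1.0
        b_ub[2 * i] = -y[i]
        # -(y_i - a*x_i - b) <= t_i  =>  a*x_i + b - t_i <= y_i
        A_ub[2 * i + 1, 0] = x[i]
        A_ub[2 * i + 1, 1] = 1.0
        A_ub[2 * i + 1, 2 + i] = -1.0
        b_ub[2 * i + 1] = y[i]
    # a and b are free; slacks are nonnegative.
    bounds = [(None, None), (None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0], res.x[1]  # slope, intercept
```

With five collinear points and one gross outlier, the L1 fit recovers the underlying line, whereas a least-squares fit would be pulled toward the outlier; this robustness is what makes LP-based segmentation attractive for noisy range data.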

Slides

Presentation slides (see below for links to referenced movies)

pdf (16MB)


Creative Commons License

Media

(2009_11_30_agile_short.mp4)

[mp4 (h264, 67MB)]

This video demonstrates the multimodal interaction mechanisms whereby a human supervisor conveys task-level commands to the robot via a hand-held tablet. These task-level commands include: directing the robot to pick up a desired pallet by circling it in an image from one of the robot's cameras; summoning the robot to a particular destination by speaking to the tablet; and directing the robot to place a pallet by circling the desired location on the ground or a truck.

(2009_06_10_truck_pickup_viewer)

[mp4 (h264, 24MB)]

A visualization of the pallet detection process, showing the different steps of the hierarchical classifier. Having been commanded to pick up an a priori unknown pallet from a truck, the robot searches for the pallet by scanning with a tine-mounted LIDAR. Once the pallet has been found, the robot drives toward it and then reacquires the pallet prior to insertion.


(2010_02_21_unmanned)

[mp4 (h264, 61MB)]

A video that shows the forklift operating unmanned, picking up, transporting, and placing pallets as directed by the user via the tablet interface.

Last Modified: February 21, 2013
