Learning Rich, Tractable Models of the Real World

Supported through the NTT-MIT Research Collaboration

Leslie Pack Kaelbling, MIT Principal Investigator

Shigeru Katagiri, NTT Principal Investigator


From a robot's perspective, the everyday world of a household or a city street is exceedingly complex and dynamic. For robots to operate effectively in such domains, they must learn models of how the world works and use those models to predict the effects of their actions. In traditional AI, such models were represented in first-order logic and related languages; they captured none of the world's inherent uncertainty and were not connected to real perceptual systems. More recent AI techniques can learn models directly from perceptual data, but they are representationally impoverished: they cannot refer to objects as such, or make relational generalizations of the form "If object A is on object B, then if I move object B, object A will probably move too."
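
To make the relational-generalization idea concrete, here is a minimal Python sketch of a probabilistic relational rule for the "A on B" example above. The class name, the probability value, and the object names are illustrative assumptions, not the project's actual representation.

    import random

    class OnRule:
        """Probabilistic relational rule: if on(A, B) holds and B is moved,
        then A moves too with probability p_follows. The rule refers to
        objects by name and generalizes over any pair satisfying on(A, B)."""

        def __init__(self, p_follows=0.9):
            self.p_follows = p_follows

        def apply(self, on_relations, moved_object, rng=None):
            """Return the set of objects that end up moving."""
            rng = rng or random.Random()
            moving = {moved_object}
            for above, below in on_relations:
                if below == moved_object and rng.random() < self.p_follows:
                    moving.add(above)
            return moving

    # Example: the cup sits on the tray, so moving the tray usually moves the cup.
    rule = OnRule(p_follows=0.9)
    print(rule.apply(on_relations={("cup", "tray")}, moved_object="tray"))

Because the rule is stated over object variables rather than raw sensor values, one learned rule applies to any pair of objects in the on relation.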

We are building a robotic system with an arm and a camera (currently in simulation) that learns relational models of its environment from perceptual data. The models will capture the inherent uncertainty of the environment and will support planning via sampling and simulation.
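
As a rough illustration of planning via sampling and simulation, the sketch below scores each candidate action by Monte Carlo rollouts through a stochastic transition model and picks the action whose rollouts reach the goal most often. The hand-written model, state names, and actions are hypothetical stand-ins for what the system would learn.

    import random

    # model[(state, action)] -> (possible next states, their probabilities)
    model = {
        ("cup_on_tray", "move_tray"): (["cup_moved", "cup_left_behind"], [0.9, 0.1]),
        ("cup_on_tray", "wait"): (["cup_on_tray"], [1.0]),
        ("cup_moved", "move_tray"): (["cup_moved"], [1.0]),
        ("cup_moved", "wait"): (["cup_moved"], [1.0]),
        ("cup_left_behind", "move_tray"): (["cup_left_behind"], [1.0]),
        ("cup_left_behind", "wait"): (["cup_left_behind"], [1.0]),
    }
    actions = ["move_tray", "wait"]

    def goal(state):
        return state == "cup_moved"

    def simulate(state, action, rng):
        """Sample one successor state from the stochastic model."""
        outcomes, probs = model[(state, action)]
        return rng.choices(outcomes, weights=probs)[0]

    def plan(state, horizon=2, n_samples=200, seed=0):
        """Pick the first action whose sampled rollouts reach the goal most often."""
        rng = random.Random(seed)

        def success_count(first_action):
            wins = 0
            for _ in range(n_samples):
                s = simulate(state, first_action, rng)
                for _ in range(horizon - 1):
                    if goal(s):
                        break
                    s = simulate(s, rng.choice(actions), rng)
                wins += goal(s)
            return wins

        return max(actions, key=success_count)

    print(plan("cup_on_tray"))  # expected: "move_tray"

Because the planner only draws samples from the model, the same procedure works unchanged no matter how complex the learned distribution over outcomes becomes.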


Reading Group

In Fall 1999, we conducted a reading group on higher-order representations in robotics.