Chris Amato

Research Scientist
CSAIL and LIDS, MIT
camato at csail dot mit dot edu




I am a Research Scientist at MIT, working with Leslie Kaelbling and the Learning and Intelligent Systems group in CSAIL as well as Jon How and the Aerospace Controls Lab in LIDS. I have a BA in Clinical Psychology and Philosophy from Tufts University, and a BS in Computer Science and Math and a PhD in Computer Science, both from UMass Amherst. My research interests include Artificial Intelligence, Reasoning Under Uncertainty, Multi-Agent and Multi-Robot Systems, Game Theory, Machine Learning, and Robotics.

My research explores principled solution methods for single- and multi-agent systems with stochasticity and partial observability. As agents are built for more complex environments, engineering solutions by hand becomes very difficult, while methods based on formal models can automatically generate high-quality solutions. Furthermore, as more agents are deployed (e.g., in robot systems and networks), centralization becomes impossible or performs poorly due to communication cost, latency, or noise. Decentralized methods that can not only determine how best to use partial information, but also coordinate while optimizing communication, have strong advantages. My research seeks to develop fundamental theory as well as scalable algorithms that are applicable to real-world systems such as multi-robot navigation and surveillance problems.

Some of my research on using macro-actions (control programs for subproblems) to produce scalable cooperation in multi-agent systems has been featured in an MIT News story (and subsequently picked up by a number of other news outlets). Here is a recent video that shows these ideas working in a multi-robot warehousing domain:



Press:

I'm very interested in applying my work to real-world problems. Here are some press articles about it.

Optimizing communication and behavior for teams of robots:
  • Some of my research on using macro-actions (control programs for subproblems) to produce scalable cooperation in multi-agent systems has been featured in an MIT News story.
  • A high level description of this work discussing search and rescue was published in Government Technology.
  • An IEEE Spectrum article highlights our approach for handling uncertainty, which is critical for real-world problems.
Automated surveillance with security cameras:
  • There is an article at MIT News about some of my recent work on balancing time and quality to quickly track and detect security threats.
  • Another article at Security Info Watch includes more quotes about the research.
Artificial intelligence for computer games:
  • There is a Q&A about some of my work on creating opponents that learn to improve in computer games over at PhaseLeap (no longer available).

Recent and upcoming events:

Our AAMAS paper, Exploiting Separability in Multi-Agent Planning with Continuous-State MDPs, won the Best Paper Award! It can be downloaded here.

We have a combined tutorial and workshop at AAMAS-14 on Multiagent Sequential Decision Making Under Uncertainty. Check out the website here.

We had a pair of tutorials, one on Self-Interested Decision Making in Sequential Multiagent Settings and one on Cooperative Decision Making in Sequential Multiagent Settings at AAMAS-13 with Prashant Doshi, Frans Oliehoek, Zinovi Rabinovich, Matthijs Spaan and Stefan Witwicki. You can see the website here.

I co-organized the workshop on Decision Making in Partially Observable, Uncertain Worlds: Exploring Insights from Multiple Communities at IJCAI-11. For more info, check out the website.

Other links:

I maintain the Dec-POMDP page which contains information about the decentralized partially observable Markov decision process (Dec-POMDP) model for describing multiagent decision making under uncertainty. Check it out for an overview, publications, talks and code for various datasets.
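For readers new to the model, here is a minimal sketch of the Dec-POMDP tuple ⟨I, S, {A_i}, T, R, {Ω_i}, O⟩ in Python: agents, states, per-agent actions and observations, a transition function, a joint observation function, and a single shared reward. The names and the toy two-agent "push together" problem below are purely illustrative, not from any library or benchmark.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class DecPOMDP:
    """Illustrative encoding of the Dec-POMDP tuple <I, S, {A_i}, T, R, {Omega_i}, O>."""
    agents: Tuple[str, ...]                    # I: the set of agents
    states: Tuple[str, ...]                    # S: the set of states
    actions: Dict[str, Tuple[str, ...]]        # A_i: actions available to each agent
    observations: Dict[str, Tuple[str, ...]]   # Omega_i: observations for each agent
    T: Callable[[str, Tuple[str, ...], str], float]                  # P(s' | s, joint action)
    O: Callable[[Tuple[str, ...], str, Tuple[str, ...]], float]      # P(joint obs | joint action, s')
    R: Callable[[str, Tuple[str, ...]], float]                       # shared reward R(s, joint action)

# Toy problem: two agents must "push" simultaneously to move the system
# from "idle" to "done" and earn the (shared) reward.
toy = DecPOMDP(
    agents=("a1", "a2"),
    states=("idle", "done"),
    actions={"a1": ("wait", "push"), "a2": ("wait", "push")},
    observations={"a1": ("none",), "a2": ("none",)},
    # Deterministic transition: reach "done" iff both push (or it was already done).
    T=lambda s, a, s2: 1.0 if (s2 == "done") == (a == ("push", "push") or s == "done") else 0.0,
    # Agents observe nothing informative -- the crux of decentralized control.
    O=lambda a, s2, o: 1.0 if o == ("none", "none") else 0.0,
    R=lambda s, a: 1.0 if s == "idle" and a == ("push", "push") else 0.0,
)
```

The key point the model captures: each agent selects its action from its own observation history alone, so a solution is a set of individual policies that must coordinate without access to the global state or to each other's observations.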

While working at Microsoft Research over the summer, I developed a reinforcement learning framework for the video game Civilization IV. You can download it and use different RL algorithms to have the AI learn to improve its play. Check it out at the MSR website.