Multiagent Planning
My main doctoral work is on multiagent planning, with specific application to video games. The core idea is to build a sidekick agent that can observe the actions of a human player, infer their probable goals, and act to help them. The key insight is that if the agent assumes the human is behaving optimally and expects the same of the agent, then finding what they both ought to do becomes a joint planning task in which both the human and the agent are controlled by a single policy. We can then find the optimal such policy for each possible goal in the world, use it from the agent's perspective to estimate the probability that the human is pursuing each particular goal, and act appropriately to assist them.
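The goal-inference step above can be sketched in a few lines. This is a minimal illustration under my own toy assumptions (a one-dimensional corridor, a softmax "Boltzmann-rational" action model, and negative-distance Q-values), not the actual system: the agent scores each candidate goal by how likely the human's observed actions would be if the human were acting near-optimally toward that goal, and applies Bayes' rule.

```python
import math

def likelihood(state, action, goal, beta=2.0):
    # Softmax action likelihood under goal-specific Q-values; here Q is
    # just the negative distance to the goal after taking the action.
    actions = [-1, +1]
    q = {a: -abs((state + a) - goal) for a in actions}
    z = sum(math.exp(beta * q[a]) for a in actions)
    return math.exp(beta * q[action]) / z

def goal_posterior(trajectory, goals):
    # Bayesian update: start from a uniform prior over goals and multiply
    # in the likelihood of each observed (state, action) pair.
    post = {g: 1.0 / len(goals) for g in goals}
    for state, action in trajectory:
        for g in goals:
            post[g] *= likelihood(state, action, g)
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

# A human at position 2 steps right twice: goal 4 becomes far more
# probable than goal 0, so the sidekick should commit to helping with 4.
posterior = goal_posterior([(2, +1), (3, +1)], goals=[0, 4])
```

In a real domain the Q-values would come from solving the joint planning problem for each goal, but the Bayesian bookkeeping is exactly this.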
This is still a provably hard problem, so we have to be content with finding only approximately optimal policies. I am investigating a variety of such approximations, with a focus on applying Monte Carlo tree search to combat the very large state spaces typical of these domains, and on incorporating communicative actions from both the human and the sidekick.
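For concreteness, here is a bare-bones sketch of the kind of Monte Carlo tree search (UCT) that makes very large state spaces tractable: instead of enumerating states, it grows a tree from the current state using UCB1 to balance exploration and exploitation, with random rollouts estimating the value of new states. The corridor domain and all names here are my own illustrative assumptions.

```python
import math
import random

def uct_plan(root, actions, step, horizon=8, iters=2000, c=1.4):
    # step(s, a) -> (next_state, reward). Returns the root action with the
    # most visits after `iters` simulations.
    stats = {}  # (state, action) -> [visit count, total return]
    ns = {}     # state -> visit count

    def rollout(s, depth):
        # Estimate the value of a newly expanded state with a random policy.
        total = 0.0
        for _ in range(depth, horizon):
            s, r = step(s, random.choice(actions))
            total += r
        return total

    def simulate(s, depth):
        if depth == horizon:
            return 0.0
        if s not in ns:  # expand a new leaf and evaluate it by rollout
            ns[s] = 0
            for a in actions:
                stats[(s, a)] = [0, 0.0]
            return rollout(s, depth)
        # UCB1 selection: untried actions first, then mean return + bonus.
        a = max(actions, key=lambda a: float('inf') if stats[(s, a)][0] == 0
                else stats[(s, a)][1] / stats[(s, a)][0]
                     + c * math.sqrt(math.log(ns[s]) / stats[(s, a)][0]))
        s2, r = step(s, a)
        ret = r + simulate(s2, depth + 1)
        ns[s] += 1
        stats[(s, a)][0] += 1
        stats[(s, a)][1] += ret
        return ret

    for _ in range(iters):
        simulate(root, 0)
    return max(actions, key=lambda a: stats[(root, a)][0])

# Toy corridor: reward accrues only at position 5, so the planner
# should choose to move right from position 2.
def corridor_step(s, a):
    s2 = max(0, min(5, s + a))
    return s2, (1.0 if s2 == 5 else 0.0)

best = uct_plan(root=2, actions=[-1, +1], step=corridor_step)
```

The appeal for game domains is that the tree only ever touches states reachable from the current one, so the cost scales with the search budget rather than with the size of the full state space.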
I am working with the MIT Game Lab, formerly called GAMBIT, on this project. For an example of the kind of game that can be made with this sort of technology, play Dearth, a game that was developed during the 2009 GAMBIT summer program.
Cognitive Models of Helping and Hindering
Related to the multiagent planning work, I worked with Tomer Ullman, Chris Baker, Noah Goodman, and Josh Tenenbaum from MIT's Cocosci group to develop an ideal observer model of human helping and hindering judgements using a combination of Bayesian inference, inverse planning, and value iteration. We compared the model's predictions with experimental data on people's actual helping and hindering judgements and found that the two matched quite well in practice.
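The value iteration component mentioned above is standard dynamic programming: it computes, for a given goal, how valuable each state is, and those goal-conditioned values are what inverse planning then uses to explain an observed agent's behaviour. A minimal sketch, assuming a toy deterministic corridor of my own invention:

```python
def value_iteration(states, actions, step, gamma=0.95, tol=1e-6):
    # step(s, a) -> (next_state, reward); deterministic dynamics for
    # simplicity. Repeatedly apply the Bellman optimality backup until
    # the largest per-sweep change falls below `tol`.
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(r + gamma * V[s2]
                       for s2, r in (step(s, a) for a in actions))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    return V

# Toy corridor: state 4 is an absorbing goal, every other move costs 1,
# so V encodes the (discounted) cost-to-goal from each state.
def corridor(s, a):
    if s == 4:
        return 4, 0.0
    return max(0, min(4, s + a)), -1.0

V = value_iteration(states=range(5), actions=[-1, +1], step=corridor)
```

With values like these in hand for each candidate goal, inverse planning runs the logic backwards: given actions, which goal's value function best rationalizes them?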
Network Analysis
My master's thesis, supervised by Whitman Richards, was on analyzing the graphical structure of networks by measuring features of the local neighbourhoods centered on each node in the graph. I used the distributions of these features to cluster graphs, and found that some sets of graphs drawn from social data form clusters separate from graphs drawn from other kinds of network data. Network data from my master's thesis can be found here.
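To give a flavour of the approach, here is a minimal sketch (the particular features and summary statistics are illustrative assumptions, not the exact ones from the thesis): compute a feature for each node's local neighbourhood, such as its degree and the clustering coefficient among its neighbours, then summarise the distribution of those features into a per-graph signature that can be fed to any standard clustering algorithm.

```python
def local_features(adj):
    # adj: node -> set of neighbours. For each node, record its degree and
    # the fraction of possible edges present among its neighbours
    # (the local clustering coefficient).
    feats = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            feats.append((k, 0.0))
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        feats.append((k, 2.0 * links / (k * (k - 1))))
    return feats

def graph_signature(adj):
    # Summarise the feature distribution (here just the means) so graphs
    # of different sizes live in a common space and can be clustered.
    feats = local_features(adj)
    n = len(feats)
    return (sum(d for d, _ in feats) / n, sum(c for _, c in feats) / n)

# A triangle is maximally clustered; a three-node path has no triangles,
# so the two graphs land at distinct points in feature space.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

Social networks tend to have high local clustering, which is one reason distributions of neighbourhood features can separate them from other kinds of network data.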
Spacewar!
Along with GAMBIT's director Philip Tan, I led a group of MIT undergraduates in reimplementing one of the first video games, Spacewar!, on an Arduino with a Gameduino shield and custom control system. Aside from project management, I was responsible for learning the ancient PDP-1 assembly language and reverse engineering the techniques that Spacewar!'s creators used in order to be as faithful as possible to the original. This was great fun and revealed a lot of clever tricks used by some of the first game programmers to squeeze as much performance as possible out of their machines. You can find my notes on GAMBIT's blog. We presented our reimplementation to the public at the MIT Museum to celebrate the 50th anniversary of Spacewar!'s creation.
Other Projects
I am interested in HTML5 game programming and have developed a basic vector graphics engine for the game engine Akihabara. Along with Roger Grosse, I developed a static analysis system for detecting some kinds of optimizable structures in probabilistic programs. I worked with Fiery Cushman to develop a model of the coevolution of punishment and prosociality.
In my pre-MIT research life I worked at Sydney University's Key Centre of Design Computing and Cognition (now DesignLab), where I developed smart room technology and built tools for cross-disciplinary design work in virtual worlds.