This is a brief overview of various teaching activities.
Currently, I am teaching 'Scientific Computing' in the Maastricht Science Program of Maastricht University.
- The Scientific Computing Webpage
- Scientific Computing 2012. That year, the course was organized differently and went into more technical detail on certain topics; I'm leaving the slides up for interested students.
I have taught:
- Decision making under uncertainty. Jointly with Matthijs Spaan at EASSS'12.
I am currently preparing a number of tutorials:
- Decision making under uncertainty, again at EASSS'13, this time at KCL. Jointly with Matthijs Spaan.
- Cooperative Decision Making in Sequential Multiagent Settings at AAMAS '13. Jointly with Christopher Amato, Prashant Doshi, Zinovi Rabinovich, Matthijs Spaan, & Stefan Witwicki.
- Self-Interested Decision Making in Sequential Multiagent Settings at AAMAS '13. Jointly with Christopher Amato, Prashant Doshi, Zinovi Rabinovich, Matthijs Spaan, & Stefan Witwicki.
I have given a number of guest lectures:
- Advanced Topics in Autonomous Agents (UvA). Decentralized POMDPs. March 18, 2013. download (pdf).
- 6.882: Planning and Decision Making (MIT). Decision Making for Cooperative Agents: Multiagent MDPs, Decentralized MDPs & POMDPs. November 23, 2010. download (pdf).
- Multiagent systems and distributed AI (MASDAI), UvA, 2008 and 2009. download (pdf).
- Decision making in intelligent systems (DMIS), UvA, April 14, 2008 and May 4, 2009. download (pdf).
I am involved in the supervision of various students at Maastricht University, MIT and UvA.
I am looking for motivated master's students for the following projects:
Formal Decision Making with Practical Communication
Formal models of multiagent decision making provide a basis for finding optimal policies. However, current formal models make impractical assumptions with respect to communication: for instance, they assume full communication of all private sensor data, or the existence of abstract communication utterances whose optimal meaning must be found during planning. In practice, it is often relatively simple for a human designer to anticipate in which situations communication can be helpful. Therefore, in this project we will develop formal models that incorporate communication with fixed semantics. Such models will allow the agents to coordinate better without sharing all their private information or further complicating the planning process.
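To give a feel for the idea, here is a minimal sketch of fixed-semantics communication. The states, the predicate, and the policies below are illustrative assumptions, not part of any specific model from the project: the point is only that the message's meaning is fixed by the designer, so it need not be optimized during planning and raw sensor data need not be shared.

```python
# Fixed semantics: the message is a designer-specified predicate of the
# sender's private observation, not a free symbol whose meaning must be
# discovered during planning.
def message(observation):
    """Send 'blocked' iff the sender's local sensor reads an obstacle."""
    return "blocked" if observation == "obstacle" else "clear"

# Because the meaning of each message is known at design time, the
# receiver's policy can condition on the message directly, without
# access to the sender's raw sensor data.
RECEIVER_POLICY = {
    "blocked": "take_detour",
    "clear": "go_straight",
}

def receiver_act(msg):
    return RECEIVER_POLICY[msg]
```

Planning then only has to decide *when* to send a message, not what it should mean.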
Learning Recursive Models of Other Agents
When an intelligent agent (e.g., a robot or software agent) is sent out into the world to accomplish a task, it may need to interact with other agents whose true intentions and reasoning capabilities are unknown. Many approaches to such interactions (e.g., in multiagent reinforcement learning) assume that the behavior of another agent can be captured, through repeated interaction, as a simple distribution over its actions. However, this has two disadvantages: 1) it requires many interactions to learn such a model, and 2) the model may simply be too simplistic to predict the other agent, which may actually do more sophisticated reasoning!
In this project, we will try to learn more sophisticated models of other agents. We will build on recursive models (such as the 'recursive modeling method' or 'interactive POMDPs') that express the belief of our agent recursively: our agent believes that the other agent believes that ..., etc. While such hierarchies of beliefs can extend indefinitely, research has indicated that humans typically reason up to two levels deep. Other research has provided the machinery to decide what to do given a particular hierarchy of beliefs, but where this hierarchy comes from is an open question. To address it, we will aim to incrementally construct a finite-depth belief hierarchy that explains the behavior of another agent, deepening the hierarchy when the current model does not suffice.
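The flavor of such finite-depth recursion can be sketched in a few lines. The game and payoffs below are illustrative assumptions (a symmetric two-action coordination game), not the project's actual setting: a level-0 model treats the other agent as a fixed prior distribution, and a level-k model assumes the other agent best-responds to its level-(k-1) model of us.

```python
import numpy as np

# Payoff for our agent: PAYOFF[our_action, other_action].
# A coordination game: we are rewarded for matching the other agent.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 1.0]])

def predict(level, prior=np.array([0.5, 0.5])):
    """Predicted action distribution of the *other* agent at a given depth.

    Level 0: the other agent is modeled as the fixed prior distribution.
    Level k: the other agent best-responds to its level-(k-1) model of us.
    """
    if level == 0:
        return prior
    # The other agent's model of our behavior, one level down:
    our_predicted = predict(level - 1, prior)
    # It best-responds (the game is symmetric, so it uses the same payoffs):
    values = PAYOFF @ our_predicted
    dist = np.zeros(2)
    dist[np.argmax(values)] = 1.0
    return dist

def best_response(level):
    """Our agent's best action given a level-`level` model of the other."""
    return int(np.argmax(PAYOFF @ predict(level)))
```

The project's question is then where to stop: one would deepen `level` only when the current model fails to explain the observed behavior.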
Decentralized Decision Making for Traffic Light Control
Making decisions in distributed systems is an important research problem. In this project we will use the Green Light District traffic simulator to learn good policies for traffic lights. A number of previous approaches exist based on multiagent reinforcement learning: these methods simultaneously learn a model of the environment as well as what decisions to take. However, these methods either assume no coordination between traffic lights (which may lead to sub-optimal behavior), or assume that the lights can communicate extensively at every stage (which may be impossible in practice). In this project, we will aim to derive policies that are maximally coordinated but do not require communication during execution. Additionally, there is freedom to explore other directions, such as the use of state-of-the-art methods to identify good features, or comparing the use of communication to coordinate actions versus communicating state information.
M. Wiering, J. Vreeken, J. van Veenen, and A. Koopman. Simulation and optimization of traffic in a city. In IEEE Intelligent Vehicles Symposium (IV'04), 2004.
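As a starting point, the uncoordinated baseline the project would compare against is plain independent Q-learning per intersection. The toy environment below (a single intersection with discretized queue lengths and made-up dynamics) is an illustrative assumption, not the Green Light District simulator itself.

```python
import random

N_QUEUE_LEVELS = 4   # discretized queue length per direction: 0..3
ACTIONS = (0, 1)     # 0 = green for north-south, 1 = green for east-west

def step(state, action, rng):
    """Toy dynamics: the served direction shrinks, the other may grow."""
    ns, ew = state
    if action == 0:
        ns = max(0, ns - 1)
        ew = min(N_QUEUE_LEVELS - 1, ew + rng.choice((0, 1)))
    else:
        ew = max(0, ew - 1)
        ns = min(N_QUEUE_LEVELS - 1, ns + rng.choice((0, 1)))
    reward = -(ns + ew)   # penalize total queue length
    return (ns, ew), reward

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {(ns, ew): [0.0, 0.0]
         for ns in range(N_QUEUE_LEVELS) for ew in range(N_QUEUE_LEVELS)}
    for _ in range(episodes):
        state = (rng.randrange(N_QUEUE_LEVELS), rng.randrange(N_QUEUE_LEVELS))
        for _ in range(50):
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[state][a])
            nxt, reward = step(state, action, rng)
            # Standard one-step Q-learning update:
            Q[state][action] += alpha * (
                reward + gamma * max(Q[nxt]) - Q[state][action])
            state = nxt
    return Q
```

Running one such learner per intersection gives the "no coordination" baseline; the project then asks how to obtain coordinated joint policies without requiring communication at execution time.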
I'm interested in supervising any thesis in the area of formal decision making with connections to machine learning/multiagent systems/etc. If you have an interesting idea, feel free to email me.