Jeremy Scott

email: jscott@csail.mit.edu
office: 32-237
resume/CV

about

I'm a fifth-year graduate student in computer science at MIT, building applications for describing behavior using sketching and direct manipulation. I am working with Randall Davis in the Multimodal Understanding Group in CSAIL. I earned my Bachelor of Applied Science degree in the Engineering Science program at the University of Toronto. For my undergraduate thesis, I explored foot-based interaction techniques working with Khai Truong in the Ubiquitous Computing Research Group.

research

research log

I've started to log my progress in building CodeInk. Check out my research log.

making programming visual

When we learn about algorithms and data structures, or when we think about problems that we intend to solve with code, we often pick up a pen and draw a picture. What if we could move the objects in the picture to demonstrate the algorithm and have that be the way that we 'code'?

I'm building a programming environment with this in mind: make it easier to think about algorithms for inherently visual problems. I have three sub-objectives: (1) make data and computation visual, (2) use direct manipulation of the visualized data as the way to describe steps, and (3) encourage users to abstract their algorithm, so that they think and create at a general level.
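To make sub-objectives (2) and (3) concrete, here is a toy sketch in Python (the language Online Python Tutor targets) of the general idea: record direct manipulations of a visualized list as a timeline of steps, then replay those steps on new data. This is only an illustration under my own assumptions; the names (Demonstration, Move, drag, replay) are hypothetical, not CodeInk's actual API.

    # Toy sketch, not CodeInk itself: record direct-manipulation moves on a
    # list, then replay the recorded steps on other data.
    from dataclasses import dataclass

    @dataclass
    class Move:
        """One concrete manipulation: drag the element at src to position dst."""
        src: int
        dst: int

    class Demonstration:
        def __init__(self, data):
            self.data = list(data)
            self.moves = []                 # timeline of concrete manipulations

        def drag(self, src, dst):
            """The user drags data[src] to position dst; record it, then apply it."""
            self.moves.append(Move(src, dst))
            self.data.insert(dst, self.data.pop(src))

        def replay(self, data):
            """Re-run the recorded steps on new data: the demonstration is the code."""
            data = list(data)
            for m in self.moves:
                data.insert(m.dst, data.pop(m.src))
            return data

    demo = Demonstration([3, 1, 2])
    demo.drag(1, 0)                         # drag the 1 to the front
    demo.drag(2, 1)                         # drag the 2 next to it
    print(demo.data)                        # [1, 2, 3]
    print(demo.replay([6, 4, 5]))           # [4, 5, 6]: the same steps, generalized

The interesting research questions start where this sketch stops: deciding which properties of the demonstration (indices? comparisons? shapes?) should generalize.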

Some of this might sound familiar. I'm building off my work on PhysInk (below). I'm adopting many of Bret Victor's principles of learnable programming and ideas about direct manipulation UIs for visualizing data. The motivation (and some of the code!) behind this work is shared with Philip Guo's work on Online Python Tutor.

physink

For my Master's thesis, I built an application called PhysInk that allows users to quickly demonstrate 2D physical behavior by directly manipulating sketched objects. The user can also sketch constraints (joints, ropes, or springs) on how the objects can move. The canvas is driven by a physics engine, so constraints are enforced and manipulating the objects feels as natural as possible. The system captures the demonstration as a timeline of physical events, which can then be used to find a physically correct version of the demonstrated behavior.
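As a minimal sketch of the timeline idea (an assumed structure, not PhysInk's actual code), the demonstration can be stored as timestamped physical events that a physics engine later tries to reproduce. The event kinds and the matching rule below are placeholders for illustration:

    from dataclasses import dataclass

    @dataclass
    class PhysicsEvent:
        time: float        # seconds into the demonstration
        kind: str          # e.g. "release", "contact", "at_rest"
        bodies: tuple      # ids of the sketched objects involved

    class Timeline:
        def __init__(self):
            self.events = []

        def record(self, event):
            self.events.append(event)

        def matches(self, simulated, tolerance=0.1):
            """Does a simulated run produce the same events, in order, at
            roughly the same times? A stand-in for searching for a
            physically correct version of the demonstration."""
            return (len(simulated) == len(self.events) and
                    all(d.kind == s.kind and d.bodies == s.bodies and
                        abs(d.time - s.time) <= tolerance
                        for d, s in zip(self.events, simulated)))

    demo = Timeline()
    demo.record(PhysicsEvent(0.0, "release", ("ball",)))
    demo.record(PhysicsEvent(0.8, "contact", ("ball", "floor")))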

classes

I was a TA for 6.813/6.831: User Interface Design and Implementation, taught by Rob Miller and Haoqi Zhang. While at MIT I've taken several AI classes (Machine Learning, The Human Intelligence Enterprise, Topics in Computer Vision), as well as classes in databases and the theory of computation. I also took Neil Gershenfeld's How to Make (Almost) Anything - you can see my projects here.

publications

A Direct Manipulation Language for Explaining Algorithms
Scott, J., Guo, P. J., and Davis, R. 2014. A Direct Manipulation Language for Explaining Algorithms. In Proceedings of VL/HCC 2014: The IEEE Symposium on Visual Languages and Human-Centric Computing. (Melbourne, Australia, July 28 - August 1, 2014). PDF

PhysInk: Sketching Physical Behavior
Scott, J. and Davis, R. 2013. PhysInk: Sketching Physical Behavior. In Adjunct Proceedings of UIST 2013: The 26th ACM Symposium on User Interface Software and Technology. (St. Andrews, UK, October 8 - 11, 2013). PDF

Understanding Sketch-and-Speech Descriptions of Machines
Scott, J. 2012. Understanding Sketch-and-Speech Descriptions of Machines. MIT SM Thesis. (Cambridge, MA, USA, September 2012). PDF

Sensing Foot Gestures From The Pocket (at University of Toronto)
Scott, J., Dearman, D., Yatani, K., and Truong, K. N. 2010. Sensing Foot Gestures from the Pocket. In Proceedings of UIST 2010: The 23rd ACM Symposium on User Interface Software and Technology. (New York, NY, USA, October 3 - 6, 2010). [18.4% acceptance rate] PDF

past work

SkinMetrics (in the Artificial Perception Lab at University of Toronto)
Advisor: Parham Aarabi
While working in the Artificial Perception Lab (APL) in the summer of 2007, I developed early versions of an image processing algorithm to analyze skin quality from an image of a face. Using edge detection, the software detected anomalies such as wrinkles or lesions. The algorithm has since been integrated into other APL research and a ModiFace application called SkinMetrics.
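As a rough illustration of that approach (not the SkinMetrics algorithm itself; the thresholds and the density metric are my own placeholders), edge detection can serve as a crude proxy for texture anomalies:

    # Illustrative only: edge density as a stand-in for wrinkle/lesion detection.
    import cv2
    import numpy as np

    def edge_density(face_image_path, low=50, high=150):
        """Fraction of pixels on a Canny edge; more edges on what should be
        smooth skin suggests more anomalies."""
        img = cv2.imread(face_image_path, cv2.IMREAD_GRAYSCALE)
        img = cv2.GaussianBlur(img, (5, 5), 0)    # suppress sensor noise first
        edges = cv2.Canny(img, low, high)         # strong gradients -> candidate anomalies
        return np.count_nonzero(edges) / edges.size

A real system would, at minimum, restrict the measurement to skin pixels and calibrate the thresholds against labeled images.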

Autonomous Inventory and Liquid Level Detection Robot (in AER201, Engineering Science)
In EngSci's AER201 Engineering Design course, I worked with a world-class team of engineers to build an autonomous robot that could detect the presence of oil barrels and liquid levels inside them.


Jeremy Scott (jks@mit.edu)