Our team of researchers at MIT and Northeastern
University is developing wearable devices for blind and low-vision
people. These devices combine sensing, computation and interaction to
provide the wearer with timely, task-appropriate information about
the surroundings – the kind of information that sighted people get
from their visual systems, typically without conscious effort.
We are grateful to
the Andrea Bocelli Foundation and to the MIT EECS
Super-UROP program
for their support of our research.
We are focusing first on three core capabilities:
Safe mobility and navigation:
Current focus: Where is the safe walking surface? Where are the trip and
collision hazards?
In future: Where am I? Which way is it to my destination? When is the
next turn, landmark or other salient environmental feature coming up?
What type of place am I in, or near? Do my surroundings include text,
and if so, what does it say? Where is the affordance (e.g. kiosk,
concierge desk, elevator lobby, water fountain) that I seek? What
transit options (bus, taxi, train, etc.) are nearby or arriving?
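To make the walking-surface question concrete, here is a minimal sketch that assumes a hypothetical wearable depth camera delivering a per-pixel depth map in meters; the sector layout and thresholds are illustrative assumptions, not project parameters.

```python
# Illustrative sketch only: flag close-range structure that could be a trip or
# collision hazard, given a depth image from a hypothetical wearable depth
# camera. Depth is assumed to be a 2-D numpy array in meters; the thresholds
# are made-up values, not project parameters.
import numpy as np

def collision_hazard_sectors(depth_m, hazard_range_m=1.5, min_pixels=500):
    """Split the frame into left/center/right sectors and return the names of
    sectors that contain a significant amount of close-range structure."""
    height, width = depth_m.shape
    sectors = {
        "left": slice(0, width // 3),
        "center": slice(width // 3, 2 * width // 3),
        "right": slice(2 * width // 3, width),
    }
    flagged = []
    for name, cols in sectors.items():
        region = depth_m[:, cols]
        # Zero depth usually means "no reading", so exclude it.
        close = np.count_nonzero((region > 0) & (region < hazard_range_m))
        if close >= min_pixels:
            flagged.append(name)
    return flagged

# Example with synthetic data: an obstacle about 1 m away on the right side.
demo = np.full((240, 320), 4.0)
demo[:, 220:] = 1.0
print(collision_hazard_sectors(demo))   # ['right']
```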
Person detection & identification:
Current focus: Are there people nearby or approaching? If so, where is
each person, and what is his/her identity?
In future: What is the person's facial expression, body stance, and body
language? What kind of clothing is s/he wearing? What is s/he doing?
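As a rough illustration of the near-term "where is each person" question (not the project's actual pipeline), the sketch below runs OpenCV's stock HOG pedestrian detector on a single camera frame and converts each detection into an approximate bearing; the horizontal field of view is an assumed parameter, and identity is not addressed here.

```python
# Illustrative sketch only: detect people in one camera frame with OpenCV's
# stock HOG + linear-SVM pedestrian detector and report a rough bearing for
# each detection. Not the project's actual system; the horizontal field of
# view is an assumed parameter.
import cv2

def detect_people(frame, horizontal_fov_deg=60.0):
    """Return (bounding_box, approx_bearing_deg) pairs; 0 degrees is straight
    ahead along the camera's optical axis."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

    frame_width = frame.shape[1]
    people = []
    for (x, y, w, h) in boxes:
        center_x = x + w / 2.0
        # Map horizontal pixel offset to an approximate bearing in degrees.
        bearing = (center_x / frame_width - 0.5) * horizontal_fov_deg
        people.append(((x, y, w, h), bearing))
    return people

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)   # wearable or built-in camera
    ok, frame = capture.read()
    capture.release()
    if ok:
        for box, bearing in detect_people(frame):
            print(f"person at {box}, roughly {bearing:+.0f} degrees off-axis")
```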
Tactile/aural interface:
Current focus: How can information be conveyed non-visually to the user,
for example through a MEMS tactile display?
In future: How can the system effectively engage in spoken dialogue with
the user, so that the user can specify his/her goals to the system, with
the system requesting clarification when needed? When should the system
deliver information to the user (e.g., during breaks in conversation)?
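As one illustration of non-visual output (purely hypothetical: the actual MEMS display, its resolution, and its driver interface are not specified here), the sketch below encodes a direction cue as a raised-pin pattern on an assumed 8x8 taxel grid.

```python
# Illustrative sketch only: encode a direction cue as a raised-pin pattern for
# a hypothetical 8x8 MEMS tactile (taxel) array. The display hardware, its
# resolution, and its driver interface are all assumptions.
import numpy as np

def direction_to_taxel_pattern(bearing_deg, grid_size=8, fov_deg=60.0):
    """Raise one column of pins whose horizontal position corresponds to the
    bearing of the object of interest (0 degrees = straight ahead)."""
    pattern = np.zeros((grid_size, grid_size), dtype=bool)
    # Clamp the bearing to the displayable field of view, then map it to a column.
    bearing = max(-fov_deg / 2.0, min(fov_deg / 2.0, bearing_deg))
    col = int(round((bearing / fov_deg + 0.5) * (grid_size - 1)))
    pattern[:, col] = True
    return pattern

# Example: something 15 degrees to the right raises a column right of center.
print(direction_to_taxel_pattern(+15.0).astype(int))
```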
Team members include:
Please support our work with a donation!