June 6, 2008

¶ Urban Challenge log files public
We've made our Urban Challenge race log files public, along with software for viewing them. The software requires a GNU/Linux system (we used Fedora Core 6 and Ubuntu 7.04). Running the viewing application with the log files basically shows our car's point of view for the entire race. If you followed the race and are curious about what happened to MIT at specific points (e.g. the collision with Cornell, or trouble on the dirt road), then this could be a fun thing to look at.


January 31, 2006

¶ ladypack paraphernalia
DARPA dude came to visit last Friday. Seth wanted me to demo my stuff, but I had made advance plans to go on the GSC ski trip to Sunday River. Instead, I made a short video for Seth to show. That, combined with a recent webpage I set up, gives a pretty good synopsis of my current research.

http://people.csail.mit.edu/albert/ladypack/media/20060126-ladypack-darpa.avi (18 MB XViD encoded)

Some screenshots:

January 17, 2006

¶ native english speaker
It occurred to me, while reviewing a paper submission for an academic journal, that I'm quite lucky to be a native English speaker in a world where almost all major scientific publications have standardized on English. It saddens me that the authors of the submission, who are clearly not native English speakers, have to struggle so much with basic writing skills, because despite the interesting things they have to say in their paper, almost none of it is properly conveyed. What should be a fun, easy-to-read paper has become an unnecessarily voluminous document whose flow is hampered and constantly interrupted by glaring grammatical and stylistic errors that jump out of the pages and divert my attention away from the actual content. Against my own will, I find myself irritated at these distractions and annoyed that such material could be so aggravating to read. I want to scream, "Go take some English classes!" but then I realize how awful it is of me to think that. Then I get annoyed that the Natural Language Processing people down on the third and fourth floors haven't gotten their automatic text translation act together yet ;)
October 21, 2005

¶ ladypack
I've slowly been transitioning from ubiquitous computing to computer vision. A few months ago, I joined Seth Teller's research group and started working with a ladybug2 omnidirectional camera. Omnidirectional here just means the camera is composed of six individual cameras pointed in different directions so that they collectively subtend almost the entire sphere. The idea is to use it for localization and navigation - the task of a computer figuring out where it is based on sensory information and how to get from one point to another.

Collecting and processing data with the ladybug2 is difficult, mostly because it delivers so much of it. Each frame consists of six 1024x768 images, and analyzing 15 frames per second means going through about 70 MB/s. Compressing the data eases the transfer, but it then has to be decompressed before analysis.
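The ~70 MB/s figure can be sanity-checked with a quick back-of-envelope calculation. (This sketch assumes the camera delivers one byte per pixel of raw sensor data; that per-pixel size is my assumption, not something stated in the post.)

```python
# Back-of-envelope check of the Ladybug2 data rate quoted above.
# Assumption: 1 byte per pixel (raw single-channel sensor data).
cameras = 6
width, height = 1024, 768
fps = 15

bytes_per_frame = cameras * width * height   # ~4.7 MB per frame
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")
print(f"{bytes_per_second / 1e6:.1f} MB/s")  # ~70.8 MB/s
```

Under that assumption the numbers line up with the 70 MB/s quoted above; full-color RGB frames would be roughly three times larger.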

To gather data, I got some ideas from Charlie Kemp and built a wearable computer system. The ladybug2 is chest-mounted to give it a reasonable approximation of a human perspective. Processing and data storage are handled by a laptop mounted on a modified backpack. Power and data cables connect the camera to a hardware compression unit on the backpack, which transmits data to the laptop over a Firewire800 connection. Eventually we'll be adding other sensors like a microphone, laser range finder, and inertial sensors. I've been keeping a little website for my progress and thoughts here.

Here's a closeup of the ladybug2 camera unit.

The power and fibre optic connection to the hardware compression unit.

The laptop mounted on the modified backpack.

I don't think we'll be able to do much realtime processing just yet since the datasets are so large. For now, the ladypack (as I call it) is just going to be a data collection unit.
October 17, 2005

¶ Kimono: Kiosk-Mobile Phone Knowledge Sharing System
We did a lot of work with Nokia in the spring researching/developing information distribution systems in the context of ubiquitous computing. September rolled around and we submitted a paper to Mobile and Ubiquitous Multimedia 2005. The reviewing results just came out, and we'll be presenting our work in New Zealand come December. I'm not sure yet if I'm going because funding is a little tight right now, but here's to hoping for a Nokia-sponsored trip to New Zealand!

Kimono: Kiosk-Mobile Phone Knowledge Sharing System


The functionality of an information kiosk can be extended by allowing it to interact with a smartphone, as demonstrated by the Kimono system, and the user interface can be greatly simplified by “associations” between pieces of information. A kiosk provides information that is relevant to a particular location and can use valuable context information, such as the fact that a user is physically standing in front of the kiosk, to tailor the display. Its graphically rich screen is suitable for presenting information to the user and has a natural input modality requiring the user to simply touch the screen. However, a kiosk lacks mobility and cannot stay with the user as he or she moves about the environment. Also, information provided by the kiosk must be remembered by the user. Finally, it is difficult to add information to the kiosk, and so the kiosk remains an information display device.

All this changes when a handset, such as a PDA or smartphone, can interact with the kiosk. The handset acts like a personalized proxy of the kiosk. It accompanies the user, serving as a memory device. It is also an excellent media creation device, capable of taking pictures and recording voice memos as well as short text messages. Associating newly created content with other currently selected content makes for a simpler user interface. Content and its associations can be uploaded to a kiosk, allowing others to access it.