This is a video demo of the deep learning eye-tracking app I created as part of my M.Eng. thesis. Our CVPR paper and project site can be found here. The code for the iOS side of this project can be found here.

In the demo, I control a dot on the screen using just my eyes. I move the dot to seven different points: the upper left, upper middle, upper right, bottom right, bottom middle, and bottom left of the screen, plus the center. I also announce where I'm about to look before looking there, so you have a chance to see the dot move to that location. There's a slight blip in the video where I accidentally looked at the bottom middle point before I said I did, so you'll see the dot jump there briefly.

The face and eye images that you see on the screen are the inputs to the neural network. The app currently processes these images and runs the neural network on the iPhone's GPU at 10-15 frames per second.
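To make the input/output structure concrete: the network consumes the face and eye crops and regresses an on-screen gaze point. Below is a minimal sketch of that shape flow in NumPy. The feature sizes are made up, the weights are random, and the single linear layer stands in for the real convolutional model, so this illustrates only the data flow, not the actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the per-crop features the real CNN would
# extract from the face and two eye images (sizes are arbitrary).
face_feat = rng.random(64)
left_eye_feat = rng.random(32)
right_eye_feat = rng.random(32)

# Concatenate the per-crop features and apply one linear regression
# layer that maps them to an (x, y) gaze point on the screen.
features = np.concatenate([face_feat, left_eye_feat, right_eye_feat])
W = rng.random((2, features.size))  # random weights, illustration only
b = rng.random(2)
gaze_xy = W @ features + b

print(gaze_xy.shape)  # a 2-vector: predicted (x, y) screen location
```

In the real app this forward pass runs on the GPU for every camera frame, which is where the 10-15 fps figure comes from.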

This demo is just the starting point; the rest of my M.Eng. project will be spent improving the deep learning model, making it more robust to poor lighting, unusual head poses, and similar conditions.

For best results, view the video in full screen mode.