Demos

 

This page contains links to demos from four areas of my research: mobile augmented reality, tracking of users in GPS-denied environments, robot manipulation, and object pop-up.


 

Mobile Augmented Reality

A user wearing our mobile augmented reality gaming/training system can interact with virtual actors inserted into real scenes through a head-mounted display (HMD). We developed a novel algorithm that provides highly accurate and stable real-time estimation of the user's head pose (3D position and 3D orientation) over large areas, using only video cameras and an inertial measurement unit (IMU) mounted on the user. Based on this pose estimate, the system inserts synthetic actors and objects into the real scene viewed by the user.
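The camera-plus-IMU idea can be illustrated, in a highly simplified one-axis form, with a complementary filter: the gyro is integrated for smooth short-term motion, while the slower absolute vision estimate cancels the gyro's drift. This is only an illustrative sketch under toy assumptions, not the actual fusion algorithm used in the demo.

```python
def complementary_filter(angle, gyro_rate, vision_angle, dt, alpha=0.98):
    """Fuse one rotation axis: integrate the gyro for smoothness,
    then blend in the absolute vision estimate to bound the drift."""
    predicted = angle + gyro_rate * dt  # IMU integration: fast but drifts
    return alpha * predicted + (1 - alpha) * vision_angle  # vision: slow but absolute

# Toy run: a constant gyro bias of 0.05 rad/s against a steady vision
# reading of 0.0 rad. Pure integration would drift without bound; the
# filter settles at a small bounded offset instead.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.05, vision_angle=0.0, dt=0.01)
```

The blend weight `alpha` trades responsiveness (trust the gyro) against drift correction (trust the camera); real systems estimate and subtract the gyro bias rather than just bounding it.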

[click here]

 

Tracking of Users in GPS-Denied Environments

Our system can track the user's 3D location and 3D head orientation in real-time over large areas in GPS-denied environments. Tracking is achieved by fusing multiple low-cost sensors (cameras, IMUs, and ranging radios) mounted on a helmet worn by the user. The system provides highly accurate six-degree-of-freedom pose estimation over long periods, both indoors and outdoors, even in vision-impaired conditions such as smoke-filled scenes.
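One way the ranging radios contribute location information is trilateration: ranges to radios at known positions pin down the user's position. The sketch below shows the textbook 2D version with three anchors and exact ranges (the anchor positions and names are hypothetical); the demo's actual fusion of radios with cameras and IMUs is far more involved.

```python
import math

def trilaterate(anchors, dists):
    """Position from ranges to three known radio anchors (2D, exact ranges).
    Subtracting the first circle equation from the other two linearizes the
    system; the resulting 2x2 system is solved with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # nonzero when the anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Synthetic check: ranges measured from a known true position.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
dists = [math.dist(a, truth) for a in anchors]
est = trilaterate(anchors, dists)  # recovers (3.0, 4.0)
```

With noisy ranges or more than three anchors, the same linearized equations are solved in a least-squares sense instead.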

[click here]

 

Robot Manipulation

Our system can move a robot arm to successfully grasp an object, even when the part to be grasped is not visible in the input image. The only inputs to the system are a single 2D image (no stereo or range data) and a known 3D ground plane. The grasp is open-loop (the mechanical action is based solely on the initial command), using no touch sensors or feedback from the robot during the grasp.
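To see why a single image plus a known ground plane is enough to localize a target in 3D, consider the pinhole back-projection of a pixel onto the plane z = 0: the pixel defines a ray, and the plane fixes the depth along it. This is a toy geometric sketch (the intrinsics and pose here are made up), not the demo's actual grasping pipeline.

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_pos, R):
    """Back-project image pixel (u, v) onto the ground plane z = 0.
    fx, fy, cx, cy: pinhole intrinsics; cam_pos: camera center in world
    coordinates; R: 3x3 rotation (rows as tuples) mapping camera
    coordinates to world coordinates."""
    # Ray direction in the camera frame (pinhole model).
    ray_cam = ((u - cx) / fx, (v - cy) / fy, 1.0)
    # Rotate the ray into the world frame.
    ray = tuple(sum(R[i][j] * ray_cam[j] for j in range(3)) for i in range(3))
    # Intersect with z = 0: cam_pos.z + t * ray.z = 0.
    t = -cam_pos[2] / ray[2]
    return tuple(cam_pos[i] + t * ray[i] for i in range(3))

# Hypothetical setup: camera 1 m above the ground, looking straight down
# (the rotation maps the camera's z axis to world -z).
R_down = ((1.0, 0.0, 0.0), (0.0, -1.0, 0.0), (0.0, 0.0, -1.0))
point = pixel_to_ground(320, 240, 500, 500, 320, 240, (0.0, 0.0, 1.0), R_down)
# The principal point maps to the spot directly below the camera: (0, 0, 0).
```

The recovered 3D point on the plane can then serve as the target for the open-loop arm command.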

[click here]

 

Object Pop-up

We built a system that automatically reconstructs 3D objects from a single image. The resulting 3D objects can then be used to construct realistic 3D 'pop-up' models from photos.

[click here]