I am excited to continue developing artificial 3D perception systems at Oculus Research. You can find more information about Oculus Research and my group, led by Richard Newcombe, in this excellent blog post by Michael Abrash. A short excerpt from the post sums it up:
“Machine perception fuses various tracking systems, simultaneous localization and mapping (SLAM), machine learning, distributed networks, databases, and AI into systems that can build and maintain a dynamic model of the world, enabling personalized, contextual AI that can start to understand the parts of the world that matter to you—exactly what AR glasses will need in order to make you smarter.”
In spring 2017, I defended my PhD thesis on Nonparametric Directional Perception. My advisers at MIT within the CS and AI Laboratory (CSAIL) were John W. Fisher III and John Leonard. On my way to MIT, I graduated from the Technische Universität München (TUM) with a Diplom and from the Georgia Institute of Technology with an M.Sc.
My research interests in AI and robotics are in 3D perception [1, 3, 4, 8, 9, 10, 11, 12, 13], modeling (directional data [1, 2, 11, 13] and Bayesian nonparametrics [5, 7, 11, 13]), and inference (sampling [1, 2, 4, 13], optimization [6, 7], low-variance asymptotics, and global search).
Nonparametric Directional Perception captures and exploits the regularities of man-made environments revealed in their surface normal distributions.
Directional segmentation of the Stata Center world regularizes 3D surfel reconstructions and makes camera tracking more efficient.
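To give a flavor of the idea, here is a minimal sketch (not the thesis implementation) of why surface normals reveal man-made regularity: normals estimated by local plane fits on a synthetic floor-and-wall scene concentrate around a few dominant directions, which is exactly the structure a directional model can capture.

```python
import numpy as np

def estimate_normal(points):
    """Estimate a surface normal as the eigenvector of the local
    covariance with the smallest eigenvalue (a PCA plane fit)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    n = eigvecs[:, 0]               # direction of least variance
    return n if n[2] >= 0 else -n   # orient toward the upper hemisphere

rng = np.random.default_rng(0)
# Synthetic "man-made" scene: points on a floor (z = 0) and a wall (x = 0),
# each perturbed by a little sensor noise.
floor = np.c_[rng.uniform(0, 1, (200, 2)), 0.01 * rng.standard_normal(200)]
wall = np.c_[0.01 * rng.standard_normal(200), rng.uniform(0, 1, (200, 2))]

n_floor = estimate_normal(floor)  # close to (0, 0, 1)
n_wall = estimate_normal(wall)    # close to (1, 0, 0) up to sign

# In a man-made scene the normals cluster tightly around a handful of
# orthogonal directions -- the regularity directional perception exploits.
print(np.round(n_floor, 2), np.round(n_wall, 2))
```

In a real system the same statistics are computed over the normals of an entire depth map or reconstruction, and a directional distribution over the sphere summarizes the scene's dominant orientations.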
My interest in robotics and perceiving systems dates back to age 15, when my father gave me my first microprocessor as a present. Since then I have built seven robots (Plexa, Plicro, Roboking2005, Ca3505, Kno0Bot, Kno2Bot, Holomove) from scratch and worked in multiple teams on robotics-related projects (KUKAyouBot, rEIzor).