Saliency Detection and Model-Based Tracking: A Two-Part Vision System for Small Robot Navigation in Forested Environments


Towards the goal of fast, vision-based autonomous flight, localization, and map building to support local planning and control in unstructured outdoor environments, we present a method for incrementally building a map of salient tree trunks while simultaneously estimating the trajectory of a quadrotor flying through a forest. We make significant progress in a class of visual perception methods that produce low-dimensional, geometric information ideal for planning and navigation on aerial robots, while directing computational resources using motion saliency, which selects the objects that matter for navigation and planning. By low-dimensional geometric information, we mean coarse geometric primitives, which for the purposes of motion planning and navigation are suitable proxies for real-world objects. Additionally, we develop a method for summarizing past image measurements that avoids expensive computations over a history of images while maintaining the key non-linearities that make full map and trajectory smoothing possible. We demonstrate results with data from a small, commercially available quadrotor flying in a challenging, forested environment.
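The motion-saliency idea can be illustrated with a small sketch: fit a dominant image motion model to tracked features and flag the features whose motion deviates from it, since nearby tree trunks exhibit parallax that differs from the background. The snippet below is only a minimal sketch of this general idea using OpenCV, not the exact pipeline from the paper; the function name `salient_features` and the thresholds are illustrative assumptions.

```python
# Minimal sketch of motion-saliency detection via residuals from a dominant
# image motion model. Illustrative assumption only, not the paper's pipeline.
import cv2
import numpy as np

def salient_features(prev_gray, gray, residual_thresh=3.0):
    """Return (salient, background) point sets: tracked points whose motion
    deviates from the dominant affine motion between two grayscale frames."""
    # Track sparse corners from the previous frame into the current one.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    ok = status.ravel() == 1
    p0, p1 = p0[ok], p1[ok]

    # Fit a dominant (background) affine motion with RANSAC.
    M, _ = cv2.estimateAffine2D(p0, p1, method=cv2.RANSAC,
                                ransacReprojThreshold=residual_thresh)
    if M is None:
        return np.empty((0, 2)), p1.reshape(-1, 2)

    # Features whose motion disagrees with the dominant model (large residual)
    # are flagged as motion-salient, e.g. nearby trunks with strong parallax.
    predicted = cv2.transform(p0, M)
    residuals = np.linalg.norm((p1 - predicted).reshape(-1, 2), axis=1)
    salient = residuals > residual_thresh
    return p1.reshape(-1, 2)[salient], p1.reshape(-1, 2)[~salient]
```

In a full system along the lines described above, such salient features would seed the model-based trunk tracker, whose measurements in turn feed the map and trajectory smoother.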

BibTeX

@article{roberts2012saliency,
  title={Saliency detection and model-based tracking: a two part vision system for small robot navigation in forested environments},
  author={Roberts, Richard and Ta, Duy-Nguyen and Straub, Julian and Ok, Kyel and Dellaert, Frank},
  journal={Proceedings of SPIE},
  volume={8387},
  pages={83870S},
  year={2012},
  url={http://people.csail.mit.edu/jstraub/download/roberts2012saliency.pdf}
}
