A mapping and localization framework for scalable appearance-based navigation

  • Siniša Šegvić*
  • Anthony Remazeilles
  • Albert Diosi
  • François Chaumette

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

55 Citations (Scopus)

Abstract

This paper presents a vision framework that enables feature-oriented appearance-based navigation in large outdoor environments containing other moving objects. The framework is based on a hybrid topological-geometrical environment representation, constructed from a learning sequence acquired during robot motion under human control. At the higher topological layer, the representation contains a graph of key-images in which incident nodes share many natural landmarks. The lower geometrical layer makes it possible to predict the projections of the mapped landmarks onto the current image, so that their tracking can be started (or resumed) on the fly. The desired navigation functionality is achieved without requiring global geometrical consistency of the underlying environment representation. The framework has been experimentally validated in demanding and cluttered outdoor environments, under different imaging conditions. The experiments have been performed on many long sequences acquired from moving cars, as well as in large-scale real-time navigation experiments relying exclusively on a single perspective vision sensor. The obtained results confirm the viability of the proposed hybrid approach and indicate interesting directions for future work.
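The abstract describes a two-layer map: a topological graph of key-images whose incident nodes share many landmarks, and a geometrical layer used to predict where mapped landmarks project into the current image so tracking can be (re)started on the fly. The following Python sketch is only an illustration of that idea, not the authors' implementation; all class names, the `min_shared` threshold, and the use of a simple homography-based point transfer are assumptions introduced here for clarity.

```python
# Minimal, illustrative sketch of a hybrid topological-geometrical map.
# Names and parameters are hypothetical; the original paper's landmark
# reconstruction and point-transfer machinery is not reproduced here.

from dataclasses import dataclass, field

import numpy as np


@dataclass
class Landmark:
    """A natural point landmark observed in one or more key-images."""
    landmark_id: int
    # 2D projection of the landmark in each key-image where it is visible
    observations: dict[int, np.ndarray] = field(default_factory=dict)


@dataclass
class KeyImage:
    """A node of the topological layer: one image of the learning sequence."""
    node_id: int
    landmark_ids: set[int] = field(default_factory=set)


class HybridMap:
    """Topological graph of key-images; edges link nodes sharing landmarks."""

    def __init__(self, min_shared: int = 30):
        self.nodes: dict[int, KeyImage] = {}
        self.landmarks: dict[int, Landmark] = {}
        self.edges: dict[int, set[int]] = {}
        self.min_shared = min_shared  # landmarks two nodes must share to be linked

    def add_key_image(self, node: KeyImage) -> None:
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, set())
        # Link the new node to every existing node with enough common landmarks.
        for other_id, other in self.nodes.items():
            if other_id == node.node_id:
                continue
            if len(node.landmark_ids & other.landmark_ids) >= self.min_shared:
                self.edges[node.node_id].add(other_id)
                self.edges[other_id].add(node.node_id)

    def predict_projections(self, node_id: int,
                            homography: np.ndarray) -> dict[int, np.ndarray]:
        """Geometrical layer (illustrative): transfer landmark points from a
        key-image into the current image with a 2D homography, so their
        tracking can be (re)initialised without a globally consistent map."""
        predicted = {}
        for lm_id in self.nodes[node_id].landmark_ids:
            p = self.landmarks[lm_id].observations[node_id]
            q = homography @ np.array([p[0], p[1], 1.0])
            predicted[lm_id] = q[:2] / q[2]
        return predicted
```

Note that, consistent with the abstract, nothing in this sketch enforces a single globally consistent coordinate frame: localization and point transfer only need to be valid locally, between neighbouring key-images in the graph.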

Original language: English
Pages (from-to): 172-187
Number of pages: 16
Journal: Computer Vision and Image Understanding
Volume: 113
Issue number: 2
DOIs
Publication status: Published - Feb 2009
Externally published: Yes

Keywords

  • Appearance-based navigation
  • Point transfer
  • Structure from motion
  • Visual tracking
