Semantic Mapping and Visual Navigation for Smart Robots
The project gathers expertise from computer vision, machine learning, automatic control and optimization, with the ambitious goal of establishing an integrated framework for combined 3D reconstruction and interpretation. The research is structured into four work packages: 1) scene modelling, 2) visual recognition, 3) visual navigation and 4) system integration, with the aim of achieving a perceptual robotic system for exploration and learning in unknown environments. As a demonstrator, we will construct an autonomous system for visual inspection of a supermarket using small-scale, low-cost quadcopters. The system goes well beyond the current state of the art and will provide a complete solution for semantic mapping and visual navigation. The basic research outcomes are relevant to a wide range of industrial applications, including self-driving cars, unmanned surface vehicles, street-view modelling and flexible inspection in general.
The project is financed by the Swedish Foundation for Strategic Research, project no. RIT15-0038.