

Deep Active Localization

Journal
IEEE Robotics and Automation Letters
Date
2019.08.03
Abstract
We present Deep Active Localization (DAL), an approach to active localization using deep reinforcement learning. DAL is trained entirely in simulation and transfers zero-shot to a real robot. Left: DAL in simulation. At the beginning, the robot has no information about where it is relative to a known map, but as it executes actions and moves around, it converges on its true location (see the posterior belief improving in the fourth row). Right: A snapshot of the JAY robot after several timesteps, operating in an office-like environment and running the DAL algorithm. The blue arrow encodes where the robot thinks it is, whereas the red arrow denotes where the robot actually is. Since the two arrows coincide, the robot's belief has converged to its true pose. Unlike traditional localization approaches that are hand-engineered for a specific on-board sensor, we decouple perception and planning from the low-level control, yielding a more practical, transferable algorithm.
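The convergence described above rests on the standard idea of maintaining a posterior belief over poses and sharpening it with each action and observation. The sketch below is a minimal, hypothetical illustration of that idea as a discrete Bayes (histogram) filter on a toy 1-D map; it is not the DAL implementation, and the map, sensor/motion probabilities, and the fixed "move right" policy are all assumptions. DAL instead learns which action to take so that the posterior sharpens as quickly as possible.

```python
import numpy as np

# Hypothetical 1-D grid world: the robot knows the map (a pattern of
# landmark/empty cells) but not its own cell index. The belief is a
# discrete posterior over cells, updated with a Bayes (histogram) filter.
MAP = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])   # 1 = landmark, 0 = empty
N = len(MAP)

P_HIT, P_MISS = 0.9, 0.1     # assumed sensor model: P(reading matches map)
P_MOVE = 0.8                 # assumed motion model: P(commanded move succeeds)

def measurement_update(belief, reading):
    """Reweight the belief by the likelihood of the sensor reading."""
    likelihood = np.where(MAP == reading, P_HIT, P_MISS)
    belief = belief * likelihood
    return belief / belief.sum()

def motion_update(belief, step):
    """Blur the belief with a simple stochastic motion model (circular map)."""
    moved = np.roll(belief, step)
    return P_MOVE * moved + (1.0 - P_MOVE) * belief

# Start from a uniform prior: the robot has no idea where it is.
belief = np.full(N, 1.0 / N)
true_pos = 2

for _ in range(6):
    # Sense the map cell at the (hidden) true position, then act.
    belief = measurement_update(belief, MAP[true_pos])
    # Fixed "move right" policy here; DAL would choose the action instead.
    true_pos = (true_pos + 1) % N
    belief = motion_update(belief, +1)

print("posterior:", np.round(belief, 3))
print("estimated cell:", belief.argmax(), "| true cell:", true_pos)
```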
Reference
IEEE Robotics and Automation Letters (Early Access)
DOI
http://dx.doi.org/10.1109/LRA.2019.2932575