Keywords: LOCATE, visual localization, vision-based localization, image geolocation, visual odometry, geo-localization, geo-tagging, image-to-model registration, Terrain-Aided Navigation (TAN), Terrain-Referenced Navigation (TRN), vision-based navigation, correlating measured terrain with a digital terrain model, 3D alignment, re-photography, cross-domain registration, extrinsic calibration, 6 DoF, photograph peak tagging, automatic geo-registration, camera pose estimation, place recognition
Abstract
Project LOCATE addresses visual localization in natural environments.
The aim is to accurately determine the position and orientation of the camera that captured a given photograph.
We introduce a system for automatic alignment of a query photograph with a geo-referenced 3D terrain model.
We propose a new alignment metric to accurately estimate the camera orientation following the
large-scale visual localization step. A sufficiently accurate match between a photograph
and a 3D model opens up new possibilities for image enhancement. It can be used to turn
a photograph into a realistic virtual 3D experience, e.g., by automatically highlighting elements in the image
such as the travel path taken, the names of mountains, or other landmarks. Furthermore, the synthetic depth map,
or the whole 3D model registered to the query photograph, can be used for novel view synthesis,
image relighting, dehazing, or refocusing.