In this work, a method for the tight fusion of visual, depth, and inertial data for autonomous navigation in GPS-denied, poorly illuminated, and textureless environments is proposed. Visual and depth information are fused at the levels of feature detection and descriptor extraction, so that each sensing modality augments the other. These multimodal features are then further integrated with inertial sensor cues using an extended Kalman filter that simultaneously estimates the robot pose, sensor bias terms, extrinsic calibration parameters, and landmark positions as part of the filter state. The proposed algorithm is designed to enable reliable navigation of a Micro Aerial Vehicle in challenging, visually degraded environments using RGB-D information from an Intel RealSense D435 depth camera and an IMU.
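To give a rough sense of how such a filter can be organized, the sketch below shows a minimal error-state EKF state layout and update step in Python. The state ordering, dimensions, and helper functions are illustrative assumptions made for this post, not the implementation used in the paper.

import numpy as np

# Assumed error-state layout (illustrative, not the paper's exact ordering):
# [position (3), velocity (3), orientation error (3),
#  gyro bias (3), accel bias (3), camera-IMU extrinsics (6),
#  landmark positions (3 per landmark)].
N_LANDMARKS = 10                          # assumed number of tracked landmarks
CORE_DIM = 3 + 3 + 3 + 3 + 3 + 6          # pose, velocity, biases, extrinsics
STATE_DIM = CORE_DIM + 3 * N_LANDMARKS    # full error-state dimension

x = np.zeros(STATE_DIM)                   # error state
P = 1e-2 * np.eye(STATE_DIM)              # state covariance

def propagate(P, F, Q):
    # Covariance propagation with linearized IMU dynamics F and process noise Q.
    return F @ P @ F.T + Q

def update(x, P, z, z_pred, H, R):
    # Standard EKF update with a multimodal feature measurement z,
    # its prediction z_pred, measurement Jacobian H, and noise covariance R.
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - z_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

In practice, the landmark sub-states would be added and removed as features are initialized and lost, while the pose, bias, and extrinsic calibration terms remain in the state throughout the flight.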