In this work, a method for tight fusion of visual, depth, and inertial data for autonomous navigation in GPS-denied, poorly illuminated, and textureless environments is proposed. Visual and depth information are fused at the feature detection and descriptor extraction levels to augment one sensing modality with the other. These multimodal features are then further integrated with inertial sensor cues using an extended Kalman filter that simultaneously estimates the robot pose, sensor bias terms, extrinsic calibration parameters, and landmark positions as part of the filter state. The proposed algorithm is shown to enable reliable navigation of a Micro Aerial Vehicle in challenging, visually degraded environments using RGB-D information from a Realsense D435 Depth Camera and an IMU.
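To make the filter state described above concrete, here is a minimal sketch of how such a state vector could be organized, assuming a world-frame pose with quaternion orientation, IMU gyroscope and accelerometer biases, a 6-DoF camera-IMU extrinsic parametrization, and a fixed number of 3D landmarks. All names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

NUM_LANDMARKS = 10  # illustrative only


def make_state(num_landmarks=NUM_LANDMARKS):
    """Build an example EKF state and covariance for visual-depth-inertial fusion."""
    state = {
        "p_WB": np.zeros(3),               # robot position in the world frame
        "q_WB": np.array([1.0, 0.0, 0.0, 0.0]),  # robot orientation (unit quaternion, w-first)
        "v_WB": np.zeros(3),               # linear velocity
        "b_g": np.zeros(3),                # gyroscope bias
        "b_a": np.zeros(3),                # accelerometer bias
        "T_BC": np.zeros(6),               # camera-IMU extrinsics (minimal 6-DoF parametrization)
        "landmarks": np.zeros((num_landmarks, 3)),  # 3D landmark positions
    }
    # Error-state dimension: 3 (pos) + 3 (rot) + 3 (vel) + 3 + 3 (biases)
    # + 6 (extrinsics) + 3 per landmark.
    dim = 21 + 3 * num_landmarks
    P = np.eye(dim) * 1e-2                 # initial covariance (illustrative value)
    return state, P


if __name__ == "__main__":
    state, P = make_state()
    print("error-state dimension:", P.shape[0])
```

In a full pipeline, the IMU measurements would drive the prediction step while the multimodal (visual-depth) feature observations would drive the update, with the bias, extrinsic, and landmark blocks refined jointly with the pose.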