In this work, a method for the tight fusion of visual, depth, and inertial data is proposed for autonomous navigation in GPS-denied, poorly illuminated, and textureless environments. Visual and depth information are fused at the feature detection and descriptor extraction levels, so that each sensing modality augments the other. These multimodal features are then integrated with inertial cues using an extended Kalman filter that simultaneously estimates the robot pose, sensor bias terms, extrinsic calibration parameters, and landmark positions as part of the filter state. The proposed algorithm enables reliable navigation of a Micro Aerial Vehicle in challenging, visually degraded environments using RGB-D information from an Intel RealSense D435 depth camera and an IMU.
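To make the filtering step concrete, the sketch below shows a simplified Kalman-filter loop of the kind described above. This is a hypothetical illustration, not the paper's implementation: the state is reduced to position, velocity, and accelerometer bias (the full filter additionally carries orientation, gyroscope bias, extrinsics, and landmark positions), the IMU drives the prediction, and a robot-to-landmark offset measurement, standing in for the multimodal feature observations, provides the correction.

```python
import numpy as np

def predict(x, P, accel_meas, dt, Q):
    """Propagate a [position(3), velocity(3), accel bias(3)] state
    with a bias-corrected acceleration measurement."""
    p, v, b = x[0:3], x[3:6], x[6:9]
    a = accel_meas - b                       # remove estimated accel bias
    x_new = np.concatenate([p + v * dt + 0.5 * a * dt**2, v + a * dt, b])
    F = np.eye(9)                            # state-transition Jacobian
    F[0:3, 3:6] = np.eye(3) * dt
    F[0:3, 6:9] = -0.5 * np.eye(3) * dt**2
    F[3:6, 6:9] = -np.eye(3) * dt
    return x_new, F @ P @ F.T + Q

def update(x, P, z, landmark, R):
    """Correct with a measured robot-to-landmark offset z = landmark - p."""
    H = np.zeros((3, 9))
    H[:, 0:3] = -np.eye(3)                   # offset shrinks as p grows
    y = z - (landmark - x[0:3])              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(9) - K @ H) @ P
```

In the actual system the same predict/update structure applies, but the measurement model maps the state into the camera frame through the jointly estimated extrinsic calibration, and each tracked multimodal feature contributes its own update.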
Recent work in our lab approaches the problem of localization through obscurants via thermal-inertial fusion.