Position estimation of a multi-rotor UAV using a depth camera

Position - Senior Engineer – State Estimation and Control at Technology Innovation Hub for IoT and IoE (TIH-IoT), IIT-Bombay
Technical skills - Python, Robot operating system (ROS), Gazebo

In this project at TIH, IIT-Bombay, we evaluated visual inertial odometry (VIO) systems for robot pose estimation, following the paper Experimental Evaluation of VIO Systems for Arable Farming. In a VIO system, the camera captures rich scene information at a low frame rate, while the IMU measures motion at a high rate.
  VIO systems allow robot pose estimation even in GNSS-denied areas and are independent of the robot's locomotion. However, agricultural fields pose a particular challenge for VIO systems: changing lighting conditions, highly self-similar textures, and unstructured, dynamic objects. Furthermore, the irregular terrain of the field causes more aggressive camera motion than is usually seen in urban and indoor settings.
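The rate mismatch described above is the core of VIO fusion: the IMU drives a drifting high-rate prediction, and each camera frame supplies a lower-rate correction. A minimal 1-D sketch of this idea (the rates, gain, and bias below are illustrative assumptions, not values from the project):

```python
IMU_HZ = 200      # high-rate inertial updates (assumed rate)
CAM_HZ = 20       # low-rate visual updates (assumed rate)
ALPHA = 0.1       # correction gain toward the camera fix

def fuse_position(duration_s=1.0, true_vel=1.0, vel_bias=0.05):
    """Integrate a biased velocity at the IMU rate (drifting prediction),
    and pull the estimate toward an exact camera position fix whenever
    a frame arrives (correction)."""
    dt = 1.0 / IMU_HZ
    steps_per_frame = IMU_HZ // CAM_HZ
    est, true_pos = 0.0, 0.0
    for k in range(1, int(duration_s * IMU_HZ) + 1):
        true_pos += true_vel * dt
        est += (true_vel + vel_bias) * dt      # drifting inertial prediction
        if k % steps_per_frame == 0:           # camera frame available
            est += ALPHA * (true_pos - est)    # correct toward visual fix
    return est, true_pos
```

Even with a low correction gain, the periodic visual fixes keep the fused estimate from accumulating the full dead-reckoning drift of the biased inertial integration.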
  We simulated an agricultural field with crop rows in a Gazebo environment and estimated the position and orientation of an Iris quadcopter (with an integrated Intel RealSense depth camera) using the feedback-based visual inertial system (FVIS) algorithm. The following video displays our implementation.
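In a ROS setup like this, the estimated orientation typically arrives as a quaternion (e.g., in a nav_msgs/Odometry message), from which yaw is recovered to track the drone's heading along the crop rows. A small helper sketch, assuming standard ROS quaternion conventions (this is illustrative, not the project's code):

```python
import math

def quat_to_yaw(x, y, z, w):
    """Yaw (rotation about the vertical axis) from a unit quaternion,
    using the standard ZYX Euler extraction as applied to a
    geometry_msgs/Quaternion (x, y, z, w ordering)."""
    return math.atan2(2.0 * (w * z + x * y),
                      1.0 - 2.0 * (y * y + z * z))
```

For example, a quaternion representing a pure 90° rotation about the vertical axis (x=0, y=0, z=sin 45°, w=cos 45°) yields a yaw of π/2.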

Figure: Localization of iris quadcopter in agricultural field
Dimple Bhuta
Principal Engineer – State Estimation and Control

My research interests include robotics, computer vision and bio-inspired design.