This project fuses monocular vision and IMU measurements to robustly track the position of an AR Drone using the LSD-SLAM (Large-Scale Direct Monocular SLAM) algorithm. The system consists of a low-cost commercial drone and a remote host computer that carries the computational load of the SLAM algorithms, organized as a distributed node system based on ROS (Robot Operating System). By the end of this project, the system is expected to reconstruct the 3D environment around the AR Drone and localize the drone within it. In addition, using visual cues, the drone will be able to hold a given position despite random disturbances, and to navigate to a goal position or follow a given path autonomously and safely within the map built by LSD-SLAM. The map is displayed on the host computer, and the drone follows a path generated from the octomap while avoiding obstacles.
Existing software employed by this project includes the following:
- ardrone_autonomy: ROS driver for Parrot AR-Drone 1.0 & 2.0 quadrotor. http://wiki.ros.org/ardrone_autonomy
- LSD_SLAM: a novel approach to real-time monocular SLAM. It is fully direct (i.e., it does not use keypoints/features) and creates large-scale, semi-dense maps in real time on a laptop. https://vision.in.tum.de/research/vslam/lsdslam
- tum_ardrone: consists of three components: a monocular SLAM system, an extended Kalman filter for data fusion and state estimation, and a PID controller that generates steering commands. http://wiki.ros.org/tum_ardrone
- image_proc: rectifies the raw images captured by the AR Drone's front camera. In our experiments, this step significantly reduces the noise in the point cloud map generated by LSD-SLAM. http://wiki.ros.org/image_proc
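To illustrate what rectification undoes, the sketch below applies the plumb-bob lens distortion model that ROS camera calibration assumes; image_proc inverts this mapping using the parameters published on the camera_info topic. The coefficient values here are made up for illustration, and the function is a stand-alone sketch, not image_proc's actual implementation.

```python
def distort(x, y, k1, k2, p1, p2, k3=0.0):
    """Apply the plumb-bob distortion model to normalized image
    coordinates (x, y). Rectification (as done by image_proc)
    inverts this mapping so straight lines stay straight."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# The principal axis (0, 0) is unaffected by distortion:
print(distort(0.0, 0.0, k1=-0.5, k2=0.3, p1=0.001, p2=0.001))  # (0.0, 0.0)
```

Off-axis points are pulled inward by negative radial coefficients, which is why an unrectified wide-angle image bends straight edges and corrupts the direct photometric alignment LSD-SLAM relies on.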
New software that we designed and coded for this project includes the following:
- cvg_sim_gazebo: Gazebo simulation of the Hydro Lab environment.
- ardrone_joystick: uses a Logitech joystick to publish cmd_vel messages that control the motion of the AR Drone.
- point_cloud_io: publishes the point cloud topic generated by LSD-SLAM.
- ardrone_moveit: subscribes to the point cloud topic, converts the point cloud data into an octomap for visualization and path planning, then publishes the planned trajectory with fake joint states.
- ardrone_planning: subscribes to the trajectory with fake joint states from ardrone_moveit, converts it to drone pose states, then publishes cmd_vel commands to the AR Drone.
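The common thread through ardrone_joystick and ardrone_planning is producing a geometry_msgs/Twist on cmd_vel. A minimal sketch of the joystick-to-Twist mapping is below; it uses plain dataclasses as stand-ins for the ROS message types so it runs without a ROS installation, and the axis ordering and scale factors are assumptions, not the package's actual configuration.

```python
from dataclasses import dataclass, field

# Minimal stand-ins for geometry_msgs/Vector3 and Twist (no ROS needed).
@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class Twist:
    linear: Vector3 = field(default_factory=Vector3)
    angular: Vector3 = field(default_factory=Vector3)

def axes_to_cmd_vel(axes, scale_lin=0.5, scale_ang=1.0):
    """Map joystick axes [roll, pitch, yaw, throttle], each in [-1, 1],
    to a cmd_vel Twist. Axis order and scales are illustrative."""
    cmd = Twist()
    cmd.linear.y = scale_lin * axes[0]   # roll  -> lateral velocity
    cmd.linear.x = scale_lin * axes[1]   # pitch -> forward velocity
    cmd.angular.z = scale_ang * axes[2]  # yaw rate
    cmd.linear.z = scale_lin * axes[3]   # throttle -> climb rate
    return cmd

cmd = axes_to_cmd_vel([0.0, 1.0, 0.0, 0.0])
print(cmd.linear.x)  # 0.5
```

In the real node, this Twist would be published on the cmd_vel topic that ardrone_autonomy translates into motor commands.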
Hardware and Infrastructure
Existing hardware employed by this project includes the following:
- Laptops with ROS Kinetic and Ubuntu 16.04 installed
- AR Drone: a low-cost quadrotor developed by Parrot
- Logitech Joystick: a low-cost multi-purpose joystick for teleoperated control of the drone
Figure 2. Parrot AR Drone. Figure 3. Logitech Joystick.
Sample Data Products
Figure 5 shows the point cloud data collected by the AR Drone using the LSD-SLAM algorithm in the Hydro Lab. The data successfully reconstructs the environment, including a chair and a few boxes as obstacles. Figure 6 shows the octomap converted from the point cloud in Rviz, which MoveIt! uses to plan paths and visualize the trajectory.
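The core step in converting a point cloud into an octomap is quantizing points into occupied voxels. The sketch below shows that quantization in plain NumPy as an illustration; the real octomap library additionally ray-casts free space and stores occupancy probabilities in an octree, and the resolution and sample points here are made up.

```python
import numpy as np

def cloud_to_voxels(points, resolution=0.05):
    """Quantize a point cloud (N x 3, metres) into occupied voxel
    indices -- the basic discretization behind an octomap-style
    occupancy representation."""
    idx = np.floor(np.asarray(points) / resolution).astype(int)
    return {tuple(v) for v in idx}  # set of occupied voxel keys

pts = [[0.01, 0.02, 0.00], [0.04, 0.01, 0.03], [0.26, 0.00, 0.00]]
print(sorted(cloud_to_voxels(pts, 0.05)))  # [(0, 0, 0), (5, 0, 0)]
```

Because nearby points collapse into the same voxel key, the voxel set is both a compact obstacle map for the planner and a cheap way to downsample the dense LSD-SLAM cloud.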
Results and Demo Video
A simple environment and the resulting map are shown above. The AR Drone is able to map the stairs and obstacles as well as localize itself. For a more detailed explanation, refer to the video in the next section.
- Demo Video
- For 3D path navigation, MoveIt! works better than move_base. MoveIt! typically relies on pre-defined action files and an action controller configuration (.yaml file) to translate the multi-DOF trajectories it produces.
- MoveIt! does not have good support for mobile robots. Therefore, we treat the quadrotor as a multi-DOF joint and use fake joint_states when connecting MoveIt! to the AR Drone.
- A server on the quadrotor needs to service the move_group client in order to receive the control commands output by the move_group node.
- Add a filter to reduce the noise of point cloud data generated by LSD-SLAM in real time.
- Update the octomap periodically in Rviz.
- Use both PTAM and LSD-SLAM to improve the precision of pose estimation.
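One candidate for the point cloud noise filter mentioned above is statistical outlier removal, the scheme PCL implements as StatisticalOutlierRemoval: drop points whose mean distance to their k nearest neighbours is far above average. The NumPy sketch below is an illustration with made-up parameters, not part of the existing pipeline.

```python
import numpy as np

def remove_outliers(points, k=3, std_mult=1.0):
    """Statistical outlier removal: discard points whose mean distance
    to their k nearest neighbours exceeds the cloud-wide mean of that
    statistic by more than std_mult standard deviations. O(N^2) brute
    force; a k-d tree would be used for real-time filtering."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip column 0 (self-distance)
    thresh = mean_knn.mean() + std_mult * mean_knn.std()
    return pts[mean_knn <= thresh]

cloud = [[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0], [5, 5, 5]]
print(len(remove_outliers(cloud)))  # the distant point is dropped -> 4
```

Run per incoming cloud, a filter like this would suppress the isolated depth errors that LSD-SLAM's semi-dense maps tend to contain before the octomap update.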
Where is the code?
As always, check out my GitHub: