RGB-D Visual Odometry on ROS

The most important aspect of the project is visual odometry (VO). It estimates the trajectory of the camera in the world coordinate system, which is useful for retrieving the current terrain patch the astronaut is standing on. For monocular visual odometry, PTAM was used first. PTAM's mapping thread is heavy, and the trajectory also wasn't accurate enough with a 30 fps forward-looking camera. Semi-Direct Visual Odometry (SVO), which has proved its accuracy on MAVs, also fails to deliver good results with a 30 fps monocular camera, especially when the camera faces forward. Given these drawbacks of monocular cameras, the Kinect v2, which provides depth information, was considered. With the additional depth channel it is expected to produce a more accurate camera pose and trajectory than a monocular camera.

There are many options for stereo-camera visual odometry, but the number drops drastically for RGB-D visual odometry. The ZED 2K Stereo Camera would have been the best option if the hardware were available: it has a depth range of up to 20 meters both indoors and outdoors and runs at 100 fps. It comes with an excellent ROS wrapper, zed-ros-wrapper, which produces the absolute 3D position and orientation of the camera. The wrapper publishes the depth information, RGB images from both cameras, point clouds, and visual odometry over ROS topics, which can be used for further processing.
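As a quick illustration of what such a wrapper gives you, here is a minimal rospy sketch that listens to the ZED's odometry. The topic name /zed/odom and the nav_msgs/Odometry message type are assumptions based on typical zed-ros-wrapper setups; check `rostopic list` for the actual names on your system.

```python
#!/usr/bin/env python
# Minimal sketch: reading the ZED wrapper's odometry over ROS.
# The topic name /zed/odom is an assumption; check `rostopic list`
# for the name your zed-ros-wrapper version actually publishes.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    rospy.loginfo("position: (%.3f, %.3f, %.3f)  orientation (quat): (%.3f, %.3f, %.3f, %.3f)",
                  p.x, p.y, p.z, q.x, q.y, q.z, q.w)

if __name__ == "__main__":
    rospy.init_node("zed_odom_listener")
    rospy.Subscriber("/zed/odom", Odometry, odom_callback, queue_size=10)
    rospy.spin()
```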

On the other hand, the Kinect doesn't produce VO as accurate as the ZED's. To rectify this, we can do sensor fusion between the Kinect v2 and an IMU. Integrating these sensors, together with the need to run different modules simultaneously, demands a platform like ROS. The following VO techniques have been tested on ROS (a minimal fusion sketch follows the list):

1. dvo_slam

2. rtabmap_ros
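Independently of which VO package is chosen, here is a minimal sketch of what a naive Kinect-VO-plus-IMU fusion node could look like: it takes translation from the VO and attitude from the IMU. The topic names /vo/odom and /imu/data are assumptions, and a real setup would rather feed both sensors into an EKF such as the robot_localization package.

```python
#!/usr/bin/env python
# Naive VO + IMU fusion sketch: position from visual odometry,
# orientation from the IMU. Topic names are assumptions; a real
# setup would use an EKF such as robot_localization instead.
import rospy
import message_filters
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu

def fuse(vo_msg, imu_msg):
    fused = Odometry()
    fused.header.stamp = vo_msg.header.stamp
    fused.header.frame_id = vo_msg.header.frame_id
    fused.pose.pose.position = vo_msg.pose.pose.position    # translation from VO
    fused.pose.pose.orientation = imu_msg.orientation       # attitude from IMU
    fused_pub.publish(fused)

if __name__ == "__main__":
    rospy.init_node("naive_vo_imu_fusion")
    fused_pub = rospy.Publisher("/fused_odom", Odometry, queue_size=10)
    vo_sub = message_filters.Subscriber("/vo/odom", Odometry)
    imu_sub = message_filters.Subscriber("/imu/data", Imu)
    # Approximate sync, since the Kinect and the IMU are not hardware-triggered.
    sync = message_filters.ApproximateTimeSynchronizer([vo_sub, imu_sub],
                                                       queue_size=20, slop=0.05)
    sync.registerCallback(fuse)
    rospy.spin()
```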

TUM doesn’t provide any support for the project but there is an external source where the installation process is clearly documented. It can be found here. DVO_SLAM depends on the older version of Sophus. Even by following the doc, I couldn’t resolve the dependency problem. Issues filed on GitHub also state that the dvo_slam works well only with TUM benchmark datasets but fails to give good results with a live streaming RGBD data. And also, the package does mapping and localization together which is not necessary in this case. Mapping apart from not being useful also consumes a lot of processing power and is not recommended to be used.

rtabmap_ros has a separate node that can be used for visual odometry alone, which is very useful for preserving processing power. The package is well documented and has good support on GitHub as well. The installation process is clearly documented on its ROS wiki page, and below is the result of running rtabmap visual odometry.

[Screenshot: rtabmap_ros visual odometry running]

Below are the ROS topics that rtabmap_ros VO publishes.

[Screenshot: ROS topics published and subscribed by the rtabmap_ros visual odometry node]
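To make use of the published odometry, a node can subscribe to it and accumulate the camera trajectory, which is exactly what the terrain-patch lookup needs. Below is a minimal sketch, assuming the odometry arrives on /rgbd_odometry/odom; the actual topic name depends on the node's namespace and remappings.

```python
#!/usr/bin/env python
# Sketch: accumulating the camera trajectory from rtabmap's visual
# odometry. The topic /rgbd_odometry/odom is an assumption; the node
# may publish plain /odom depending on namespace and remappings.
import rospy
from nav_msgs.msg import Odometry

trajectory = []  # list of (t, x, y, z) samples in the odometry frame

def odom_callback(msg):
    p = msg.pose.pose.position
    trajectory.append((msg.header.stamp.to_sec(), p.x, p.y, p.z))
    rospy.loginfo_throttle(1.0, "camera at (%.2f, %.2f, %.2f), %d poses logged"
                           % (p.x, p.y, p.z, len(trajectory)))

if __name__ == "__main__":
    rospy.init_node("trajectory_logger")
    rospy.Subscriber("/rgbd_odometry/odom", Odometry, odom_callback, queue_size=10)
    rospy.spin()
```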

As shown in the subscribed-topic list, three ROS topics must already be published: the RGB image, the depth image, and the CameraInfo. rtabmap_ros doesn't provide any drivers for the Kinect v2, but the documentation recommends using kinect2_bridge. For the CameraInfo, the Kinect v2 comes with default intrinsic and extrinsic values, but they are not accurate enough for visual odometry. So, the next step is calibrating the Kinect to estimate the intrinsic and extrinsic parameters.
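For reference, here is a minimal sketch of intrinsic calibration with OpenCV and a printed checkerboard. The board dimensions, square size, and image directory below are assumptions; for the Kinect v2, the iai_kinect2 package that ships kinect2_bridge also provides a dedicated kinect2_calibration tool that covers the depth sensor and extrinsics as well.

```python
#!/usr/bin/env python
# Sketch: estimating camera intrinsics from checkerboard images with
# OpenCV. Board dimensions, square size, and image paths are assumptions;
# iai_kinect2's kinect2_calibration tool does this (plus depth and
# extrinsic calibration) end to end for the Kinect v2.
import glob
import numpy as np
import cv2

BOARD = (9, 6)     # inner corners per row, per column (assumed board)
SQUARE = 0.025     # square edge length in meters (assumed)

# 3D corner positions of the board in its own plane (z = 0).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # assumed image directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```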

Cheers 🙂
