The Phase-1 evaluation was completed last week, and the project has made good progress so far. The complete architecture is now understood and many approaches have been explored. Thanks to my mentors Karan Saxena and Antonio Del Mestro 🙂
The proposed solution for publishing RGBD images on the desired ROS topics has been to write a new rospy publisher that fetches RGBD images directly from the Kinect and publishes them on a ROS topic. The problem described in the last post, the sudden shutdown of the publisher, still persists.
The best resource for this implementation was the kinect2_bridge package from iai_kinect2. kinect2_bridge is implemented in C++, while in our case it is a Python script, which may be affected by interrupts that cause the sudden shutdown. The rospy.spin() and rospy.Rate() functions are also used in the script to avoid such interrupts, but the issue continues to exist. After considering and verifying many other approaches, performing visual odometry on a recorded video instead of a live camera stream looks like the better option. The calibration parameters can be a big issue, but this seems to be the better way forward as of now.
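For reference, a minimal sketch of what such a rospy publisher looks like, paced with rospy.Rate(). The topic names, the 30 Hz rate, and the grab_frames() callable are assumptions for illustration, not the actual project code; a real frame grab would come from a Kinect driver such as libfreenect2.

```python
#!/usr/bin/env python
# Minimal sketch of an RGBD publisher paced with rospy.Rate().
# Topic names, the rate, and grab_frames() are assumptions.

def depth_encoding(dtype_name):
    """Map a NumPy depth dtype name to the matching sensor_msgs encoding."""
    return '16UC1' if dtype_name == 'uint16' else '32FC1'

def publish_rgbd(grab_frames):
    # ROS imports are kept inside the function so the pure helper
    # above stays usable without a ROS install.
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    rospy.init_node('rgbd_publisher')
    rgb_pub = rospy.Publisher('/camera/rgb/image_raw', Image, queue_size=10)
    depth_pub = rospy.Publisher('/camera/depth/image_raw', Image, queue_size=10)
    bridge = CvBridge()
    rate = rospy.Rate(30)  # pace the loop at ~30 Hz instead of busy-waiting

    while not rospy.is_shutdown():
        rgb, depth = grab_frames()  # hypothetical Kinect frame grab
        rgb_pub.publish(bridge.cv2_to_imgmsg(rgb, encoding='bgr8'))
        depth_pub.publish(
            bridge.cv2_to_imgmsg(depth, encoding=depth_encoding(depth.dtype.name)))
        rate.sleep()  # sleeps just long enough to hold the target rate
```

The key detail is the rate.sleep() call inside the loop: checking rospy.is_shutdown() each iteration lets the node exit cleanly on Ctrl-C instead of dying mid-publish.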
KITTI provides a good number of datasets for both monocular and RGBD odometry. The approach is to create a bag file from the RGBD images and publish them on the desired ROS topics, which are in turn consumed by rtabmap_ros for visual odometry.
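A rough sketch of how such a bag could be assembled with the rosbag Python API. The file layout, topic names, and the times.txt timestamp file are assumptions based on how KITTI sequences are commonly organized, not verified project code.

```python
#!/usr/bin/env python
# Sketch: pack an RGBD image sequence into a rosbag for rtabmap_ros.
# Paths, topic names, and the times.txt format are assumptions.

def frame_stamps(lines):
    """Pair each frame index with its timestamp in seconds (one per line)."""
    return [(i, float(t)) for i, t in enumerate(lines)]

def write_bag(seq_dir, out_path='kitti_rgbd.bag'):
    # ROS/OpenCV imports kept local so the helper above stays importable
    # without a ROS install.
    import cv2
    import rosbag
    import rospy
    from cv_bridge import CvBridge

    bridge = CvBridge()
    with open(seq_dir + '/times.txt') as f, rosbag.Bag(out_path, 'w') as bag:
        for i, t in frame_stamps(f):
            stamp = rospy.Time.from_sec(t)
            rgb = cv2.imread('%s/rgb/%06d.png' % (seq_dir, i))
            depth = cv2.imread('%s/depth/%06d.png' % (seq_dir, i),
                               cv2.IMREAD_UNCHANGED)  # keep 16-bit depth

            rgb_msg = bridge.cv2_to_imgmsg(rgb, encoding='bgr8')
            rgb_msg.header.stamp = stamp
            bag.write('/camera/rgb/image_raw', rgb_msg, stamp)

            depth_msg = bridge.cv2_to_imgmsg(depth, encoding='16UC1')
            depth_msg.header.stamp = stamp
            bag.write('/camera/depth/image_raw', depth_msg, stamp)
```

Once written, the bag can simply be replayed with rosbag play while rtabmap_ros subscribes to the RGB and depth topics, which sidesteps the live-camera shutdown problem entirely.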
Hope this approach leads us to the solution.