As discussed in the last post, rtabmap_ros was run on a recorded video. The run produced a .bag file that publishes the ROS topics below when played back.
As seen in the picture, it also publishes camera info for both the left and right cameras. The published images are compressed, and the image_view package can be used to view the video graphically. Using mjpegtools, some frames of the video were exported from the bag file to a local folder, and two of those frames were chosen for fundamental matrix estimation.
OpenCV's findFundamentalMat function is used to compute the fundamental matrix. SIFT features are matched between the two frames, and RANSAC is used to reject outlier matches. As an aside, the essential and fundamental matrix concepts are explained clearly in the Introduction to Computer Vision course offered by Georgia Tech on Udacity, and the OpenCV documentation also covers them well. Below are the epipolar lines drawn using the computed fundamental matrix.
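To make the estimation step concrete, here is a minimal sketch of what findFundamentalMat computes under the hood, using the normalized 8-point algorithm in plain NumPy. The camera matrices, point cloud, and baseline below are made-up synthetic values standing in for the SIFT correspondences from the real frames; OpenCV's version additionally wraps this in a RANSAC loop.

```python
import numpy as np

def normalize(pts):
    # Hartley normalization: move centroid to origin,
    # scale so the mean distance from the origin is sqrt(2)
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.c_[pts, np.ones(len(pts))]
    return (T @ ph.T).T, T

def eight_point(pts1, pts2):
    # each correspondence gives one linear equation in the
    # entries of F, from the constraint x2^T F x1 = 0
    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)            # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                      # undo normalization
    return F / np.linalg.norm(F)

# synthetic stereo pair: random 3D points seen by two cameras
# separated by a pure translation along x (assumed geometry)
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], (12, 3))
K = np.array([[500., 0, 320], [0, 500, 240], [0, 0, 1]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]
P2 = K @ np.c_[np.eye(3), np.array([-0.5, 0, 0])]

def project(P, X):
    x = (P @ np.c_[X, np.ones(len(X))].T).T
    return x[:, :2] / x[:, 2:]

pts1, pts2 = project(P1, X), project(P2, X)
F = eight_point(pts1, pts2)
# exact correspondences, so x2^T F x1 should be ~0 for every pair
max_err = max(abs(np.r_[pts2[i], 1] @ F @ np.r_[pts1[i], 1])
              for i in range(len(X)))
```

With real SIFT matches the correspondences are noisy and contain outliers, which is why the post pairs this estimation with RANSAC rather than fitting all matches at once.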
The point where the epipolar lines meet in an image is the epipole: the projection of the other camera's center, i.e. the position from which the other frame was taken. This is one way of locating the current camera in a previous frame, but it gives only the position, not the full pose. Whereas, if we can transform the current pose from visual odometry into the previous frame, we get the camera's orientation along with its position in that frame. The next step is to implement one of these methods on a running recorded video.
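The relationship between the epipole and the other camera's position can be checked numerically: the epipole in image 1 is the right null vector of F, and it coincides with the projection of camera 2's center. A small sketch, where the intrinsics K, rotation R, and translation t are made-up example values rather than the calibration from the recorded bag:

```python
import numpy as np

def skew(v):
    # cross-product matrix: skew(v) @ x == np.cross(v, x)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# assumed two-view geometry: shared intrinsics, no rotation,
# a small translation between the views (illustrative values)
K = np.array([[500., 0, 320], [0, 500, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.5, 0.0, 0.1])

# fundamental matrix from known geometry: F = K^-T [t]x R K^-1
Kinv = np.linalg.inv(K)
F = Kinv.T @ skew(t) @ R @ Kinv

# the epipole in image 1 is the right null vector of F
_, _, Vt = np.linalg.svd(F)
e1 = Vt[-1] / Vt[-1, 2]

# it equals the projection of camera 2's center, C2 = -R^T t;
# the homogeneous scale (including sign) drops out on dehomogenizing
e1_expected = K @ (-R.T @ t)
e1_expected = e1_expected / e1_expected[2]
```

The left null vector of F gives the epipole in image 2 the same way, which is why all epipolar lines in each image pass through a single point.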