Disparity Explained by Dax

Disparity is a computer-vision technique used to give robots depth perception. Humans see a slightly different picture out of each eye, and our brains combine the two to produce 3-D vision. We can only see in 3-D in the area where the two eyes' views overlap, roughly a 140-degree field of view, so the brain uses a few tricks to build a full 3-D picture of the world.

One strategy is convergence, in which the brain judges the distance of an object by how crossed, or “converged,” your eyes are. The other is parallax, in which the brain compares the views from the left and right eyes and uses the difference between them to estimate distance.
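A stereo camera leans on that same parallax idea: once you know how far a point shifts between the two views (its disparity), the spacing between the cameras and their focal length tell you how far away the point is. Here's a rough sketch of that relationship in Python; the focal length and camera spacing below are made-up example numbers, not Dax's actual specs.

```python
# Rough depth-from-disparity sketch (illustrative numbers, not Dax's real camera specs).
# A point that shifts a lot between the left and right image (large disparity) is close;
# a point that barely shifts (small disparity) is far away.

def depth_from_disparity(disparity_px, focal_length_px=700.0, baseline_m=0.12):
    """Return distance in meters: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        return float("inf")  # no measurable shift means the point is effectively at infinity
    return focal_length_px * baseline_m / disparity_px

# Example: a point that shifts 40 pixels between the two views
print(depth_from_disparity(40))  # -> 2.1 (meters)
```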

Isn’t it amazing to think about how our brains are constantly making background calculations to help us move around smoothly in our world? You can’t really appreciate how much effort it takes just to walk around and not bump into things until you teach a robot to do it.

An image of our lab. From left to right, this image is split into how we see things, how our simulation environment (the Pickle) approximates it, and finally how disparity breaks the image down.

Disparity in a robot works much the same way it does in our brains. Calculating it involves taking two images, one from each camera, comparing their pixels, and computing how far each point has shifted from one image to the other. Points near the front of the scene shift the most and have greater disparity, while points in the back barely shift at all and have less disparity.
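If you're curious what that comparison looks like in practice, here's a tiny sketch using OpenCV's off-the-shelf block matcher. It isn't Dax's actual vision pipeline, and the file names are just placeholders, but it shows the basic move: slide a small patch of the left image along the right image and record how far it had to shift before the pixels line up.

```python
# Generic disparity sketch using OpenCV's stock block matcher.
# Not Dax's actual pipeline; "left.png" / "right.png" are placeholder file names.
import cv2

# Load the two camera views as grayscale images
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: slide a small window from the left image along the same row
# of the right image and record how far it had to shift before the pixels line up.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # OpenCV returns 1/16-pixel units

# Large disparity values mean near objects; small values mean far objects.
```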

Each pixel is then color coded based on where the computer thinks it sits relative to the cameras, from hot colors for nearby points to cool colors for distant ones. By running these calculations constantly and very quickly, Dax can approximate 3-D vision and move around with the same kind of awareness of his environment that we have.
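Here's one common way to do that kind of hot-to-cool coloring, continuing from the disparity map in the sketch above. It's just an illustration of the idea; Dax's own visualization may differ.

```python
# Turn a disparity map into a heat map: hot colors for near points, cool colors for far ones.
import cv2
import numpy as np

def colorize_disparity(disparity):
    # Stretch the disparity values to the 0-255 range so they can be mapped onto a colormap
    norm = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # COLORMAP_JET runs from blue (cool = far) up to red (hot = near)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)

heat_map = colorize_disparity(disparity)  # "disparity" from the previous sketch
cv2.imwrite("disparity_heatmap.png", heat_map)
```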

This is part of our collision avoidance system: it's how we teach Dax where things are so he doesn't bump into them when he's zooming around town.
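As a toy example of how a disparity map could feed into collision avoidance, the sketch below flags the view as "too close" whenever a big enough patch of pixels shows more disparity than a chosen safety threshold. The real system is far more sophisticated; the thresholds and the fake scene here are made up purely for illustration.

```python
# Toy illustration of using a disparity map for collision avoidance.
# Deliberately simplified; the thresholds are made up, and this is not Daxbot's navigation code.
import numpy as np

SAFE_DISPARITY = 48.0   # pixel shift beyond which we treat a point as "too close"
MIN_NEAR_PIXELS = 500   # ignore tiny specks of noise

def too_close(disparity_px: np.ndarray) -> bool:
    """Return True if a big enough patch of the view is nearer than the safe distance."""
    near_pixels = np.count_nonzero(disparity_px > SAFE_DISPARITY)
    return near_pixels > MIN_NEAR_PIXELS

# Fake scene: mostly a far wall (small disparity) with one nearby obstacle (large disparity)
scene = np.full((480, 640), 10.0)
scene[200:300, 300:400] = 60.0
print(too_close(scene))  # -> True: time to slow down or steer around it
```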

Want to learn more about disparity computation? Check it out here: