How Do Neural Networks See Depth?

One of the persistent problems with Deep Neural Networks (DNNs) is that humans typically do not understand how they achieve their impressive results. DNN models are trained on massive amounts of data, sometimes until they surpass human accuracy, but the details of the models themselves are hidden inside thousands of learned parameters, making them effectively unreadable to humans.
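To make that concrete, here is a minimal sketch (the layer sizes are hypothetical, chosen only for illustration) showing that even a tiny fully connected network contains tens of thousands of learned numbers, none of which individually means anything to a human reader:

```python
import numpy as np

# A toy fully connected network: 784 -> 64 -> 10 (hypothetical sizes,
# e.g. a small MNIST-style classifier).
rng = np.random.default_rng(0)
layers = [(784, 64), (64, 10)]
params = [(rng.standard_normal((n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in layers]

# Total learned numbers a human would have to interpret:
total = sum(w.size + b.size for w, b in params)
print(total)  # 50890 -- and real models have millions or billions
```

Every one of those values is adjusted during training, so the model's "reasoning" is spread across all of them at once rather than written down anywhere a person can inspect.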

If there is implicit bias - or other flaws - in the training data, those problems will be encoded in the model, with humans none the wiser. That's why I found the paper "How do neural networks see depth in single images?" worth a read. READ MORE ON: ZDNet