OpenDroneMap with depth images from RGB-D cameras

Hello everyone.
First, I'd like to thank all of you for this work, and to apologize if my question is stupid; this is my first approach to computer vision techniques.
I have a set of images taken with an RGB-D camera (currently from a dataset, but the plan is to use a real camera), split into depth and RGB images.
Is there a way to use that depth information in the ODM pipeline to create a more accurate result?

Here's an example of an RGB image [1] and a depth one [2]:
[1]: http://anonfile.com/8fq3o3fabb/1305031790.645155.png
[2]: http://anonfile.com/96qfo6ffb4/1305031790.640468.png
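For context, depth frames like [2] are usually stored as 16-bit grayscale PNGs whose raw pixel values encode distance at a fixed scale, with zero meaning "no reading". A minimal sketch of decoding one into meters (the 5000-units-per-meter scale factor is an assumption common to some RGB-D datasets, not something confirmed in this thread; check your dataset's documentation):

```python
import numpy as np

DEPTH_SCALE = 5000.0  # assumed: raw units per meter (dataset-specific)

def depth_raw_to_meters(raw):
    """Convert a raw 16-bit depth frame to meters.

    `raw` is the depth PNG's pixel array as a NumPy array; zero pixels
    mean 'no depth measured' and are mapped to NaN so they are easy
    to mask out in later processing.
    """
    depth = raw.astype(np.float64) / DEPTH_SCALE
    depth[raw == 0] = np.nan
    return depth
```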

Thanks in advance and have a good day.

  • mroma

Hi @mromanelli9 :hand: currently there’s no way to use those depth images to create more accurate results.

Thanks for your answer, pierotofy. :+1:

These cameras are (part of) the future of data collection, so this will be interesting to integrate one day, but as Piero indicates, we have no ability to do so now.

The place to integrate this would be the depthmap calculations. It's possible the sensor depth could replace the individually computed depthmaps.
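In concept, substituting sensor depth for a computed depthmap means back-projecting each depth pixel through the camera intrinsics to get camera-frame 3D points. A rough sketch of that step (the intrinsic values here are hypothetical placeholders, not any particular camera's calibration):

```python
import numpy as np

# Hypothetical pinhole intrinsics; real values come from calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point in pixels (assumed)

def depth_to_points(depth):
    """Back-project a depth map (in meters) into camera-frame 3D points.

    Returns an (H, W, 3) array of [X, Y, Z] coordinates;
    NaN depths propagate to NaN points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return np.stack([x, y, depth], axis=-1)
```

The resulting points would still need to be transformed into a common world frame using each camera's pose before fusing them into a single model.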

Out of curiosity, what camera are you using?

The idea was to use the Intel Aero RTF drone (https://click.intel.com/intel-aero-ready-to-fly-drone.html), which comes with an Intel RealSense R200 camera (https://software.intel.com/en-us/articles/realsense-r200-camera).

Very interesting. The question now would be whether the accuracy of the R200 exceeds what can be derived from multi-view stereo. You might run into baseline issues: the R200 probably works great close up, but the signal-to-noise ratio likely drops as you get further from the target. It could still be interesting to look into, though this is broadly supposition on my part.