Processing chokes at "Geometric-consistent estimated depth-maps"

I recently used my drone to perform a cell phone tower inspection and wanted to create a 3D model of the results. Note that I only selected the 3D model option, and CPU/memory both seem unstressed.

When processing, things seem to choke at “Geometric-consistent estimated depth-maps”. I have left it running for over a day, and tried on 4 CPUs, 8 CPUs, and now I am running with max-concurrency: 1, which I read might help.
With max-concurrency: 1 it did eventually move on, but only after about 12 hours of sitting at the same percentage. Now it seems to be processing things again, but it still has not finished.

If anyone has suggestions on how to improve this I would love to know

More details:
I tried with the original data set of 1,000 images at full resolution - no dice!
I tried again with the full data set but with images resized to 2048 px. That completed, but the results were horrible - very poor quality.
I tried again with full-sized images but reduced to just 1/3rd of the original set, since there was considerable overlap. That did not work and stalled for more than 24 hours at the same “Geometric-consistent estimated depth-maps” point. I tried that on two different servers with different numbers of CPUs - same result.

I’m also trying to figure out how to use the GPU in Docker (I know I need to install the CUDA driver and then change the run flags, but it’s been a long time since I was anything I would call a developer, so I’m rusty).

Any assistance would be appreciated!


Try increasing the quality parameters, for example: feature-quality: ultra, mesh-size: 3999999, pc-geometric: true, pc-quality: ultra, pc-rectify: true.
Make sure you have enough RAM. Would you share your raw pictures so we can see their quality? Maybe 200 pictures could do the job too.
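In case it helps, those options map directly onto command-line flags if you run ODM through Docker rather than the WebODM UI. A rough sketch - the dataset path and project name below are placeholders, and the folder must contain a `<project>/images/` directory:

```shell
# Mount a datasets folder and pass the suggested quality options as CLI flags.
# /home/me/datasets and tower-inspection are placeholders; adjust for your setup.
docker run -ti --rm \
  -v /home/me/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets tower-inspection \
  --feature-quality ultra \
  --mesh-size 3999999 \
  --pc-geometric \
  --pc-quality ultra \
  --pc-rectify
```

Note that the boolean options (pc-geometric, pc-rectify) become bare flags on the command line.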


Thanks for the response. Since the flight was for a client, it’s a little difficult to share the pictures, especially since they are all geo-tagged, which would identify the site. But you can see an example here.

I can well understand why this is difficult - it is a very complicated structure, especially with all the fake tree bits sticking out and the mix of bright areas and shadow.

I’m new here, so it’s OK to tell me if I’m asking dumb questions, but wouldn’t increasing the quality give it more work to do and make it even slower? Or are you saying that I could use the lower-res photos with the settings you provided to get an acceptable result?

Thanks for taking the time to respond!



I’m curious if you wouldn’t benefit from loading a --boundary GeoJSON to limit the reconstruction area:
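While you read up on it: a --boundary file is just a GeoJSON polygon in WGS84 (longitude/latitude) coordinates. A minimal hand-written sketch - the coordinates here are made up, so replace them with the corners of the area around your tower:

```json
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {},
      "geometry": {
        "type": "Polygon",
        "coordinates": [[
          [-97.7301, 30.2841],
          [-97.7295, 30.2841],
          [-97.7295, 30.2846],
          [-97.7301, 30.2846],
          [-97.7301, 30.2841]
        ]]
      }
    }
  ]
}
```

Note that the ring must close: the first and last coordinate pairs are identical.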

Also, it’d be most helpful if you could post the full set of Processing Options you’re using, so we can offer tuning suggestions.


Saijin’s auto boundary option is great. Could you tell us how you made the flight? Maybe how many levels, how many pictures per level, and the distance?

Yes, it will give your processor a lot more work to do and will take about 4× as long as high. But with the higher mesh settings it should also bring you much more detail and work better with fine structures. Maybe smathermather also has an idea? If I remember correctly, he was working on the Skydio Datasets thread here.

However, I have a different dataset with these parameters:

mesh-octree-depth: 12, mesh-size: 9999999, use-hybrid-bundle-adjustment: true, rerun-from: odm_postprocess

The picture dataset is shared via Microsoft OneDrive.


@ghost7k @Sajin_Naib

Thanks for the ideas. I don’t know how to load a boundary, but will look into it.

The options are as follows:

Options: auto-boundary: true, max-concurrency: 1, mesh-octree-depth: 12, mesh-size: 300000, pc-geometric: true, pc-quality: high, use-3dmesh: true, rerun-from: odm_postprocess

Note that it crashed (possibly out of memory, since the current machine is a small Docker instance), so I restarted it to see if it would carry on. I suspect it won’t, but I’m not doing anything else on there today, so there’s no reason not to try.

In terms of the flight: it was semi-automated, using a Phantom 4 Pro to perform numerous orbits around the tower. The client needed different camera angles at each level too, so there is also that. The camera was set to take photos every 2 seconds, which resulted in about 1,100 photos all told, but with a lot of overlap. That’s why I reduced it to about 1/3rd of that for this processing test.


Check the links to the documentation that I provided in the prior post; it should walk you through it. If it doesn’t, please let me know, either here or via private message, so that I may further improve that section of the documentation.


Thanks. I will do.

You don’t have an idiot’s guide to setting up a GPU so it can be used, do you? That might also help. I have an NVIDIA GTX 1080, which should move things along if I can get it working.


If you’re using WebODM for Windows native, you don’t need to do anything aside from having an appropriate GPU (you do), having an appropriate driver (Windows Update has almost assuredly taken care of this already, so you do), and using the SIFT feature type (if you didn’t force-change --feature-type, then you have this too).

You can verify it by looking in the Processing Log (and by posting it for us so we can take a look, as well).

If you’re using it via Docker or another way, then things get a bit more complicated. Fortunately, we have some excellent folks in the Community who have really helped suss this part out. I’ll draw up documentation based upon their findings.



Thanks for the response. I’m using Docker. Happy to switch to Windows native if that is easier - would I need to buy the Windows installer for that, or is there another way?



I do think it is easier, and carries a number of benefits vs Docker/WSL, and yes, it is a paid offering via UAV4Geo:

That being said, for the Docker install, you’ll need a few things:

  1. Windows 10 21H2 or later (In short, you need Windows 10 fully up to date, or Windows 11)
  2. WSL2 fully updated (depends upon #1 to some extent)
    2A) wsl.exe --update via the Terminal
  3. Follow these instructions:
    3A) GPU in Windows Subsystem for Linux (WSL) | NVIDIA Developer

This mentions WSL2, which is salient because Docker for Windows uses the WSL2 backend by default, so getting GPU passthrough working in WSL2 means, by extension, that it should be working in Docker (powered by WSL2).
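Once the WSL2 CUDA driver is in place, you can sanity-check GPU visibility from Docker before pointing ODM at it. A rough sketch - the CUDA image tag and the GPU-enabled ODM image tag are assumptions, so check the ODM docs for the exact names for your version, and paths/project names are placeholders:

```shell
# 1) Verify Docker can see the GPU at all by running nvidia-smi
#    inside a CUDA base image (tag is an assumption; pick a current one).
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# 2) Run ODM with the GPU exposed to the container.
#    The :gpu image tag is an assumption; confirm against the ODM docs.
docker run -ti --rm --gpus all \
  -v /home/me/datasets:/datasets \
  opendronemap/odm:gpu \
  --project-path /datasets tower-inspection
```

If step 1 doesn’t print a table listing your GTX 1080, fix the WSL2/driver side first; no amount of ODM flags will help until the container can see the card.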


I find drawing a polygon here is the easiest way to produce the required GeoJSON file.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.