I used my drone to perform a cell phone tower inspection recently and wanted to create a 3D model of the results. Note: I only selected the 3D model option, and CPU/memory all seem unstressed.
TL;DR
When processing, things seem to choke at “Geometric-consistent estimated depth-maps”. I have left it running for over a day, and tried on 4 CPUs, 8 CPUs, and now I am running with max-concurrency: 1, which I read might help.
With max-concurrency: 1 it did eventually move on, but only after about 12 hours of sitting at the same percentage. Now it seems to be processing things again, but it still has not finished.
If anyone has suggestions on how to improve this, I would love to know.
More details:
I tried with the original data set of 1,000 images at full resolution - no dice!
I tried again with the full data set but with the images resized to 2048. That worked, but the results were horrible - very poor quality.
I tried again with full-sized images but reduced to just a third of the original set, as there was considerable overlap. That did not work either; it stalled for more than 24 hours at the same “Geometric-consistent estimated depth-maps” point. I tried that on two different servers with different numbers of CPUs. Same result.
I’m trying to figure out how to use the GPU in Docker (I know I need to install the CUDA driver and then change the run flags, but…it’s been a long time since I was anything I would call a developer, so I’m rusty).
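In case it helps anyone spot where I’m going wrong, this is roughly the invocation I think I’m aiming for once the driver side is sorted. It’s only a sketch: the paths and project name are placeholders, and I haven’t actually verified any of it yet.

```bash
# Rough idea of the GPU run I'm aiming for (unverified on my side).
# Assumes the NVIDIA driver and NVIDIA Container Toolkit are installed on the host,
# and that /path/to/datasets/project contains an images/ folder.
docker run -ti --rm \
  -v /path/to/datasets:/datasets \
  --gpus all \
  opendronemap/odm:gpu \
  --project-path /datasets project
```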
Try to increase the quality parameters. For example: feature-quality: ultra, mesh-size: 3999999, pc-geometric: true, pc-quality: ultra, pc-rectify: true.
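If you are driving ODM from the command line rather than setting task options in WebODM, those settings would look roughly like this (a sketch only; the dataset path and project name are placeholders):

```bash
# The same settings expressed as ODM command-line flags (adjust paths/project name).
docker run -ti --rm \
  -v /path/to/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets project \
  --feature-quality ultra \
  --mesh-size 3999999 \
  --pc-geometric \
  --pc-quality ultra \
  --pc-rectify
```

In WebODM you can set the same options on the task's options screen instead.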
Make sure you have enough RAM. Would you share your raw pictures so we can see their quality? Maybe 200 pictures could do the job, too.
Thanks for the response. Since the flight was for a client it’s a little difficult to share the pictures, especially since they are all geo-tagged to identify the site. But you can see an example here.
I can well understand how challenging this is - it is a very complicated structure, especially with all the fake tree bits sticking out and the mix of bright parts and shadow.
I’m new here, so if I’m asking dumb questions it’s OK to tell me, but wouldn’t increasing the quality give it more work to do and make it even slower? Or are you saying that I could use the lower-res photos with the settings you provided and get an acceptable result?
Saijin’s auto boundary option is great. Could you tell us how you made the flight? Maybe how many levels, how many pictures per level, and what distance?
Yes, it will give your processor a lot of work to do and will take about 4× more time than high. But with the higher mesh size it should also give you much more detail and work better with fine structures. Maybe smathermather also has an idea? If I remember correctly, he was working on this here: Skydio Datasets
However, I have a different dataset with the parameter
Note that it crashed (possibly out of memory, because the current machine is a small Docker instance), so I restarted it to see if it would carry on. I suspect it won’t, but I’m not doing anything else on there today, so there’s no reason not to try.
In terms of the flight - it was semi-automated, using a Phantom 4 Pro to perform numerous orbits around the tower. The client needed different camera angles at each level too, so there is also that. The camera was set to take photos every 2 seconds, which resulted in about 1,100 photos all told, but with a lot of overlap. That’s why I reduced it to about a third of that for this processing test.
Check the links for the documentation that I provided in the prior post. It should walk you through it. If it doesn’t please let me know, either here or via Private Message so that I may further improve that section of the documentation.
You don’t have an idiot’s guide to setting up a GPU so it can be used, do you? That might also help, and I have an NVIDIA GTX 1080, which should help move things along if I can get it working.
If you’re using WebODM for Windows native, you don’t need to do anything aside from having an appropriate GPU (you do), having an appropriate driver (Windows Update almost assuredly made sure of this already, so you do), and using the SIFT feature type (if you didn’t force-change --feature-type, then you also have this).
You can verify it by looking in the Processing Log (and by posting it for us so we can take a look, as well).
If you’re using it via Docker or another way, then things get a bit more complicated. Fortunately, we have some excellent folks in the Community who have really helped suss this part out. I’ll draw up documentation based upon their findings.
Thanks for the response. I’m using Docker. Happy to switch to Windows native if that is easier - would I need to buy the Windows installer for that, or is there another way?
This mentions WSL2, which is salient because Docker for Windows uses the WSL2 backend by default, so getting GPU passthrough working in WSL2 means, by extension, that it should be working in Docker (powered by WSL2).
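A quick way to sanity-check that chain once the driver and Docker Desktop's WSL2 backend are in place (a sketch only; the CUDA image tag below is just an example, any recent nvidia/cuda base tag should do):

```bash
# 1. Inside your WSL2 distro: the Windows NVIDIA driver should expose the GPU here.
nvidia-smi

# 2. Then confirm Docker can reach it too (image tag is only an example).
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```

If both of those print a table showing your GTX 1080, passthrough is working, and the containers your processing runs in should be able to see the GPU as well.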