Orthomosaic Image Resizing and Overall Processing Limitations

Can anyone tell me exactly how WebODM determines its own processing and size limitations? I’m running an i7 with 16GB of RAM, a 1TB SSD, and two graphics cards, a 760 and a 1050 with a combined 6GB of GPU RAM. Recently I’ve run two batches of about 1,500 images each with the ODM default image resizing settings (from 4000x3000 down to 2048 wide), but when I tried combining the two sets (flown by two different Mavic Minis) into a single batch of 3,000 images, it crashed. Is there a simple formula or set of guidelines I can use so I’m not guessing and screwing around with trial and error?

Also, I have a Mavic Air 2 and I’m looking at stepping up to its 8000 x 6000 = 48MP picture-size mode so I can take fewer pics from higher elevations and save time, at which point I would probably raise the resize limit toward the maximum allowable to get the best GSD result that I can. Again, I’m wondering about the overall batch size / memory / processing limitation, something like ___ pics x ____ MB/pic = ____ GB max processing limit. I would think that it all boils down to pixel count at some point, but I don’t know everything the “black box” is doing underneath.
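Purely to illustrate the shape of what I’m asking for, something like the sketch below; the per-picture figure is a number I made up as a placeholder, not anything WebODM publishes:

```python
# Illustrative back-of-envelope only: MB_PER_PIC is a placeholder guess,
# NOT a number published by WebODM/ODM.
MB_PER_PIC = 60.0  # assumed peak working memory per image, in MB (placeholder)

def estimate_peak_gb(num_pics: int, mb_per_pic: float = MB_PER_PIC) -> float:
    """___ pics x ___ MB/pic = ___ GB -- the shape of estimate I'm after."""
    return num_pics * mb_per_pic / 1024.0

for n in (250, 1500, 3000):
    print(f"{n:>5} pics -> ~{estimate_peak_gb(n):.0f} GB peak")
```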

2 Likes

Ah, the forever question, haha.

As for the dual-GPU, WebODM is not going to be able to use the 760 as it doesn’t actually support the proper CUDA features required, and as such, is likely only using your 1050.

What other processing parameters are you using? Those are critical for trying to figure out how big your image groups should be.

Do you have swap/pagefile?

I wouldn’t expect to process more than 250 images on 16GB RAM, especially since not all of that is actually free: you’re hosting an OS as well, so most likely only 4-8GB is free at any given moment. I would try setting your split around there and see if it fares better.
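If you run ODM directly (the same options are exposed in WebODM’s task settings), setting the split looks something like this; a minimal sketch with placeholder paths and values, using ODM’s documented split and split-overlap options:

```bash
# Submodel splitting: group images into chunks of ~250 with 150m of overlap.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --split 250 --split-overlap 150
```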

I’m not sure you’re going to get much by using the 48MP Mavic Air 2 images, since it’s a quad-Bayer sensor; the actual resolved resolution is going to be closer to 12MP (4000x3000). You can compare for yourself by shooting the same scene at 12MP and 48MP and scaling the 12MP image up with Lanczos or another good filter. The result will probably be perceptually almost identical, if not identical.
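If you want a quick objective check on top of the eyeball test, a rough sketch using Pillow and NumPy (file names are placeholders, and it assumes the two shots are framed identically):

```python
# Rough sketch: upscale a 12MP frame with Lanczos and compare it to a
# 48MP frame of the same scene.
import numpy as np
from PIL import Image

img12 = Image.open("scene_12mp.jpg").convert("RGB")  # 4000x3000 capture
img48 = Image.open("scene_48mp.jpg").convert("RGB")  # 8000x6000 capture

# Upscale the 12MP shot to the 48MP dimensions with a Lanczos filter.
up = img12.resize(img48.size, Image.Resampling.LANCZOS)

a = np.asarray(up, dtype=np.float64)
b = np.asarray(img48, dtype=np.float64)
rmse = np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE between upscaled 12MP and native 48MP: {rmse:.2f} (8-bit scale)")
```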

1 Like

To give you an example from a task I’ve had running for 75 hours so far:
I have 96GB RAM plus virtual memory, for a total of 300GB.

I’m processing 959 M2P 20MP images with no resizing, ultra feature quality and high point cloud quality.
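For reference, those settings correspond roughly to these ODM flags (a sketch based on the documented option names; paths are placeholders):

```bash
# No resizing (-1 disables the resize step), ultra features, high point cloud.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets project \
  --resize-to -1 --feature-quality ultra --pc-quality high
```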

Max RAM usage has been over 90GB, with maximum committed memory of 168GB, although there has been almost no I/O activity on the SSD that holds the virtual memory.

1 Like

OK, so I guess the answer to that question is really, really, really big. Wow. I guess with 96 GB of RAM you aren’t just screwing around. Thanks for the perspective. Obviously the limits are not defined by the platform.

I guess the next question is about RAM vs. SSD vs. GPU vs. WebODM performance in general. I have two i7 machines: one is an i7-4770 @ 3.4GHz on Win 10 Pro with 32GB RAM and a 1TB SSD but a miserable NVIDIA NVS 510; the other is the 16GB machine with the 760 and 1050 GPUs described above.

The 32GB machine is running WebODM with a browser and Docker. I can’t seem to get it to process anything of any real size, and I don’t know why, but I haven’t spent much time troubleshooting it. I was hoping to set it up as a processing node, but life sometimes gets in the way.

So my next question is: is it even worthwhile to set up the 32GB i7 machine for dedicated WebODM use with such a small, stingy, antiquated video card? How much incremental performance advantage will a powerful GPU yield?

1 Like

Saijin, thanks for the clarification on the GPUs. Can you elaborate on the swap/pagefile setup? I infer that this is a way to extend processing capacity by using an SSD, yes? And if so, do you think using such a feature on an SSD is a liability that will contribute to early hardware breakdown? Also, I have no idea about the processing parameters; I do select DTM/DSM output and tend to leave the default resize at 2048px, but otherwise I don’t monkey with the settings, assuming they are above my technical pay grade at this point.

2 Likes

Not using the GPU is not a disaster; it merely takes longer on the CPU, and not by a huge amount for small datasets.
At the moment there are a few issues with GPU processing, so I sometimes have to force CPU use by going for higher settings than required.

I’m using an SSD for virtual memory, but as mentioned above, it won’t necessarily get a lot of use: the peaks in committed (but not necessarily used) memory don’t last for anything like the whole duration of the task, unless of course you are short on RAM for the number of images.

SSDs are pretty good these days as far as longevity goes, and setting aside some unused space for over-provisioning can help.

1 Like

You’ll need to adjust your .wslconfig to allow for more RAM usage. You can also define a swap ext4.vhdx for the Linux guest in Docker/WSL2 to use; mine was backed by an SSD. It is slow, orders of magnitude slower than in-memory, but if you have time you can at least struggle through.

By default, WSL2/Docker will use up to half your available physical RAM, so 16GB or less in your case. I do not recommend leaving the Host OS with anything less than 4GB RAM.
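For example, something along these lines in %UserProfile%\.wslconfig (the values here are placeholders; tune them to your machine and double-check the key names against the WSL docs):

```ini
[wsl2]
# Cap the WSL2 VM's RAM; e.g. leave ~4GB for the Windows host on a 32GB box.
memory=28GB
# Size of the swap vhdx (0 disables swap).
swap=64GB
# Put the swap file on a fast SSD (note the escaped backslashes).
swapFile=D:\\wsl\\swap.vhdx
```

WSL2 only picks up .wslconfig changes after a restart, e.g. running wsl --shutdown from a Windows prompt before relaunching Docker.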

I mean, technically yes, but like Gordon said, modern SSDs and SSD controllers are very competent. I’m not terribly concerned, honestly. I have an ancient Samsung 830 64GB with something like 15TB written to it hanging around with 99% health according to Samsung’s utility. And that wasn’t even a good controller/flash compared to what we have now.

You can always check out the docs:
https://docs.opendronemap.org

I’m working on expanding each parameter’s description so it will help guide folks as to when they might or might not want to use a particular flag.

I would say start by asking yourself what you need as your output product quality. Once you have a definition/standard, it becomes much easier to have folks guide you to what you might want to tweak.

If you’re happy with what you’re getting right now, I would focus first on getting your setup ironed out so you can push the limits better later on.

1 Like