I’m a beginner and have been playing around with WebODM, which I installed myself, for a few months. It worked great with 12 megapixel images, and I’ve processed up to 591 of them without problems.
I then started playing around with 20 megapixel images and ran some tests with up to 180 of them; again they processed without problems, in about 2 hours 18 minutes.
Now I’ve tried a big one with 767 images and I’m not getting anywhere. The progress bar never gets past the gear icon (15+ hours). I’ve tried splits as low as 50 with split-overlaps of 50 (the default overlap of 150 meters seems huge), and it still seems to hang. Diagnostics shows around 300 MB of the 16 GB free. I’m sure it’s running out of memory; it’s a pity that swap can’t be set over 4 GB on MacOS, since the 1 TB flash drive is pretty much super-fast memory.
I’m now trying to set the concurrency to 1 to see what happens, but is there anything else I should be trying?
How can I see if progress is being made if the progress bar doesn’t move?
Once you start swapping out to disk, you’re looking at a small fraction of the speed of working in-memory, typically on the order of 0.5 to 2.5%.
It likely doesn’t have much progress to report since it is going to stay in each memory-heavy stage for an extended period of time.
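If you want more visibility than the progress bar gives you, you can tail the task’s console output over the WebODM REST API. Here’s a rough sketch, assuming a default install on localhost:8000; the project/task IDs come from the dashboard URL, and endpoint details may differ slightly between versions, so check the API docs for yours:

```python
# Rough sketch: poll a WebODM task's status and console output so you can see
# activity even when the progress bar sits still. Stop with Ctrl-C.
import time
import requests

BASE = "http://localhost:8000"
PROJECT_ID = 1                 # example values -- use your own
TASK_ID = "your-task-uuid"

# Authenticate and grab a JWT token
token = requests.post(f"{BASE}/api/token-auth/",
                      data={"username": "admin", "password": "admin"}).json()["token"]
headers = {"Authorization": f"JWT {token}"}

line = 0
while True:
    # Task detail includes status and running_progress (0.0 - 1.0)
    task = requests.get(f"{BASE}/api/projects/{PROJECT_ID}/tasks/{TASK_ID}/",
                        headers=headers).json()
    print(f"status={task.get('status')} progress={task.get('running_progress')}")

    # Tail any new console lines since the last poll
    out = requests.get(f"{BASE}/api/projects/{PROJECT_ID}/tasks/{TASK_ID}/output/",
                       params={"line": line}, headers=headers).json()
    for entry in out:
        print(entry)
    line += len(out)
    time.sleep(60)
```

Watching the console output is usually the quickest way to confirm that a memory-heavy stage is still doing something.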
You can try resizing the images down while using the split/overlap pipeline. What other parameters are you setting?
16GB RAM (with some amount reserved for the MacOS host) really isn’t ideal for that Task size at 20MP.
I think that is one way in which the Mac M1 Pro differs from “older” computers. I’m sure most people here already know this, but:
“The MacBook Pro’s substantially different memory hardware is the basis for its improved memory performance, but the M1 Pro MacBook Pro is also bolstered by 200GB/s unified memory and a faster, 7.4GB/s SSD, which means that the memory is much quicker and the system can swap with the SSD faster.”
Even for video editing, the difference between 16 GB and 64 GB (on the M1 Pro) has been tested to be minimal (the 64 GB system was even slower at times), since for the most part the GPU and flash drive essentially act as 1+ TB of additional memory.
I have not tried changing parameters other than the split/overlap and now the concurrency.
I don’t know why there is a 4 GB limit on swap on MacOS, and it probably has nothing to do with WebODM, but I feel this is part of the problem.
I guess what I’m trying to figure out is whether the split/overlap setting is actually doing anything. The default overlap is 150 m, which seems crazy big; in the middle of the area it would encompass the entire area. These pictures were taken 2 seconds apart at 10 mph, so maybe even a 10 m overlap would be OK?
Obviously I can play around and experiment, but this one is over 7 hours in and I have no idea whether it is doing anything.
So using those numbers, swap runs at roughly 3.7% of in-memory speed at peak (7.4 GB/s divided by 200 GB/s), up from the very conservative baseline numbers I posted above.
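For reference, the 3.7% is just the bandwidth ratio from the figures quoted above:

```python
# Back-of-envelope using the numbers from the quote above: sustained SSD swap
# bandwidth versus unified-memory bandwidth on the M1 Pro.
memory_bandwidth_gbps = 200.0   # unified memory
ssd_bandwidth_gbps = 7.4        # flash storage

peak_swap_fraction = ssd_bandwidth_gbps / memory_bandwidth_gbps
print(f"Swap tops out around {peak_swap_fraction:.1%} of in-memory speed")  # ~3.7%
# Real-world swapping is usually worse than this peak, hence the
# 0.5 - 2.5% baseline mentioned earlier in the thread.
```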
Can you get by with image downsizing? 16GB is very slim, and swapping is not kind if you’re doing it the entire time.
For instance, a Task I didn’t have enough RAM for took 5 days when swapping, but takes a few hours now with more RAM.
And your understanding is correct: we do not control the limitations Docker imposes on each OS. I have not found any good explanation for their treatment of swap on MacOS, but I believe they’re looking to change the virtualization backend eventually, which supposedly will use system swap.
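If you want to confirm what the containers actually see, you can query the Docker VM directly. Here’s a quick sketch with the Docker SDK for Python (`pip install docker`); on MacOS the total reported here is the Docker Desktop allocation, not your Mac’s full 16 GB:

```python
# Check how much memory the Docker Desktop VM (and therefore the WebODM
# containers) can actually use. Requires a running Docker Desktop.
import docker

client = docker.from_env()
info = client.info()

total_gb = info["MemTotal"] / (1024 ** 3)
print(f"Memory visible inside the Docker VM: {total_gb:.1f} GB")

# Live memory usage per container (similar to `docker stats`)
for c in client.containers.list():
    stats = c.stats(stream=False)
    usage_gb = stats["memory_stats"]["usage"] / (1024 ** 3)
    limit_gb = stats["memory_stats"]["limit"] / (1024 ** 3)
    print(f"{c.name}: {usage_gb:.2f} / {limit_gb:.2f} GB")
```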
Yeah, I’ll try resizing next. It doesn’t look like there is a way to get WebODM to do that for an existing project?
My last test used a Split of 50 and a Split-Overlap of 5. It ended up with an out-of-memory error after 11 hours.
Are splits known to work? If I even remotely understand how they should work, the worst case with a split of 50 and an overlap of 5 meters would yield a submodel of 100 or maybe 150 images, and I know 180 images work. Seems odd.
Correct, image resizing for the entire pipeline must take place prior to Task processing, either during upload using our Resize option, or by passing externally-processed images for upload.
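If you go the externally-processed route, something like this Pillow sketch would work (`pip install Pillow`); the folder names and target size are just placeholders, and the key detail is passing the original EXIF through so the GPS tags survive for reconstruction:

```python
# Sketch: batch-downsize JPEGs before upload, keeping EXIF (GPS/camera tags)
# intact. Adjust paths and the target long edge to suit your dataset.
from pathlib import Path
from PIL import Image

SRC = Path("originals")
DST = Path("resized")
TARGET_LONG_EDGE = 3264  # example target for the longest image dimension

DST.mkdir(exist_ok=True)
for path in SRC.glob("*.JPG"):
    with Image.open(path) as img:
        exif = img.info.get("exif")  # raw EXIF bytes, including GPS tags
        scale = TARGET_LONG_EDGE / max(img.size)
        if scale < 1:
            new_size = (round(img.width * scale), round(img.height * scale))
            img = img.resize(new_size, Image.LANCZOS)
        if exif:
            img.save(DST / path.name, quality=92, exif=exif)
        else:
            img.save(DST / path.name, quality=92)
```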
Yes.
You’re not likely to get enough good common tiepoints in a 5m strip between the submodels. You really want more overlap than not.
What I’m trying to figure out with the splits are settings that will not run out of memory, not necessarily work/merge well. Splitting is supposed to decrease memory requirements and so far I can’t say I’ve seen evidence of that.
If I can process 180 images without a problem, what settings would you recommend for splitting 767 images?
I think I had tried 50/50, which didn’t complete, but 90/75 did, in 75:50:14. This is the first split I’d tried; no Textured Model or Surface Model was generated. Is that normal?
Yes, currently the textured model is not supported during split/merge, but other products should be generated, such as the DEMs, orthophoto, and colorized point cloud.
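For what it’s worth, if you end up scripting these runs, split and split-overlap are just ordinary task options. A minimal sketch with pyodm (`pip install pyodm`), assuming a processing node such as NodeODM listening on the default port 3001; the file list and option values are only examples:

```python
# Minimal sketch: create a split/merge task programmatically with pyodm.
from glob import glob
from pyodm import Node

node = Node("localhost", 3001)
images = glob("resized/*.JPG")  # e.g. the downsized images from earlier

task = node.create_task(images, {
    "split": 90,           # target number of images per submodel
    "split-overlap": 75,   # overlap between submodels, in meters
    "dsm": True,           # example extra option
})

task.wait_for_completion()
task.download_assets("./results")
```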