I’ve got a medium-sized dataset, 1700 45MP images, and I need to maximise the resolution of the ortho. I’ve got a maxed-out Mac Studio with 128GB RAM, and I’m giving it all to Docker, but it runs out of memory at the meshing step with fast-orthophoto and ignore-gsd. Is there an easy way to break a project up into areas or something? Any other tricks? Cheers!
Hi, did you try the split-merge option?
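For reference, split-merge is driven by the `--split` and `--split-overlap` flags. A rough sketch of the invocation, assuming the usual Docker setup (the image count and overlap below are illustrative, not tuned for this dataset, and `/path/to/project` is a placeholder):

```shell
# Split the project into submodels of roughly 400 images each,
# with 150 m of overlap between submodels, then merge the results.
docker run -ti --rm \
  -v /path/to/project:/datasets/code \
  opendronemap/odm \
  --project-path /datasets \
  --split 400 \
  --split-overlap 150 \
  --fast-orthophoto
```

Each submodel is processed independently (which caps peak memory) and the orthophotos are merged at the end.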
Don’t use this flag (ignore-gsd) with larger datasets. It will fail. You can instead set your orthophoto-resolution to a high value with ignore-gsd turned off, and the effect will be much the same. You’ll also save a lot of memory and might stop OOMing, though I haven’t profiled 1700 45MP images to know the memory requirements.
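Concretely, the suggestion above looks something like this (the resolution value is only an example; pick whatever your actual GSD supports, and `/path/to/project` is a placeholder):

```shell
# Leave ignore-gsd off and request the output resolution directly.
# --orthophoto-resolution is in cm/pixel; 1.0 is an example value.
docker run -ti --rm \
  -v /path/to/project:/datasets/code \
  opendronemap/odm \
  --project-path /datasets \
  --fast-orthophoto \
  --orthophoto-resolution 1.0
```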
Also, if you can add swap (I don’t know how Apple handles this, but Brett does), add up to 1-2x your available RAM.
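On the Docker side, per-container limits can express that 1–2x rule of thumb with `-m` and `--memory-swap` (values below assume the 128GB machine; whether the macOS VM actually provides that much swap is a separate question, as discussed below):

```shell
# -m caps RAM for the container; --memory-swap caps RAM + swap.
# 120g RAM + 240g total allows up to 1x RAM of swap on top.
docker run -ti --rm \
  -m 120g --memory-swap 240g \
  -v /path/to/project:/datasets/code \
  opendronemap/odm \
  --project-path /datasets
```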
Finally: at what stage is the OOM?
Docker on macOS currently can’t add more than a single 4GB swap volume.
From what I’ve been seeing with Ventura, you really have no control over how or when the macOS host decides to swap. It’s an opaque process based on dynamic expansion of a hidden VM APFS volume on the root drive, so you need ample room on disk for it to grow into.
Interesting. I thought I remembered you discussing some complications there.
When Docker has full support for the Apple Virtualization Framework, we shouldn’t really need to worry about resource management in terms of static pre-allocation like we do currently; it should be much more dynamic, more in line with the Docker experience on Linux.
Until then, I guess make sure that the drive has ample free space.
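A quick way to sanity-check that from any POSIX shell (using `df -k` for portability; the GB conversion is just 1K blocks divided by 1048576):

```shell
# Report free space on the root volume, which is where the hidden
# VM/swap APFS volume grows on macOS.
avail_gb=$(df -k / | awk 'NR==2 {print int($4/1048576)}')
echo "Free space on /: ${avail_gb} GB"
```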