I know this topic has come up many times before and I’ve read through most, if not all, of the related posts, so please understand that I’ve tried to do my research. I am using the Docker version of ODM on an Ubuntu 18.04 machine in GCP; it is a 16-processor VM with 102 GB of RAM. I have a very large dataset (tens of thousands of images) collected with a high-resolution metric camera. Image dimensions are about 13,000 by 8,000 pixels. Now, I know I can’t process these images in a single go, so I have written a function that breaks my images up into chunks and tries to process those chunks; my goal is to process about 100 images at a time. However, even at about 100 images per chunk, I can’t get ODM to run to completion. I consistently get a memory allocation error after the line in the log that says
`Sorting texture patches... done.`
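For context, here is a simplified sketch of the kind of chunking I’m doing (the paths, batch size, and extensions are illustrative, not my exact code) — each resulting `batch_NNNN` folder then gets its own ODM run:

```python
# Simplified sketch: copy images into batch_NNNN/images/ subfolders
# of batch_size images each, ready for per-batch ODM runs.
import os
import shutil

def make_batches(source_dir, batch_size=100):
    """Split the images in source_dir into batch_NNNN/images/ folders."""
    images = sorted(
        f for f in os.listdir(source_dir)
        if f.lower().endswith((".jpg", ".jpeg", ".tif"))
    )
    for i in range(0, len(images), batch_size):
        batch_dir = os.path.join(
            source_dir, f"batch_{i // batch_size:04d}", "images"
        )
        os.makedirs(batch_dir, exist_ok=True)
        for name in images[i:i + batch_size]:
            shutil.copy2(os.path.join(source_dir, name), batch_dir)
```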
I know these are large images, but I am using a machine with 102 GB of RAM and am still running out of memory. The command I’m attempting to run is
docker run -ti --cpus=8 --rm -v /home/mydir:/datasets/code opendronemap/odm --project-path /datasets --fast-orthophoto --orthophoto-resolution 25 --ignore-gsd --skip-3dmodel --resize-to 1000 --dem-resolution 25 --orthophoto-compression JPEG --max-concurrency 8 --texturing-skip-global-seam-leveling --texturing-skip-local-seam-leveling --mesh-size 20000 --mesh-octree-depth 8
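For readability, here is the same command with one flag per line, with comments giving my (possibly imperfect) understanding of why the memory-relevant flags are there:

```shell
# --fast-orthophoto and --skip-3dmodel: skip the heavy full-3D steps
# --texturing-skip-*-seam-leveling: reduce texturing work
# --resize-to / --orthophoto-resolution / --dem-resolution: work at reduced resolution
docker run -ti --cpus=8 --rm \
  -v /home/mydir:/datasets/code \
  opendronemap/odm \
    --project-path /datasets \
    --fast-orthophoto \
    --orthophoto-resolution 25 \
    --ignore-gsd \
    --skip-3dmodel \
    --resize-to 1000 \
    --dem-resolution 25 \
    --orthophoto-compression JPEG \
    --max-concurrency 8 \
    --texturing-skip-global-seam-leveling \
    --texturing-skip-local-seam-leveling \
    --mesh-size 20000 \
    --mesh-octree-depth 8
```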
The native resolution of the data is about 7–10 cm, so I’ve tried resizing to 1000 pixels and exporting both the DSM and the ortho at reduced resolution (25 cm), thinking that might save me, but alas no. Eventually, I need an ortho at native resolution.
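Part of my reasoning was what `--resize-to 1000` should do to the working resolution. If it rescales the long edge of each image before processing (my understanding; I may be wrong about which pipeline stages it affects), the back-of-envelope math, using my dataset’s numbers, is:

```python
# Rough effect of --resize-to 1000 on ground sample distance (GSD).
# Assumes the resize scales the ~13,000 px long edge down to 1,000 px.
native_width_px = 13_000
native_gsd_cm = 7          # best-case native GSD of my data
resized_width_px = 1_000

scale = native_width_px / resized_width_px
effective_gsd_cm = native_gsd_cm * scale
print(f"Downsample factor: {scale:.0f}x")
print(f"Effective GSD after resize: ~{effective_gsd_cm:.0f} cm")
```

In other words, the resized images carry far less detail than even the 25 cm outputs I’m requesting, which is why I expected the resize to keep memory use well under control.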
Based on many failed runs, I have a few (hopefully) simple questions.
- Am I just out of luck, with no solution other than increasing RAM on my VM until it works? That seems like a bad, not to mention expensive, solution to me.
- I am going to try going below 100 images per chunk to see if that helps. I’ve already worked my way down from 250 images to 100, but I’m afraid that if I go much smaller I won’t get enough coverage to matter.
- I’ve tried running with `--max-concurrency 1`, and that did not help either.
I know memory management has been a big topic here and often the answer is simply “get more RAM”. I’m hoping that maybe there are some new tricks out there or perhaps I’m doing something wrong that someone can shed some light on. Could it be an issue with using either Ubuntu or Google Cloud? I can provide a full log trace if someone wants it.
Thank you very much for any help folks can provide.