Big datasets

I’m having a very hard time completing the processing of a test set of 640 Phantom 4 images. I always get a code 1 exit or run out of disk space. The laptop has 16 GB of RAM and an i7. I’m running this with Docker and have set the storage to 80 GB and the memory to 12 GB. While it runs, memory usage on the computer climbs to around 95% (locked by the VM?). I tried the ‘High Quality’ default, and I also tried enabling the 2.5D surface. I’m using Firefox. Unlike other posts, everything uploads fine. The process runs, and it’s in the ‘merge depthmaps’ stage that things crash.

Is there a combination of settings that is optimised for 500+ images in a 2.5D workflow? In fact, I don’t use DEMs and prefer to work with point clouds. Can WebODM orthorectify from point clouds alone?


Merge depthmaps takes the depth maps computed from stereo pairs and merges them into a single point cloud. It looks like there’s an upstream fix in OpenSfM that addresses this memory issue in merge depthmaps:

Once this gets merged into master, we can test it.
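For intuition on why this stage is memory-hungry, here’s a minimal sketch of what a depthmap merge does conceptually: each stereo pair produces a per-pixel depth map, which is back-projected through the camera intrinsics into 3D points, and the per-pair clouds are concatenated into one point cloud. All names here (`K`, `depth_maps`, `backproject`) are illustrative, not OpenSfM’s actual API:

```python
import numpy as np

def backproject(depth, K):
    """Turn an HxW depth map into an (N, 3) array of camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.ravel()
    # Standard pinhole back-projection: x = (u - cx) * z / fx, etc.
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts = np.column_stack([x, y, z])
    return pts[z > 0]  # drop pixels with no valid depth

# Fake intrinsics and depth maps for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
depth_maps = [np.full((480, 640), 5.0) for _ in range(3)]

# Concatenating every back-projected depth map holds all points in RAM at
# once -- roughly why 640 images can exhaust 12 GB. Streaming or chunked
# writes avoid that, which is the kind of thing an upstream fix targets.
cloud = np.vstack([backproject(d, K) for d in depth_maps])
print(cloud.shape)
```

At a 480×640 resolution, each depth map already contributes over 300k points, so the peak memory of a naive all-at-once merge grows linearly with image count.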
