I can't get past "Not enough memory" errors with 64 gigs at 470 images

I am new to WebODM. I started with a run of 320 20MP images on 32GB of RAM and everything went perfectly; I ran it multiple times on max settings (PC quality ultra, features ultra, brute force, classify PC, etc.). Then I tried a batch of 469 images and ran out of memory, so I upgraded to 64GB of RAM, but I hit the same problem. If I don't split the task, I get an "out of memory" error at about the 42-second mark. If I turn max concurrency down to half of my cores (10/20) and split into 100 images with a buffer of 50, it will run for a few hours before I get an out-of-memory error. I even tried splitting it into smaller chunks of 25 images with only 5 cores, and it either runs out of memory or completely stalls out, with CPU usage going to zero and no progress being made after 30 hours. Based on the claim that memory usage scales linearly with the number of images, 64GB should be enough, and even if it isn't, splitting it down into sets of 25 images should work. My Task Manager also shows full memory utilization by ODM.
The report from my last run and images are in the link.



Please never use --ignore-gsd, which quite commonly causes Out-Of-Memory errors and other issues.

Also, please try appending --auto-boundary to make sure the reconstruction boundaries are kept closer to the data-dense parts of the reconstruction.
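(For reference, if you ever invoke ODM directly from the command line rather than through the WebODM task options UI, the flag is passed like any other option; the volume path and project name below are placeholders:)

```
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets myproject \
  --auto-boundary
```

In WebODM itself, the same option appears as "auto-boundary" in the task's Options panel.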

Finally, how are you running WebODM? Docker? Native?


I'm running ODM in Docker; it looks like Docker has full access to my machine's memory, based on Activity Monitor.


Do you have a decent-sized swap volume set up as well?

2-4x physical RAM is usually a nice buffer for larger projects (1,000+ images).

I’m 32GB RAM + 120GB SWAP locally, for instance.


This. I had 72GB of RAM assigned to my VM and still had issues. I added a swap to match that and I can run about 1k images with no memory error.


I'm not sure how to set up swap memory for Docker containers in Windows. Is there a good guide online?


For Windows, this would be your pagefile (if memory serves me correctly). Set your pagefile to use 64GB of your SSD, restart your computer, and try again.

Here is a quick guide for Windows 10.

NOTE: It doesn't have to be double; it can be as much as you can give for extra memory.


WSL2 uses its own dedicated swap file, which gets stored on-disk as an ext4 .vhdx, similar to the WSL2 volume.

It is always good to have a pagefile for Windows as the host, but Docker/WSL2 will not allow their child processes to touch it directly.

Check the Microsoft Docs on how to set one up:

Also, if you're using the default .wslconfig, WSL2 will not use more than half your RAM, so you might be able to scooch that number up a bit in the .wslconfig as you're setting up your swap file. I would not recommend leaving Windows with anything less than 4GB of RAM to work with, though honestly 8GB is probably safer if you're intending to use the machine at all while it is processing.
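To make that concrete, here is a sketch of what a `.wslconfig` (placed in `%UserProfile%`) might look like for this kind of workload; the key names come from the Microsoft WSL documentation, but the actual sizes and the swap file path below are just example values you should tune to your machine:

```ini
# %UserProfile%\.wslconfig -- example values only, adjust to your hardware
[wsl2]
memory=56GB                     # leave at least 4-8GB for Windows itself
processors=16                   # cores visible to WSL2/Docker
swap=128GB                      # 2-4x physical RAM for larger projects
swapFile=C:\\wsl-swap.vhdx      # backslashes must be escaped in this file
```

After saving, run `wsl --shutdown` from PowerShell and restart Docker Desktop so the new limits take effect.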

120GB swap in Docker? My Docker won't let me slide the setting past 4GB?


Correct, Docker for macOS is a bit more limited in that regard, so 4GB of swap is as large as it will let you use.

What is "ignore-gsd" and how do I set it? I'm doing 504 pics in high resolution and crapping out when it hits 58GB of memory used.


--ignore-gsd is a flag that isn't needed 99.9% of the time and will absolutely lead to crashes and other issues. You don't need it, and you should not set it.

You can try lowering --max-concurrency to see if that takes the peak memory usage down, though bear in mind this will limit how parallel the processing is, so it will take longer.

Just an FYI in case helpful…

I ran a test to process 428 x 20MP images on a 40-thread server (with no GPU) using these settings:
auto-boundary: true, dsm: true, feature-quality: ultra, matcher-neighbors: 40, mesh-octree-depth: 12, mesh-size: 300000, min-num-features: 64000, orthophoto-resolution: 1, pc-geometric: true, pc-quality: ultra, rerun-from: dataset, resize-to: -1

Processing time: 17h51m
Max RAM+SWAP usage: 148GB
Max disk space usage: 154GB


Hi Saijin_Naib, I created a .wslconfig by creating a text file with the recommended syntax to set CPU core, memory, and swap memory limits, and changed the extension to .wslconfig, but Task Manager shows only physical memory being used when processing with WebODM. You mentioned that WSL2 uses its own dedicated swap space; is there something I can check to confirm that Docker has access to swap memory?

Also, another problem I have been experiencing is the dashboard clock progressing very slowly: I just finished processing a photo set that took 22.5 hours of real time, but the WebODM clock says 8 hours. This is probably completely unrelated, but I thought I would mention it in case it is indicative of a greater problem.
Thank you for your help!



If you created the swap volume in your .wslconfig as per the documentation, you will see a swap file at the path you specified. So, it will be NAMEOFSWAPFILE.vhdx. For me, it was in:
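You can also confirm it from the Linux side. A quick sketch, run inside the WSL2 distro (e.g. launch `wsl` from PowerShell), of checking whether the swap you configured is actually visible to Linux:

```shell
# Inside the WSL2 distro: check the swap Linux can see.
# SwapTotal should roughly match the swap size set in .wslconfig.
grep -i swaptotal /proc/meminfo
# The "Swap:" row gives totals and current usage at a glance.
free -h
```

If SwapTotal reads 0, WSL2 did not pick up your .wslconfig (check the filename, location, and that you ran `wsl --shutdown` afterwards).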


Update: For anyone having similar issues running out of memory while trying to get high-quality point clouds, the issue seemed to be fixed by selecting the tile point cloud (pc-tile) option.


That can certainly help, yep! Have you noticed anything different in your point cloud, like seams or gaps? That can sometimes occur, and it can be a driving reason for not using that option, despite the improved memory efficiency.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.