Understanding memory usage and swap

Hello there and happy new year!

I started using WebODM a couple of months ago. I'm processing a few hundred images of a construction site captured with a P4P.
As I've been processing more and more images (500+), I've run into more and more memory issues, and I've taken some steps to mitigate them. First I upgraded the RAM to 64 GB, then I started using swap, which solved my problems partially.

Hardware: AMD Ryzen 7 5700G
RAM: 64 GB
Proxmox PVE environment with an Ubuntu 20.04.3 LTS LXC container running Docker: 12 CPU cores and 50 GB RAM

I've added an extra 200 GB SSD for swapping (I know, it's slower…) and set swappiness to 0 to use this swap.
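For completeness, this is roughly how that setting gets applied (the exact commands I used aren't shown here, and the sysctl.d file name below is just an example):

```shell
# Apply vm.swappiness=0 for the running system (lost on reboot)
sudo sysctl vm.swappiness=0

# Persist it across reboots (file name is an arbitrary example)
echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-swappiness.conf
```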

However, it seems that WebODM still maxes out the memory without touching the swap much.
I'm not getting any out-of-memory errors though…
Is this behaviour to be expected? I would have imagined the swap being used more, not just 1-2 gigabytes when 200 are available.

On my notebook with 16 GB of RAM, running Docker on Windows via WSL2, I've also set up a local swap file, and it gets used extensively, up to 100+ GB.
I would like to understand why the LXC container is not using the swap file, even though the RAM is full, there is enough swap available, and the settings are correct.

I’m happy to provide more details if needed.

Kind Regards,


I'm not an expert, but I think the operating system controls swap usage. Windows normally has a page file that can expand and contract on the fly; it uses it as a kind of safety net, so it always uses it.

When I'm running WebODM on Win10, it uses swap as much as the RAM.

There's one setting you shouldn't use, though, that can make things worse: "ignore-GSD", I think it's called.


Hello Andreas,

In both Windows and Linux I'm controlling the swap explicitly:

  • in Windows I explicitly added the swap file line to the .wslconfig file
  • in Linux I added an SSD solely for this purpose, formatted it for swap, and added the entry to the fstab file.
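Roughly, the two configs look like this (sizes, paths, and the device name below are placeholders, not my exact values):

```shell
# Windows host: %UserProfile%\.wslconfig (read by WSL2 at startup)
#   [wsl2]
#   swap=100GB
#   swapFile=C:\\temp\\wsl-swap.vhdx
#
# Linux: prepare the dedicated SSD and enable it at boot via /etc/fstab
sudo mkswap /dev/sdb                                    # format the disk as swap
echo '/dev/sdb  none  swap  sw  0  0' | sudo tee -a /etc/fstab
sudo swapon /dev/sdb                                    # enable it without rebooting
```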
root@webodm:/home/daniel# lsblk
sdb                         8:16   0  210G  0 disk [SWAP]
root@webodm:/home/daniel# swapon -s
Filename                                Type            Size    Used    Priority
/dev/sdb                                partition       220200956       1070300 100

So the behaviour should be the same?


Okay, then I don’t know the problem.

Swappiness 0 means your kernel will do literally anything it can to avoid touching swap. I don't have documentation on this under Linux, but at least on Windows, any delay in allocating/expanding the swap can cause out-of-memory issues even though the total possible size of the expanded swap would be sufficient. I wonder if similar behaviour can occur with the Linux kernel when it is set to avoid swapping that aggressively.

Do you know what your distro's default sysctl value for swappiness is? I believe my Alpine kernel shipped with something like 65. You could also check what the Microsoft-supplied WSL2 kernel's swappiness value is set to, as a baseline for comparison.
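If you want to check, the live value can be read directly from procfs (a generic sketch, not specific to any distro):

```shell
# Read the kernel's current swappiness value
cat /proc/sys/vm/swappiness
# equivalently: sysctl vm.swappiness
```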

Perhaps adjust it up to see if it behaves a bit better. When I was using Alpine on an EliteBook 2740p with 8GB RAM, I was able to process pretty large datasets by swapping out to SSD.


It's 60 by default; let me change it back to that and try again. Thanks for the tip!
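For the record, the change itself is just this (assuming the value was pinned in a sysctl config file; adjust the file name to match your own setup):

```shell
# Apply the default immediately (lost on reboot)
sudo sysctl vm.swappiness=60
# Then update or remove the persistent vm.swappiness=0 line,
# e.g. in /etc/sysctl.conf or a file under /etc/sysctl.d/
```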

daniel@webodm:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:           49Gi        48Gi       333Mi        13Mi       263Mi        88Mi
Swap:         209Gi        34Gi       175Gi

Looks like it's working all right; let me double-check tomorrow after I wake up. The server is still working on some data.


Thank you Saijin_Najib. Apparently I misread the manual: with swappiness set back to 60, everything is working just fine!


Glad you’re working smoothly!

Show off some stuff when you get a chance :slight_smile:


Sure, will do as soon as I'm satisfied with the results. Or I'll just post a new topic if I run into problems again :slight_smile: