Estimating Memory Requirements - Out of Memory with Ultra Settings

This is a question about estimating memory requirements: given our imagery and settings, such as feature and point cloud quality, how do we size our machine accordingly?

Running WebODM 1.9.10 on Amazon Linux 2, accessed through Chromium 90.x. The Linux box has 64 GB RAM and a 500 GB disk. I have checked the issues on GitHub as well as here but didn’t see anything quite like this. I was running with ultra/ultra (feature/point cloud quality) settings on a 558-image set and received an ‘Out of Memory’ message at 7+ hours. I did not have task output running, but I did have the process output showing workers starting, failing, and restarting. Since it’s a fairly big machine, I might assume this was a container memory issue and will try to do a better job of capturing the output.
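For reference, here is roughly what I plan to run next time to capture the evidence - just standard Linux/Docker tools, nothing WebODM-specific, and container names will vary with your setup:

  # Did the kernel OOM killer fire, and which process did it pick?
  dmesg -T | grep -iE "out of memory|oom-killer"
  # One-shot snapshot of per-container memory use for the WebODM stack
  docker stats --no-stream
  # Overall RAM and swap picture on the host
  free -h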

I am rerunning now with feature quality set back to medium, point cloud quality still at ultra, and task output running.

Here’s the out of memory message.
[screenshot: out-of-memory error]

Here’s a snippet from the process output.

[2021-11-06 00:50:22 +0000] [236] [INFO] Autorestarting worker after current request.
webapp | [2021-11-06 00:50:22 +0000] [236] [INFO] Worker exiting (pid: 236)
webapp | [2021-11-06 00:50:23 +0000] [247] [INFO] Booting worker with pid: 247
webapp | [2021-11-06 00:51:59 +0000] [214] [INFO] Autorestarting worker after current request.
webapp | [2021-11-06 00:51:59 +0000] [214] [INFO] Worker exiting (pid: 214)
webapp | [2021-11-06 00:52:00 +0000] [248] [INFO] Booting worker with pid: 248
webapp | [2021-11-06 00:53:12 +0000] [226] [INFO] Autorestarting worker after current request.
webapp | [2021-11-06 00:53:12 +0000] [226] [INFO] Worker exiting (pid: 226)
webapp | [2021-11-06 00:53:12 +0000] [249] [INFO] Booting worker with pid: 249

Try setting the “split” option to 100-200 images.
Might help

O.K. - I’ve used split before in a ClusterODM setup, but I didn’t make the connection to the containers performing the work.

And you might take a look at the swap space settings on your computer; it might be set too low.

Great point! There’s a good article on the subject for those (like me) not familiar with the modern rules. Note that the old rule of 2x RAM is no longer considered useful. See the tables in the link for recommendations based on Linux version - How to increase swap space in Linux (net2.com)

Thank you!

I use Windows, so it’s pretty much automated.

Thanks again - a couple of things worth passing on in case people are curious:

  1. Recommended swap space size varies with the amount of memory on your device.
  2. Don’t assume any is set up on yours - on the AWS Linux 2 box, there was none configured out of the box.
  3. For a box of that type with 64 GB RAM, the recommended swap was 4 GB.
  4. Remember to set swappiness to something between 1 and 100. The higher the number, the more aggressive the swapping. I set mine to 10 because I’ve got a fair amount of memory and want to discourage going to disk.
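For anyone else on a box with no swap configured, this is roughly what I mean - a minimal sketch using the 4 GB size and swappiness of 10 discussed above; adjust sizes and paths for your own machine:

  # Create and enable a 4 GB swap file (dd also works if fallocate isn't supported)
  sudo fallocate -l 4G /swapfile
  sudo chmod 600 /swapfile
  sudo mkswap /swapfile
  sudo swapon /swapfile
  # Persist it across reboots
  echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
  # Swappiness: 1-100, higher = more aggressive; 10 discourages going to disk
  sudo sysctl vm.swappiness=10
  echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
  # Verify
  swapon --show
  cat /proc/sys/vm/swappiness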


I’ve heard before that swap on SSD isn’t good, but I don’t know if that’s still a fact with the newer drives.

Good reference on that here - Why are swap partitions discouraged on SSD drives, are they harmful? - Ask Ubuntu

The long and short of it is that it’s a lifespan issue. No doubt the PaaS vendors would rather not have excessive swapping on their SSD hardware; however, at the recommended limits for swap, it’s hard to believe drive failure rates are an issue.

I’m thinking about upgrading my RAM to 64 GB; I think I need it now.

Hi Andreas,

For what it is worth (and on another post), I got a good recommendation from Cherrmax to consider renting server space online as a cheaper option. I bought a used R610 (96 GB / 1.6 TB) to “practice” on, but would go that way in the future.

For your specific question above: at 96 GB RAM (10 GB swap), I find that 150 photos at ultra/ultra is about what I can do. Beyond that, as long as there is enough space on the partition, the sky seems to be the limit (I have run 1,250 images split this way). I would recommend splitting at 100 images with a 50 m overlap. That worked on my son’s gaming computer, a quad-core 64 GB laptop.
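If it helps, this is roughly what those settings look like as ODM options - a sketch only; the dataset path and project name are placeholders, and in WebODM the same values go into the “split” and “split-overlap” task options:

  # Sketch: split a large set into ~100-image submodels with 50 m overlap
  docker run -ti --rm -v /path/to/datasets:/datasets opendronemap/odm \
    --project-path /datasets my_project \
    --split 100 --split-overlap 50 \
    --feature-quality ultra --pc-quality ultra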

best


So what do you recommend for me, with 32 GB RAM, a 1 TB SSD, and 6 cores / 12 threads?

Swapping to SSDs isn’t that bad. Modern SSDs have incredible wear-management technology.

As for not needing much, it depends. If you run short on RAM, you can lean on swap/page. It’ll just be slow.

I was able to process a dataset here on 32GB with the experimental “original” quality; it kept crashing out of memory until I gave myself about 256GB of pagefile.

As for estimation, it is tricky. I’m still trying to work out how each stage stores things in memory.

Hi Andreas - when I was trying to figure my stuff out (a bit like throwing a bunch of typewriters at a troop of monkeys…), I ended up using top and df like they were best friends. Without knowing better, I would keep the images around 70?
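Something like this, run in a second terminal while the task grinds away (the /var/lib/docker path is just where Docker keeps its data by default - point df at whatever disk your datasets live on):

  # Refresh memory, swap, and disk usage every 30 seconds
  watch -n 30 'free -h; echo; df -h /var/lib/docker'
  # Or watch the heaviest processes interactively, sorted by memory
  top -o %MEM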

I don’t know the first thing about how to open up my son’s laptop to the web, but I would love to muck around with another dataset (no pressure!). So if you are free to share, I am more than happy to fire it up… it is just taking up space on my desk now…

Running dense reconstruction now from 550 images at 20 MP resolution, with 50% of RAM (32 GB) in use.

Oh - I am running Docker under Fedora 33. I don’t know if this makes a difference. I haven’t trusted Windows since NT.

Right, but:

  1. How many bands per image
  2. Bit-depth per band
  3. Resolution of images
  4. Processing parameters

The interplay of all these factors is really tough to suss out.

1 Like

It’s from a Mavic 2 Pro, 20 MP JPG.
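For a very rough sense of scale - assuming the Mavic 2 Pro’s 5472x3648 frame with 3 bands at 8 bits each, so this covers only the decoded pixels, not what each pipeline stage actually holds:

  # Back-of-envelope size of one decoded 20 MP RGB frame in memory
  echo $((5472 * 3648 * 3)) bytes   # ~60 MB per image once decompressed
  # All 550 frames decoded at once would be ~33 GB of raw pixels alone,
  # which is why settings and splitting matter far more than JPG size on disk.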
