This is a question about estimating memory requirements: given our imagery and settings (feature quality, point cloud quality, etc.), how do we size our machine accordingly?
Running WebODM 1.9.10 on AWS Linux 2, browser Chromium 90.x. The Linux box has 64 GB RAM and a 500 GB disk. I have checked the issues on GitHub as well as here but didn't see anything quite like this. I was running with Ultra/Ultra (feature/point cloud quality) settings on a 558-image set and received an 'Out of Memory' message at 7+ hrs. I did not have task output running, but I did have the process output showing workers starting, failing, and restarting. Since it's a fairly big machine, I suspect this was a container memory issue and will try to do a better job of capturing the output (a rough sketch of what I plan to watch is below).
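For anyone wanting to do the same, this is roughly what I plan to run alongside the next attempt. The container name is a placeholder; check `docker ps` for the actual names your WebODM install uses:

```bash
# Live per-container CPU/memory usage while the task runs
docker stats

# In a second terminal, capture the processing node's log to a file for later review
# (<node-odm-container> is a placeholder; get the real name from `docker ps`)
docker logs -f <node-odm-container> 2>&1 | tee nodeodm.log

# After a crash, check whether the kernel OOM killer terminated anything
sudo dmesg -T | grep -i "out of memory"
```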
I am rerunning now with feature quality set back to Medium, point cloud quality still at Ultra, and task output running.
Great point! Great article here on the subject for those (like me) not familiar with modern rules. Note that the old rule of 2x RAM is no longer considered useful. See the tables in the link for recommendations based on Linux version - How to increase swap space in Linux (net2.com)
Thanks again - a couple of things worth passing on in case people are curious:

1) Recommended swap space varies with the amount of memory on your device.
2) Don't assume any is set up on yours; on the AWS Linux 2 box there was none out of the box.
3) For a box of that type with 64 GB RAM, the recommendation was 4 GB of swap.
4) Remember to set swappiness (valid range 0-100). The higher the number, the more aggressively the kernel swaps. I set mine to 10 because I've got a fair amount of memory and want to discourage going to disk.

A rough sketch of the commands I used is below.
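This is roughly what that looked like on the Amazon Linux 2 box; a sketch, not gospel, and the 4 GB size is just what was recommended for 64 GB RAM, so adjust for your machine:

```bash
# Create and enable a 4 GB swap file (none is configured by default on Amazon Linux 2)
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Lower swappiness so the kernel prefers RAM (1 = avoid swapping, 100 = swap aggressively)
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

# Verify
swapon --show
free -h
```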
The long and short of it is that it's a lifespan issue. No doubt the PaaS vendors would rather not have excessive swapping on their SSD hardware; however, at the recommended swap limits, it's hard to believe drive failure rates are an issue.
For what it is worth (and from another post), I got a good recommendation from Cherrmax to consider renting server space online as a cheaper option. I bought a used R610 (96 GB RAM / 1.6 TB disk) to "practice" on, but I would go that way in the future.
For your specific question above: at 96 GB RAM (10 GB swap), I find that about 150 photos at Ultra/Ultra is what I can do. Beyond that, as long as there is enough space on the partition, the sky seems to be the limit with split-merge (I have run 1250 images split this way). I would recommend splitting at 100 images with a 50 m overlap (example command below). That worked on my son's gaming computer, which is a quad-core 64 GB laptop.
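In case it helps, the split settings I mean are ODM's --split and --split-overlap options. I set them through the WebODM task options, but if you drive ODM directly it looks something like this (the dataset path and project name are placeholders for your own layout):

```bash
# Split-merge run: submodels of ~100 images, 50 m overlap between them
docker run -ti --rm -v /path/to/datasets:/datasets \
  opendronemap/odm \
  --project-path /datasets <project name> \
  --split 100 --split-overlap 50 \
  --feature-quality ultra --pc-quality ultra
```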
Swapping to SSDs isn’t that bad. Modern SSDs have incredible wear-management technology.
As for not needing much, it depends. If you run short on RAM, you can lean on swap/page. It’ll just be slow.
I was able to process a dataset here on 32 GB RAM with the experimental "original" quality; it crashed with Out-of-Memory until I gave myself about 256 GB of pagefile.
As for estimation, it is tricky. I’m still trying to work out how each stage stores things in memory.
Hi Andreas - when I was trying to figure my stuff out (a bit like throwing a bunch of typewriters at a troop of monkeys…), I ended up using top and df like they were best friends (a quick one-liner is below). Without knowing better, I would probably keep the image count around 70.
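If anyone is curious, "using top and df like best friends" amounted to something like this in a spare terminal while the task ran:

```bash
# Refresh memory, swap, and disk usage every 30 seconds
watch -n 30 'free -h; echo; df -h /'
```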
I don’t know the first thing about how to open up my son’s laptop to the web, but I would love to muck around with another dataset (no pressure!). So if you are free to share, I am more than happy to fire it up… it is just taking up space on my desk now…