Memory usage

When I started using ODM I had 32GB of RAM, and quite a few times I ran out of memory, so I spent up big on another 64GB to give me 96GB in total.
However, now even with sets of over 5000 images, WebODM + Python processes rarely use any more than 15GB, often <10GB, and never more than 20GB.
Have I wasted my money increasing RAM, or is there a way to make WebODM use more of it?

Max RAM use is often in spikes, and isn’t likely to be caught when you’re monitoring it. For 5k images, you probably do need 96GB RAM.
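That is the catch with eyeballing Task Manager: a poller only sees a spike if a sample happens to land inside it. A tiny illustration with synthetic numbers (this is not a real memory probe, just made-up readings):

```python
def sampled_peak(usage_gb, sample_every):
    """Peak memory as seen by a poller that only looks at every Nth reading."""
    return max(usage_gb[::sample_every])

# One short allocation spike inside an otherwise flat run:
usage = [12, 13, 12, 13, 74, 12, 12, 13, 12, 13]
print(max(usage))               # 74 <- the true peak
print(sampled_peak(usage, 3))   # 13 <- a poller checking every 3rd reading misses it
```

The same logic applies whether "every Nth reading" is a script or a human glancing at Task Manager every few minutes: a bundle-adjustment spike that lasts seconds can blow the budget without ever being observed.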


I have spent a lot of time watching! I just had a look at the extra columns available in Task Manager.

For the largest Python process:
Peak working set (memory) for python.exe: 6,853,652 K

I’m not an expert, but presumably this is why I’ve never seen the several Python processes use more than 10GB in total out of the 96GB available.

I’m not sure what has changed between the earlier runs, when I ran out of memory and >25GB was a common sight, and now, when it’s always <20GB.

I usually look at Commit Size as the indicator of how much memory a process is asking for and needs, since it also includes Virtual Memory size. Working Set should be just what is physically backed by RAM at that exact moment, ignoring Virtual Memory entirely.

If your commit charge goes above Physical + Page, you’re going to OOM crash even if you don’t see it exceed RAM size in Peak Working Set.
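To make that rule concrete, here is a minimal sketch in plain Python (the numbers are hypothetical, and this does not query the OS): a process can hit an OOM failure once total commit charge exceeds physical RAM plus the page file, even if its working set never looks alarming.

```python
def will_oom(commit_charge_gb: float, physical_ram_gb: float, page_file_gb: float) -> bool:
    """Commit charge is memory the OS has promised to back with RAM or page file;
    once the total exceeds that backing, allocations start failing."""
    return commit_charge_gb > physical_ram_gb + page_file_gb

# Working set can look harmless while commit charge is the real limit:
print(will_oom(commit_charge_gb=70, physical_ram_gb=96, page_file_gb=16))   # False: still fits
print(will_oom(commit_charge_gb=120, physical_ram_gb=96, page_file_gb=16))  # True: "bad allocation"
```

This is why Commit Size is the more useful column to watch here: Peak Working Set can stay well under RAM size right up until the crash.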


Working set does vary over time, but the Peak value I posted hasn’t changed in the hours I’ve been keeping an eye on it. Commit size is varying a bit, but generally around 7.6GB, still way, way short of what is available. Paged is <1MB and it appears that Virtualisation is disabled for Python.
Currently 49 hours into the 6894 image set.

2021-10-23 14:38:40,274 DEBUG: Matching DJI_0227_5.JPG and DJI_0228_5.JPG. Matcher: FLANN (symmetric) T-desc: 216.664 T-robust: 0.106 T-total: 216.834 Matches: 15265 Robust: 15237 Success: True
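For what it’s worth, OpenSfM progress lines like that one are regular enough to parse, which beats scrolling the console for hours. A small sketch (the field names in the result dict are my own, not anything ODM defines):

```python
import re

# Matches OpenSfM matcher lines like the one quoted above.
LINE = re.compile(
    r"Matching (?P<a>\S+) and (?P<b>\S+)\. Matcher: (?P<matcher>.+?) "
    r"T-desc: (?P<t_desc>[\d.]+) T-robust: (?P<t_robust>[\d.]+) "
    r"T-total: (?P<t_total>[\d.]+) Matches: (?P<matches>\d+) "
    r"Robust: (?P<robust>\d+) Success: (?P<ok>\w+)"
)

def parse_match_line(log_line: str) -> dict:
    m = LINE.search(log_line)
    if not m:
        return {}
    d = m.groupdict()
    return {
        "pair": (d["a"], d["b"]),
        "t_total": float(d["t_total"]),
        "matches": int(d["matches"]),
        "robust": int(d["robust"]),
        "success": d["ok"] == "True",
    }

line = ("2021-10-23 14:38:40,274 DEBUG: Matching DJI_0227_5.JPG and DJI_0228_5.JPG. "
        "Matcher: FLANN (symmetric) T-desc: 216.664 T-robust: 0.106 T-total: 216.834 "
        "Matches: 15265 Robust: 15237 Success: True")
print(parse_match_line(line))
```

Feeding the whole log through this would let you plot per-pair match times and spot where the run starts to drag.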

OK, adding images to the reconstruction is where memory use gets a bit more intensive! Commit is up around 70GB now that I’m over 100 hours in.


Unfortunately it has failed after more than 115 hours with “Cannot process dataset” :worried:

End of the console log:

2021-10-26 06:11:18,214 INFO: DJI_0494_11.JPG resection inliers: 3071 / 3153
2021-10-26 06:11:18,418 INFO: Adding DJI_0494_11.JPG to the reconstruction
2021-10-26 06:11:32,705 INFO: -------------------------------------------------------
2021-10-26 06:11:33,090 INFO: DJI_0532_11.JPG resection inliers: 2970 / 3007
2021-10-26 06:11:33,293 INFO: Adding DJI_0532_11.JPG to the reconstruction
2021-10-26 06:11:33,512 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
Traceback (most recent call last):
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\bin\opensfm_main.py", line 15, in <module>
    commands.command_runner(
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\commands\command_runner.py", line 38, in command_runner
    command.run(data, args)
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\commands\command.py", line 12, in run
    self.run_impl(data, args)
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\commands\reconstruct.py", line 11, in run_impl
    reconstruct.run_dataset(dataset)
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\actions\reconstruct.py", line 9, in run_dataset
    report, reconstructions = reconstruction.incremental_reconstruction(
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\reconstruction.py", line 1348, in incremental_reconstruction
    reconstruction, rec_report["grow"] = grow_reconstruction(
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\reconstruction.py", line 1284, in grow_reconstruction
    brep = bundle(
  File "E:\WebODM\resources\app\apps\ODM\SuperBuild\install\bin\opensfm\opensfm\reconstruction.py", line 90, in bundle
    report = pysfm.BAHelpers.bundle(
MemoryError: bad allocation

[ERROR] The program could not process this dataset using the current settings. Check that the images have enough overlap, that there are enough recognizable features and that the images are in focus. You could also try to increase the --min-num-features parameter. The program will now exit.


And below the console:

It looks like your processing node ran out of memory. If you are using docker, make sure that your docker environment has enough RAM allocated. Alternatively, make sure you have enough physical RAM, reduce the number of images, make your images smaller, or reduce the max-concurrency parameter from the task’s options. You can also try to use a cloud processing node.
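On the max-concurrency suggestion in that message: fewer parallel workers means fewer large allocations alive at the same time. A back-of-the-envelope helper for picking a value (the per-worker figure is a made-up, workload-dependent guess, not something ODM reports):

```python
def suggest_max_concurrency(ram_gb: float, per_worker_gb: float, cpu_count: int) -> int:
    """Cap parallel workers so their combined peak stays under the RAM budget.
    per_worker_gb is a rough assumption you'd refine by watching a real run."""
    by_memory = int(ram_gb // per_worker_gb)
    return max(1, min(cpu_count, by_memory))

# e.g. 96 GB RAM, guessing ~12 GB peak per worker on a large dataset, 16 cores:
print(suggest_max_concurrency(96, 12, 16))  # 8
```

Note the trade-off: dropping max-concurrency lengthens an already multi-day run, so it is usually the lever to pull only after split/merge and smaller images.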


Does the final message override the console error suggestions?

Full console log here:

If it did run out of memory, would split/merge allow it to be processed?


Split/merge likely would help.
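Roughly why it helps: ODM's --split and --split-overlap options cut the dataset into submodels that are reconstructed independently and then merged, so the bundle adjustment never has to hold all ~7000 images at once. A toy sketch of the chunking idea (in ODM, --split is an average image count per submodel and --split-overlap is a radius in metres around each submodel; this sketch uses plain image counts for both, and the real split clusters by camera position rather than list order):

```python
def split_into_submodels(images: list, split: int, overlap: int) -> list:
    """Toy version of split/merge chunking: fixed-size groups that share a few
    images with the next group so the submodels can be aligned and merged."""
    submodels = []
    start = 0
    while start < len(images):
        end = min(start + split, len(images))
        # extend each chunk with up to `overlap` images from the next chunk
        submodels.append(images[start:min(end + overlap, len(images))])
        start = end
    return submodels

names = [f"DJI_{i:04d}.JPG" for i in range(6894)]
chunks = split_into_submodels(names, split=1000, overlap=50)
print(len(chunks), len(chunks[0]))  # 7 1050 -> seven submodels, the first holding 1050 images
```

The shared images in the overlap are what let the merge step stitch the independent reconstructions back into one model.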

You could also try the auto-boundary flag that was just recently added if you’re fully up to date.


I have the latest build, but don’t see the auto-boundary option listed in edits.
Closest I could see was: use-hybrid-bundle-adjustment: true


Hmm… It should be present with ODM 2.6.5 and above.

Only ODM under Linux etc? I’m using WebODM and 1.9.7 build 32 is the latest according to the update tool.

Ah, yep. If you’re sending the job to Lightning, which is on v2.6.5, you can make use of it. The node-odm nodes included locally with WebODM are still v2.6.4.

I think it’ll be pretty neat:
[image]


Is there a quick guide to using Lightning somewhere? I’ve looked in a few places but not found one.
While I still have ~200GB of images uploaded onto Google for my client (who are using DroneDeploy for the ortho), I’m wondering if I can use those images to produce an ortho of ~2/3 of the property for my own interest, and to compare with DD.

What do you need for Lightning?

I think the website is a good primer, but I’m interested in what gaps you find.

With a limit of only 3000 images for the most expensive option, split-merge would be handy, but you already know that! :wink:
At this stage I know I can do larger sets than that locally, although no luck so far with >6000 images; split-merge is still in progress on the nearly-7000-image set.

My main query was: can you point it at Google One to load the images, or do they have to come from your local computer?


Haha, yes, split/merge would be wonderful on Lightning.

Ah, no, not at the moment. “Side-loading” from cloud providers isn’t supported at this juncture, so it’d have to be from the local machine.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.