WebODM stuck when processing huge datasets (>5000 pictures), please help!

Hi everyone,

I’ve been using WebODM for a long time and I absolutely love it. Lately I’ve had to process very large datasets (3000, 5000, 9000 pictures) and have run into problems.

My NodeODM:

  • 2 TB RAM
  • 128 CPU cores
  • 512 GB disk space

Usually I process with pc-quality: ultra, but I couldn’t make it work with the above datasets, so I tuned everything down to the following settings:

auto-boundary: true, dem-gapfill-steps: 6, dem-resolution: 4, dsm: true, dtm: true, mesh-octree-depth: 11, optimize-disk-space: true, orthophoto-resolution: 1, pc-quality: high, use-hybrid-bundle-adjustment: true
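For anyone driving NodeODM programmatically, the same settings can be written as an options dict for NodeODM’s task API; a minimal sketch, assuming the pyodm client (where the dict would be passed as node.create_task(image_paths, options)):

```python
# The settings above as a NodeODM options dict; the option names
# match the WebODM task options one-to-one.
options = {
    "auto-boundary": True,
    "dem-gapfill-steps": 6,
    "dem-resolution": 4,
    "dsm": True,
    "dtm": True,
    "mesh-octree-depth": 11,
    "optimize-disk-space": True,
    "orthophoto-resolution": 1,
    "pc-quality": "high",
    "use-hybrid-bundle-adjustment": True,
}
```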

With these settings I got the 3000-picture dataset to process, but the others have been stuck for a few days at:

2022-07-01 07:40:15,800 INFO: -------------------------------------------------------
2022-07-01 07:40:15,821 INFO: P2103584.JPG resection inliers: 727 / 759
2022-07-01 07:40:15,842 INFO: Adding P2103584.JPG to the reconstruction
2022-07-01 07:40:16,045 DEBUG: Ceres Solver Report: Iterations: 9, Initial cost: 5.503326e+02, Final cost: 5.188912e+02, Termination: CONVERGENCE
2022-07-01 07:40:46,142 INFO: Removed outliers: 130
2022-07-01 07:40:51,293 INFO: -------------------------------------------------------
2022-07-01 07:40:51,316 INFO: P2103585.JPG resection inliers: 724 / 747
2022-07-01 07:40:51,338 INFO: Adding P2103585.JPG to the reconstruction

Can someone help me with the settings or offer other suggestions? I don’t mind if a dataset runs for several days or even weeks, but I wouldn’t want to reduce the quality settings too much, since that would degrade the quality of the products.

Thank you in advance for any help!

PS: Sadly, I can’t provide you with the datasets.


Did you try the *split* options? I’m just having a look at the docs, so sorry if this doesn’t apply to your issue at all; it just sounds like it could help in your case…
Also *dem-decimation* (no idea what it does, maybe someone can explain)?
I see I edited my post after your reply. Again, I’m just reading the docs for those options, and it seems they might be related.
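From the docs, the split/merge workflow seems to be controlled mainly by two task options: split (the target average number of images per submodel) and split-overlap (the overlap between submodels, in meters). A rough sketch of what illustrative values would mean for the dataset sizes in this thread; the 1500/150 numbers are examples only, not a recommendation:

```python
import math

# Illustrative split/merge values (examples only, not a recommendation):
split = 1500         # target average number of images per submodel
split_overlap = 150  # overlap between submodels, in meters

# Rough submodel count for the dataset sizes in this thread; ODM clusters
# images spatially, so real submodel sizes will vary around the target.
for n_images in (3000, 5000, 9000):
    print(f"{n_images} images -> about {math.ceil(n_images / split)} submodels")
```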


I tried a cluster before, but without much luck. The individual submodels were still pretty big and would get stuck. Also, from what I’ve read, a cluster doesn’t give you a textured 3D model, but I really need one.


What is your virtual memory like? How much swap/pagefile do you have?

It’s all real memory. I just checked: 2.4 TB memory, 900 MB swap.

I actually use two kinds of machines: one is a physical server, the other a cloud server with similar specs. Same behaviour on both.
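In case it helps anyone checking the same thing: on Linux these numbers come straight from /proc/meminfo. A small sketch, assuming a Linux system (the script just parses the file, which reports values in KiB):

```python
def meminfo_kib(text):
    """Parse /proc/meminfo-style text into a {field: KiB} dict."""
    values = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts and parts[0].isdigit():
            values[key.strip()] = int(parts[0])  # /proc/meminfo reports KiB
    return values

try:
    with open("/proc/meminfo") as f:
        info = meminfo_kib(f.read())
    print(f"RAM:  {info['MemTotal'] / 2**20:.1f} GiB")
    print(f"Swap: {info['SwapTotal'] / 2**20:.1f} GiB")
except FileNotFoundError:
    pass  # not a Linux system
```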


What’s the reason for setting dem-gapfill-steps=6?
I notice that even in Smather’s recommended settings for best quality, he only used dem-gapfill-steps=3.


Did you check disk space during processing? 512 GB is not much for a huge dataset, as there are a lot of temporary files…
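A quick way to keep an eye on free space while a task runs (standard-library sketch; "/" is just an example mount point, point it at wherever your node stores task data):

```python
import shutil

# Report free space on the drive that holds the node's task data.
# "/" is just an example mount point.
total, used, free = shutil.disk_usage("/")
print(f"free: {free / 2**30:.0f} GiB of {total / 2**30:.0f} GiB")
```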


I don’t remember where I got this setting. I might try reducing it, but I can’t imagine it’s stuck because of the DEM generation. Can you link the recommended settings post you mentioned? Thank you!

Yeah, that was something I looked into, and it’s also why I use the optimize-disk-space: true parameter. As mentioned, I’m trying this on different machines, and my other one has 1 TB of disk space, so I assume that’s not the problem here.


I can’t remember the thread, but these were Smather’s recommended settings for best quality orthophotos:
auto-boundary: true, dsm: true, feature-quality: ultra, matcher-neighbors: 40, mesh-octree-depth: 12, mesh-size: 300000, min-num-features: 64000, orthophoto-resolution: 1, pc-geometric: true, pc-quality: ultra, rerun-from: dataset, resize-to: -1


I have a follow-up question: who has processed datasets this huge or larger? What specs did your machine have, and what settings did you use? Thank you very much!


Try this topic; I’ve managed to process 7000 images of the WebODM sample dataset on a potato compared to your NASA rig.


me and my rig on the daily :sob:


Thank you for the link! I’m very curious how you managed to process 7000 images, great work! But I couldn’t quite work out how you did it. Your last entry was:

So did you actually manage to reproduce your success, and what settings did you use?


Hm, it ran, as far as I can remember… It took 5 days, though.
I think the settings were like this:

The secret sauce for me was swap; after adding swap, almost all my issues were solved.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.