Why is my dataset only processing a section down the middle

I have a dataset of images taken at 45 degrees. There are 32 parallel rows of approximately 65 images in each. I am using the standard ‘High Resolution’ setting available on WebODM…
ignore-gsd: true, dsm: true, pc-quality: high, dem-resolution: 2.0, orthophoto-resolution: 2.0
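For reference, a sketch of those same settings expressed as a task-options dictionary, e.g. for submitting a task programmatically with the pyodm client (the host, port, and file names in the commented lines are assumptions, not from this thread):

```python
# Sketch only: the WebODM settings quoted above as an options dict.
options = {
    "ignore-gsd": True,   # note: later in this thread, dropping this is advised
    "dsm": True,
    "pc-quality": "high",
    "dem-resolution": 2.0,
    "orthophoto-resolution": 2.0,
}

# With a local NodeODM instance running, submission would look roughly like:
# from pyodm import Node
# node = Node("localhost", 3000)
# task = node.create_task(["row01_img01.jpg", "row01_img02.jpg"], options)
```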

When I run the images in blocks of 6 rows, it generates a point cloud successfully.
e.g. Rows: 1-6, 3-8, 5-10, …, 27-32
But when I process all 32 rows at the same time, it only generates a point cloud for a section in the middle… see image with cameras displayed.
I would like to get a point cloud and textured image from the complete dataset rather than having to stitch different sections together.
Any advice is appreciated.
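For clarity, the overlapping 6-row blocks described above (1-6, 3-8, ..., 27-32) can be generated like this; the helper name is mine, not from any tool:

```python
# Generate overlapping row blocks: block_size rows per chunk,
# stepping the starting row by `step` each time.
def row_blocks(total_rows=32, block_size=6, step=2):
    blocks = []
    start = 1
    while start + block_size - 1 <= total_rows:
        blocks.append((start, start + block_size - 1))
        start += step
    return blocks

print(row_blocks())  # (1, 6), (3, 8), ..., (27, 32)
```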

Could you maybe try disabling pre-matching (--matcher-distance, --matcher-neighbors), and perhaps upping --min-num-features to 12000?

Also, for the sake of avoiding issues, could you drop ignore-GSD for now?

If you can share the data, even better. I’ll try to process locally.

Ok. It is taking around 4 hours to process so I will let this run and let you know how it goes.

Hi, I have been trying to process the data with the settings you suggested, but I get the following:

This is a dataset of 1923 images that processed last week. Now I can only process smaller datasets, e.g. 131 images processed successfully in 37 mins. I cancelled a 260-image run after hours of similar DEBUG messages.
Should it be printing ‘matching’ debug messages if matcher-neighbors and matcher-distance are both 0?
Maybe this should be a new topic?

As before, can you disable ignore-gsd? It is very computationally intensive and can make tasks take inordinately long.

Ah… I see. I will run it again.
I was experimenting earlier and reinstalled WebODM, but now the only node available is the Lightning node. Do I need to install NodeODM and set one up locally? Seems a bit weird, as last time I didn’t have to do this.

It depends how you have WebODM installed. Is this the WebODM for Windows (native)? If so, you shouldn’t need to do anything special for the local node to be installed. A complete removal of the WebODM directory and clean re-install might be warranted.

I am running it again on another laptop. But it is not as powerful.
I can’t believe I overlooked disabling ignore-gsd!
And I will reinstall on my pc.

Not a problem! It can be a desirable flag to have, but if you are troubleshooting, at best it makes things take far longer than they need to, and at worst, it can cause things to fail.

No joy on reinstall… still only have Lightning node.
I am installing using https://webodm.net/static/downloads/WebODM_Setup.exe
And after uninstalling, I remove the folder from C:\Users\Declan\AppData\Local\Programs
Maybe I need to remove registry keys manually…

That installer is the WebODM for Windows interface to the docker-provided NodeODM (the older installer).

Did you purchase WebODM for Windows? If so, send an email to [email protected] so I can send you the new/native installer link.

Yep, I was using the wrong file.
I have reinstalled and the node is there.
I’m running with the correct settings. I’ll update you in the morning.
I appreciate your help.


No joy yet… It is still struggling along with messages like:

And my other laptop is at 14 h 10 min with the same settings.
Previously this was processing in approx. 4 hours (with poor results).
I am going to cancel it and try with smaller chunks again. I will investigate what the threshold for the number of images is. If you have other suggestions, please let me know.

What messages in that log screenshot are concerning you?

None… just the time.
I take it from your comment that if I leave this running, it will eventually complete processing…
I guess increasing the min number of features to 12000 from 8000 increases the processing time by more than 50%.

It can certainly have an outsize impact on larger datasets, sure. I’m not sure about 50% more, but it does increase RAM consumption and CPU time for finding/extracting the features. There is some sort of non-linear relationship, but I have not yet quantified it.
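As a back-of-envelope illustration of why this scales non-linearly (my own rough model, assuming brute-force descriptor matching where each image pair costs roughly the product of the two feature counts, which is a simplification of what the pipeline actually does):

```python
# Hypothetical model: per-pair matching cost ~ features_a * features_b,
# so raising min-num-features from 8000 to 12000 (1.5x per image)
# multiplies the per-pair matching cost by (1.5)^2 = 2.25.
def matching_cost_ratio(old_features, new_features):
    return (new_features / old_features) ** 2

print(matching_cost_ratio(8000, 12000))  # 2.25
```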

I am getting a better understanding of what is happening.
My dataset with 2000 images processed in 4 hours because it only actually processed a few hundred images down the middle.
When it processes the full dataset, it will probably take more than a day. And the relationship between the number of images and time is probably not linear either, though I guess the number of matches (and time) may tend towards linear if the additional images extend in one direction and maintain a similar spacing.
Thanks for your advice and help. I am doing various tests but I expect I need to be more patient!
I’ll post back here in a few days.

Hi Declan, Brett,

A few things: yes – kick out the --ignore-gsd flag. We should probably remove that from the default high-resolution settings (if it’s there) because it is probably the most dangerous accidentally-set flag we have. :smiley:

As to --matcher-neighbors and --matcher-distance, I would only set these to 0 on moderately small datasets. For anything modestly large, this is probably an anti-pattern. Instead, just increase them. One of the challenges with non-nadir datasets is making these large enough (but not so large that the SfM stage takes too long to compute). Maybe just set --matcher-neighbors to 20, or at most 30, and see if more of the scene reconstructs.

Setting this pair of matcher settings to 0 can result in a somewhat exponential increase in processing times, somewhat ameliorated by indexing of features if you use Bag of Words matching. But it is usually better to optimize those values to the low 10s of images instead.
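To make the scaling concrete (my own rough model, not the pipeline’s actual matcher accounting): exhaustive matching considers every image pair, while limiting to k neighbors considers only about n·k/2 pairs.

```python
# Candidate match pairs for n images under two strategies (rough model):
def exhaustive_pairs(n):
    # every unordered pair of images: n*(n-1)/2 -> quadratic growth
    return n * (n - 1) // 2

def knn_pairs(n, k):
    # each image matched against ~k neighbors; each pair counted once
    return n * k // 2

# For the 1920-image dataset in this thread:
print(exhaustive_pairs(1920))   # 1842240 candidate pairs
print(knn_pairs(1920, 20))      # 19200 candidate pairs
```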


Hi guys,
Thanks for your help with this.
Brett’s suggestion to set --matcher-neighbors and --matcher-distance to 0 worked and gave great results when I ran it on 500 images in one corner of the scene. It took 16 hours to run.
I then ran all 1920 images with --matcher-neighbors set to 20 and it completed in 10 hours, also with great results. I will use this setting for future runs.
The images were taken at 45 degrees and seem to be capturing the areas under trees fairly well. I think it would be a great result if I could get the 45 degree images from 4 or even 8 directions. (N, S, E, W etc.)
For now, I just have 2 directions but the resulting textured model has very few holes! I think the 45 degrees works well.


Sounds like Pictometry data, which, yeah, is pretty excellent in my experience!

Show some off (if and when you can)!