Incomplete Orthomosaic

OK… I'll take care of that in upcoming flights.


Actually, the process failed. Here's the full log:
console (5).pdf (1.7 MB)


Really odd error! Never seen this one before, so I’m trying to learn more about it.

For now, could you retry from load dataset, but this time:
drop --use-3dmesh
add --auto-boundary

Again I got only a very small fraction of the desired result.

Is there anything else I can try?


Huh… This is really odd.

Can you:
change --crop 0
drop --fast-orthophoto
change --feature-type orb
change --matcher-neighbors 0

It has been more than 6 hours and the matching process is still going on. :open_mouth:


Yeah, we’re skipping all geometric pre-matching, but since we’re using ORB, it should still be significantly faster than it would be under other feature types.

This is a longshot, unfortunately. I just want to see if this helps in any way.


The matching process is still ongoing. Anyway, just a few questions if you don't mind:

  1. Why is a random, incomplete orthomosaic generated? Are random features detected every time?
  2. Which parameters could help with this type of dataset, where there is a lot of vegetation?
  3. I have seen a few posts in this community related to agriculture and vegetation crops where the generated orthomosaic was incomplete. So, is WebODM unable to process this type of dataset?
  1. Feature detection and extraction are non-deterministic, to my understanding.
  2. --feature-quality ultra, --resize-to -1, --min-num-features, --matcher-neighbors.
  3. No, there is no inability to process these datasets. Oftentimes there are factors in the data collection, or the data itself, that lend it to being more or less difficult to reconstruct. If I'm not mis-remembering, I think we've gotten most folks to be pretty successful with their datasets. I hope we can do the same for you.
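For reference, those flags would be combined in an ODM run roughly like this. The dataset path, project name, and the specific values for --min-num-features and --matcher-neighbors are illustrative placeholders, not recommendations from this thread:

```shell
# Hypothetical ODM invocation with the vegetation-friendly flags above.
# Paths and numeric values are examples only; tune them for your dataset.
docker run -ti --rm -v /my/datasets:/datasets opendronemap/odm \
  --project-path /datasets myproject \
  --feature-quality ultra \
  --resize-to -1 \
  --min-num-features 16000 \
  --matcher-neighbors 16
```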

That being said, the motion blur is very problematic. Not only is it reducing the ability for features to be detected and extracted, it is compounding the impact of rolling shutter distortion, which we currently do not correct for.

We’ll see what happens with this round of settings, then I may try pre-processing a subset of your data.


I understand now about the motion blur problem. I have increased the shutter speed to the maximum and also increased the base ISO. I took a test flight near my home, and the images are noticeably sharper. I will definitely take care of these settings when I plan the next mission.

Regarding rolling-shutter distortion: is there any software through which I can correct the rolling-shutter effect, or perhaps a piece of Python code?

By the way, the matching process is still going on :sweat_smile:


I’m unaware of any stand-alone FOSS software that will perform the Rolling Shutter correction for you. Pix4D, Agisoft, and a few other commercial photogrammetry suites can correct for it, however.
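For illustration only, the simplest possible flavor of such a correction can be sketched in a few lines of Python. This assumes pure horizontal camera motion at a known constant speed and a known sensor row-readout time; both values, and the function name, are hypothetical here. A real correction would need per-image motion estimation, which is why the commercial suites handle it inside the reconstruction pipeline:

```python
import numpy as np

def naive_rolling_shutter_fix(img, speed_px_s, row_readout_s):
    """Crudely undo rolling-shutter skew by shifting each row back.

    Assumes pure horizontal camera motion at a constant, known speed
    (speed_px_s, in pixels/second) and a known sensor row-readout time
    (row_readout_s, in seconds/row). Both are estimates you would have
    to measure for your own camera.
    """
    out = np.empty_like(img)
    for row in range(img.shape[0]):
        # Later rows were exposed later, so they appear shifted further
        # along the motion direction; roll them back by the accumulated
        # apparent motion.
        shift = int(round(speed_px_s * row * row_readout_s))
        out[row] = np.roll(img[row], -shift, axis=0)
    return out
```

Note that `np.roll` wraps pixels around the edge, so a real implementation would crop or pad instead; this is just meant to show the per-row geometry of the effect.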

During a recent big job under varying lighting conditions I spent a lot of time adjusting shutter speed, aperture, and ISO to avoid motion blur, having to do mental calculations whilst walking over rocky, rough ground and keeping my eye on the drone.
So I grabbed an envelope, turned it over, and found that for a given GSD the shutter speed needs to be close to, or shorter than:

speed in m/s × 100 / GSD
speed in km/h × 28 / GSD
speed in mph × 45 / GSD

(with GSD in cm)

For example, at 40 km/h with a GSD of 2 cm, you'd want faster than 1/500.
Yes, the calculation says 1/560, but a ~10% blur isn't noticeable. 1/250 could cause problems though.
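That back-of-the-envelope rule can be wrapped in a few lines of Python. The function name and the blur-tolerance parameter are my own choices, not from the thread; the formula is the one above, expressed as an exposure time of 1/x seconds:

```python
def max_shutter_time(speed_ms, gsd_cm, blur_fraction=1.0):
    """Longest acceptable exposure (in seconds) so that motion blur
    stays within `blur_fraction` of one ground-sample distance.

    speed_ms -- ground speed in m/s
    gsd_cm   -- ground sample distance in cm/pixel
    """
    # Rule of thumb from the post: shutter speed ~ 1/x,
    # with x = speed (m/s) * 100 / GSD (cm).
    x = speed_ms * 100.0 / gsd_cm
    return blur_fraction / x

# 40 km/h at a 2 cm GSD -> about 1/556 s (the ~1/560 from the example)
t = max_shutter_time(40 / 3.6, 2.0)
print(f"1/{1 / t:.0f} s")
```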


This is super useful. I did a similar calculation a few years ago. Can you either add an issue to add a section on this or just add a section to


Got it flagged to add to docs already :sunglasses:


Strictly speaking I should have written the answer as x and the shutter speed as 1/x :wink:
Working in m/s makes it an easy mental calculation.


Hi Saijin,

So, I finally got the data processed, and with GCPs too. That's an achievement, but I got a weird DSM and DTM.

The map shows only 4 GCPs, but I entered 6.

The weird thing is that after the 4th GCP shown on the map, the elevation shoots up steeply. I used the following parameters.

Since I needed somewhat sharp and straight edges, I used the above-mentioned parameters.

I'm still confused how the elevation could be so wrong.

I checked the output folder named odm_georeferencing and looked at the content of gcp_list_utm.txt. It shows all the GCPs I provided.

But ground_control_points.gml shows only the 4 GCPs that appear on the map.

Just wondering: why did it ignore GCP-3 and GCP-4?
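To pin down which points were dropped, a quick comparison of the two files can help. This is only a sketch: it assumes the labels look like `GCP-3` and simply greps both files for them with a regex, rather than parsing the GML properly:

```python
import re
from pathlib import Path

def gcp_labels(path):
    """Collect labels like 'GCP-3' appearing anywhere in the file."""
    text = Path(path).read_text(errors="ignore")
    return set(re.findall(r"GCP-?\d+", text))

# Example usage (paths relative to the project output folder):
# provided = gcp_labels("odm_georeferencing/gcp_list_utm.txt")
# used = gcp_labels("odm_georeferencing/ground_control_points.gml")
# print("Ignored GCPs:", sorted(provided - used))
```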


Glad you were able to salvage it!

Hmm… I wonder if GCP3/GCP4 are outside of the reconstructed area?

No, they are not. I double-checked the GCP3/GCP4 locations; they are inside the area.


Is it possible their images were not in the calibrated image set, but still inside the reconstruction from other images?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.