Incomplete Orthomosaic

Dataset Link: crop_survey - Google Drive
It includes the GCP file too, but right now I am looking for a solution that gives me the full orthomosaic from the photos alone, without GCPs.
I always get an incomplete orthomosaic, or only a very small part of the full one. I have attached a photo of the last result I processed.


I believe I have tried almost every parameter I know of to get the full orthomosaic, but in vain. Here is the latest screenshot of the parameters I used.


Now I thought maybe I should fly more missions with more overlap (which frightens me, because I need to travel almost 400 km to get the data), but before that I tried my luck with DroneDeploy and was completely astonished by the results. This is what I actually wanted.


I hope we can achieve this in WebODM too. Maybe I could not achieve the desired results because of my limited knowledge of the parameters.

Please help me with this dataset.


Try removing --resize-to and using the GCP again.

Also, you can try --fast-orthophoto to see if that helps it stitch better.

What overlap/sidelap are you planning with?

80% overlap.

  1. Should I keep the same parameters mentioned, just remove the --resize-to parameter and add --fast-orthophoto, as below?
    dem-gapfill-steps: 4, dsm: true, dtm: true, fast-orthophoto: true, feature-quality: ultra, matcher-neighbors: 16, mesh-size: 300000, min-num-features: 80000, pc-classify: true, pc-filter: 1, pc-geometric: true, pc-quality: ultra, skip-3dmodel: true, use-3dmesh: true

  2. Should I use the GCP? I am asking because I tried using the GCP before, but it was failing during the reconstruction stage.

  1. Yep, keep everything, drop --resize-to, add --fast-orthophoto. Some flags do not apply with it, but you don’t need to manage that at the moment.

  2. If the above reconstructs better, I would try adding the GCP file and running again.
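
The resulting option set from the advice above (keep everything, drop --resize-to, add --fast-orthophoto) can be written out as a plain mapping. This is just a sketch: the keys mirror the ODM CLI flag names mentioned in this thread, not an official configuration format:

```python
# Option set discussed above: the original parameters, minus
# --resize-to, plus --fast-orthophoto. Keys mirror the ODM CLI flag
# names from this thread; this is a sketch, not an official format.
options = {
    "dem-gapfill-steps": 4,
    "dsm": True,
    "dtm": True,
    "fast-orthophoto": True,   # newly added
    "feature-quality": "ultra",
    "matcher-neighbors": 16,
    "mesh-size": 300000,
    "min-num-features": 80000,
    "pc-classify": True,
    "pc-filter": 1,
    "pc-geometric": True,
    "pc-quality": "ultra",
    "skip-3dmodel": True,
    "use-3dmesh": True,
    # "resize-to" deliberately omitted, per the advice above
}
```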


Can you set the minimum shutter speed higher or maybe raise the minimum ISO for your GoPro? You’re getting a lot of motion blur in these images which does not help things.


OK… I will take care of that in upcoming flights.


The process failed actually. Here’s the full log:
console (5).pdf (1.7 MB)


Really odd error! Never seen this one before, so I’m trying to learn more about it.

For now, could you retry from load dataset, but this time:
drop --use-3dmesh
add --auto-boundary

Again got a very small fraction of the desired result.


Anything else I can try?


Huh… This is really odd.

Can you:
change --crop 0
drop --fast-orthophoto
change --feature-type orb
change --matcher-neighbors 0

It has been more than 6 hours and the matching process is still going on. :open_mouth:


Yeah, we’re skipping all geometric pre-matching, but since we’re using ORB, it should still be significantly faster than it would be under other feature types.

This is a longshot, unfortunately. I just want to see if this helps in any way.


The matching process is still ongoing. Anyway, just a few questions, if you don’t mind:

  1. Why is a random, incomplete orthomosaic generated? Are different features detected every time?
  2. What parameters could tackle these types of datasets, where there is a lot of vegetation?
  3. I have seen a few posts in this community about agriculture and vegetation crops where the generated orthomosaic was incomplete. So, is WebODM unable to process these types of datasets?
  1. Feature detection and extraction is non-deterministic, to my understanding.
  2. --feature-quality ultra, --resize-to -1, --min-num-features, --matcher-neighbors.
  3. No, there is no inability to process these datasets. Oftentimes there are factors in the data collection, or the data itself, that lend it to being more or less difficult to reconstruct. If I’m not misremembering, I think we’ve gotten most folks to be pretty successful with their datasets. I hope we can do the same for you.

That being said, the motion blur is very problematic. Not only is it reducing the ability for features to be detected and extracted, it is compounding the impact of rolling shutter distortion, which we currently do not correct for.

We’ll see what happens with this round of settings, then I may try pre-processing a subset of your data.
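
Since motion blur keeps coming up: a common way to triage blurry frames before upload is the variance-of-the-Laplacian test. Below is a minimal sketch using only NumPy; the function names and the threshold are my own (tune the threshold against a few known-sharp frames from your camera), and in practice you would load each image as a grayscale array with OpenCV or Pillow first.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian response of a 2-D grayscale array.

    Sharp images have strong edges, so the Laplacian response varies a
    lot; motion-blurred images give a low variance. The 3x3 Laplacian
    kernel [[0,1,0],[1,-4,1],[0,1,0]] is applied via array shifts, so
    no SciPy or OpenCV is required.
    """
    g = np.asarray(gray, dtype=float)
    resp = (-4.0 * g[1:-1, 1:-1]
            + g[:-2, 1:-1] + g[2:, 1:-1]
            + g[1:-1, :-2] + g[1:-1, 2:])
    return float(resp.var())

def looks_blurry(gray, threshold=100.0):
    """Flag an image as blurry. The default threshold is a hypothetical
    starting point for 8-bit grayscale; calibrate it per camera."""
    return laplacian_variance(gray) < threshold
```

Running this over a dataset and dropping (or re-shooting) the worst frames only helps avoid feeding blurred images into matching; it does not correct rolling shutter distortion.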


I understand the motion blur problem now. I have increased the shutter speed to the maximum and raised the base ISO too. I took a test flight near my home, and the images are relatively sharper. I will definitely take care of these settings when I plan the next mission.

Regarding rolling shutter distortion: is there any software with which I can correct the rolling shutter effect, or any piece of Python code?

By the way, the matching process is still going on :sweat_smile:


I’m unaware of any stand-alone FOSS software that will perform the Rolling Shutter correction for you. Pix4D, Agisoft, and a few other commercial photogrammetry suites can correct for it, however.

During a recent big job under varying lighting conditions I spent a lot of time adjusting shutter speeds, aperture and ISO to avoid motion blur, having to do mental calculations whilst walking over rocky, rough ground and keeping my eye on the drone.
So I’ve grabbed an envelope, turned it over, and found that for a given GSD (in cm) the general equation is that the shutter speed needs to be close to, or shorter than:
speed in m/s × 100 / GSD
speed in km/h × 28 / GSD
speed in mph × 45 / GSD

For example, at 40 km/h with a GSD of 2 cm, you’d want faster than 1/500.
Yes, the calculation says 1/560, but a ~10% blur isn’t noticeable. 1/250 could cause problems, though.
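
The back-of-the-envelope rule above can be checked with a tiny helper. The function name is hypothetical, and the factor of 100 assumes GSD in centimetres and an allowed blur of roughly one GSD per exposure:

```python
def min_shutter_denominator(speed_m_s, gsd_cm):
    """Return x such that the shutter speed should be 1/x or faster.

    Rule of thumb from the post above: x = speed [m/s] * 100 / GSD [cm],
    i.e. the platform should move no more than ~1 GSD per exposure.
    """
    return speed_m_s * 100.0 / gsd_cm

# 40 km/h at a 2 cm GSD:
x = min_shutter_denominator(40 / 3.6, 2)   # ~556, so use ~1/560 or faster
```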


This is super useful. I did a similar calculation a few years ago. Can you either add an issue to add a section on this or just add a section to


Got it flagged to add to docs already :sunglasses:


Strictly speaking I should have written the answer as x and the shutter speed as 1/x :wink:
Working in m/s makes it an easy mental calculation.