Differences in Point Cloud and Other Results

I have processed one dataset and attached all the processing details, report, and snapshots as much as I can. I am just wondering how DroneDeploy manages to produce such sharp and smooth orthomosaics even when there is less image overlap.

I am sure WebODM would have done a fantastic job had there been a better overlap for this particular dataset.

I looked at the point cloud and saw some major differences. Is that the root cause?

I went through some of the community posts and found that setting pc-filter to 0 sometimes helps, though it is not recommended. After setting pc-filter to 0, I got a better result and point cloud, but it still cannot match DroneDeploy's clarity and smoothness.

Just wondering what the root cause is and how we can improve our WebODM results. I have tried most of the parameters. The only thing left is to set mesh-octree-depth to 14, but I don't think it would solve the problem.
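For reference, here is a rough sketch of what I understand pc-filter to do conceptually (statistical outlier removal; this is my own illustration, not ODM's actual code):

```python
import math
import statistics

def pc_filter(points, stddev_mult, k=2):
    """Conceptual sketch of ODM-style statistical outlier removal:
    drop points whose mean distance to their k nearest neighbours
    exceeds the global mean by more than stddev_mult standard
    deviations. A multiplier of 0 disables filtering entirely."""
    if stddev_mult <= 0:
        return list(points)  # pc-filter: 0 keeps every point

    # Mean distance from each point to its k nearest neighbours
    mean_knn = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(dists[:k]) / k)

    mu = statistics.mean(mean_knn)
    sigma = statistics.pstdev(mean_knn)
    keep = mu + stddev_mult * sigma
    return [p for p, d in zip(points, mean_knn) if d <= keep]

# A 20-point grid of "good" points plus one far-away stray point:
cloud = [(x, y, 0) for x in range(4) for y in range(5)] + [(50, 50, 50)]
print(len(pc_filter(cloud, 2.5)))  # stray point removed -> 20
print(len(pc_filter(cloud, 0)))    # filtering disabled  -> 21
```

So disabling the filter keeps noise along with real detail, which may explain why pc-filter: 0 looks denser but not necessarily cleaner.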

dronedeploy.pdf (2.3 MB)

Dronedeploy Orthomosaic Building Result

Webodm Orthomosaic Building Result

Dronedeploy Point Cloud

Webodm Point Cloud


This is a difficult question for me, since you have maxed out the settings (pc-quality: ultra, feature-quality: ultra) and still did not get your expected result (a sparser point cloud than DroneDeploy).

But I think the ODM orthophoto texture was better than DroneDeploy's.

My suggestion: try changing the ODM version to 2.6.7. Another user found that ODM 2.6.7 produces a denser point cloud than the newer one (ODM 2.7.2). But this needs more testing to be proven; I am also checking it.

EDIT: HERE is another user who reached the same hypothesis (the older ODM 2.6.7 produces a denser point cloud than the newer one).


Can you share the raw images somewhere (google drive, DroneDB, or elsewhere)? I can take a look and try it in different ODM versions. That’s a substantial difference that I wouldn’t expect to see.


Try increasing pc-filter!




I disabled filtering by setting pc-filter to 0. From my understanding of the parameter, I should get an unfiltered point cloud. So would increasing this parameter help in this case?


I have a few observations regarding the comparison between WebODM and DroneDeploy:

  1. Wherever there are holes in the point cloud, DroneDeploy handles them better. Also, the orthomosaic at those places does not get distorted.

  2. We get crooked lines, but they somehow manage to show them straight.

  3. DroneDeploy smooths the orthomosaic.

  4. I feel our results are better when we overfly or the overlap percentage is higher.

  5. Last but not least, this generally happens when processing an urban area or an area containing buildings.

Am I correct about these points? And if so, is there any way to solve this? I have tried the maximum settings possible, but I have never fully matched what DroneDeploy produces.


Sorry, I have never used DroneDeploy.

Other users are also facing holes in the 3D mesh where there are none with DroneDeploy on the exact same dataset. HERE is the thread.

Another similar problem, with some improvement, HERE.


I am talking about the same thing. But they could not conclude what needs to be done to sort it out.


For now, maybe that is the best result that ODM can achieve (by the way, you have an amazing computer that can max out the ODM settings).

And yes, please support ODM, so it will get better and better.


texturing-data-term: area helped here.
So should I try this parameter? Also, can we generalize that whenever we process a dataset containing buildings, rooftops, etc., we should use texturing-data-term: area?


Normally I’d recommend it when you’re trying to suppress ghosting in images from things like moving cars on a roadway. It works by trying to choose the pixels that change the least over the largest area of coverage.

I think the default of GMI should suit most datasets best, and I’d be hesitant to recommend area as a first-option.
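To make the difference concrete, here is a toy sketch of how the two data terms pick the "best" view for a mesh face (illustrative only, with made-up scores; this is not the real mvs-texturing code): area favours the view where the face covers the most pixels, while GMI favours the view with the strongest image gradients.

```python
# Toy view-selection sketch: each candidate view carries hypothetical
# per-face scores; the data term decides which score wins.

def best_view(candidates, data_term):
    """Pick the winning view for one face under the given data term."""
    if data_term == "area":
        key = lambda v: v["projected_area"]      # biggest footprint wins
    elif data_term == "gmi":
        key = lambda v: v["gradient_magnitude"]  # sharpest detail wins
    else:
        raise ValueError(f"unknown data term: {data_term}")
    return max(candidates, key=key)

# Hypothetical scores for one rooftop face seen from three images:
views = [
    {"name": "IMG_01", "projected_area": 900, "gradient_magnitude": 40.0},
    {"name": "IMG_02", "projected_area": 400, "gradient_magnitude": 95.0},
    {"name": "IMG_03", "projected_area": 650, "gradient_magnitude": 60.0},
]
print(best_view(views, "area")["name"])  # IMG_01
print(best_view(views, "gmi")["name"])   # IMG_02
```

Notice the two terms can disagree on the same face, which is why area can suppress ghosting (it prefers stable, large footprints) while GMI tends to keep the crispest texture.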


Have you tried processing with the latest (2.8.0) version? I’ve recently pushed some improvements in this area. https://github.com/OpenDroneMap/ODM/pull/1418


A bit disappointed to hear that. I always felt that with some parameter tweaks, we would surpass other paid software's results.

Is there a way I can contribute apart from monetary help?
I am already a BIG fan of FOSS. WebODM has really helped me in my career, and I have always wanted to contribute. I have decent knowledge of C++, Python, and web development. Maybe I need some more knowledge in the photogrammetry domain.




I have updated it now and am running it with the following parameters:

dem-resolution: 1.0, dsm: true, dtm: true, feature-quality: ultra, matcher-neighbors: 24, mesh-octree-depth: 12, mesh-size: 5000000, min-num-features: 64000, orthophoto-resolution: 1.0, pc-geometric: true, pc-quality: ultra, resize-to: -1


In addition to what we list on the page that Piero linked, providing datasets and examples is super helpful, so this thread is beneficial and a help to the project.

Building Edges

Let’s break this down a bit; you have a few tightly bundled questions that require some thought. First: orthophotos and crisp building edges. This is a problem space that Piero has been working on. As Piero indicates above, run an update to get the changes he recently pushed. This is probably the trickiest thing yet to fix in the ODM pipeline, and I suspect that isn’t the final patch (though Piero would be able to say more).

Point Cloud Density

Then there is point cloud density. Thanks to recent changes requested by the community, it is now possible to turn off the aggressive filtering we do in the point cloud. There are use cases for doing so, although I am not sure if your dataset is one of them.

Rolling Shutter

Finally, we have the elephant in the room, as it were: rolling shutter. GoPros take rolling shutter problems and amplify them, and unlike DD, we don’t have a correction for it. GoPros basically took the problem of rolling shutter that most of the camera industry was carefully trying to engineer away, and built cameras with bad rolling shutter so they could max out other specifications (megapixels, form factor, etc.) while keeping manufacturing costs down.

And so, since DD has good rolling shutter correction, DD gets a well-behaved camera model:

By contrast, we get something very noisy and stretched along the scan path of the rolling shutter with OpenDroneMap:


This makes getting good feature matches over complicated structures like tall vegetation well-nigh impossible. Add to that the fact that this is at the edge of the data collection zone, reducing the number of overlaps in this area, and we have a recipe for a poorly reconstructed scene.
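To illustrate the mechanism (with made-up numbers, not actual GoPro specs), each image row is read out at a slightly later time, so a camera moving at constant speed shifts lower rows further along the flight path than upper ones:

```python
# Rough sketch of the rolling-shutter effect. The readout time and
# speed below are illustrative assumptions, not measured values.

IMAGE_HEIGHT = 3000    # rows in the frame
READOUT_TIME = 0.030   # seconds to scan the full frame (assumed)
SPEED = 10.0           # camera ground speed in m/s (assumed)

def row_shift(row):
    """Ground displacement (m) accumulated by the time `row` is read."""
    t = (row / IMAGE_HEIGHT) * READOUT_TIME
    return SPEED * t

top, bottom = row_shift(0), row_shift(IMAGE_HEIGHT)
print(f"top row shift:    {top:.3f} m")     # 0.000 m
print(f"bottom row shift: {bottom:.3f} m")  # 0.300 m
```

That skew between the top and bottom rows is what a rolling-shutter correction models and removes; without it, features sit in slightly wrong places and the camera model absorbs the error as noise.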

We are having some active discussions about how to get rolling shutter correction into OpenDroneMap. Consider sharing widely if/when we have a campaign to fund that work. Thanks again for sharing this dataset. Do you mind if we use it as one of the test datasets as we make improvements?


I have tested your dataset. I got the same point cloud as you (sparse in the forest area). I also noticed that you used only 3 GCPs.

My suggestion is to increase the number of GCPs, especially around the forest area.

From this report, my hypothesis is that areas near GCPs have more overlap than areas far from them.

It is said that a minimum of 5 GCPs is recommended, and 8-10 GCPs for larger projects.


Just to add to this, I was able to get somewhat better results with some tweaks:

Camera model is still poor, as rolling shutter is still the fundamental challenge:

Options: auto-boundary: true, dsm: true, feature-quality: ultra, matcher-neighbors: 40, mesh-octree-depth: 12, mesh-size: 300000, min-num-features: 64000, orthophoto-resolution: 1, pc-filter: 0, pc-quality: ultra, resize-to: -1

IMO, the only way to improve this further at this time would be to fix rolling shutter.


Thanks for all the information. Really appreciate it.

Yes you can surely use this dataset.