Differences in Point Cloud and Other Results

I ran the dataset with the following parameters:

I got the following results:


Point cloud

I noticed that the number of downloadables has been reduced.


After update:

Roughly the same settings, except this time I used the split parameter too.


Would definitely make sure next time.


I can see you have increased matcher-neighbors to 40. Is there something else to be done, like preprocessing of the images, etc.?


I don’t think there is anything left to do until we have rolling shutter correction. It really messes with matches when there isn’t enough overlap to compensate, and it appears to be the root of the matching challenges here. Perhaps you can try different match types, but that’s something Saijin has spent more time experimenting with than me.


I understand the rolling shutter effect. Just out of curiosity, I wanted to ask: how do they correct the rolling shutter effect? I mean, do they pre-process the images using some rolling shutter correction tool and then process them?

I was searching on the internet and saw some papers claiming to correct the rolling shutter effect using artificial intelligence.

Do you have suggestions for any software (be it open-source or paid) that corrects the rolling shutter effect?

If this is the root cause, then I would definitely want to get my hands dirty solving this problem.


This is a good review of the existing process for Pix4D and MicMac.


Hello, I’m back

As Smathermather said, it looks like (probably) a rolling shutter problem with your GoPro images.

I tested your dataset with Reality Capture.

It gives the same bad result in the forest as ODM. ODM tends to remove mesh faces which can’t be interpolated by any texture; with Reality Capture, the faces were instead blurry in the forest area.

It also has broken roof reconstruction.


Thanks for trying.


Thank you for the reference. I’ll definitely look into it. I guess rolling shutter is a very old problem; most of the cameras available have rolling shutters, including DJIs.

I can see that MicMac is FOSS. I wonder why no one has looked into it. In my free time I searched the internet hoping to find already-baked code, but did not find any. Is it something hard to implement? By the way, this statement is not intended to disrespect the developers of WebODM; IMO, they have already done a fabulous job.

Also, if you know of any existing software (be it paid or FOSS) that corrects the rolling shutter effect in images, please suggest it.


Agisoft and Pix4DMapper correct for Rolling Shutter Distortion.

I’m not sure stand-alone software for the correction is viable. I believe it needs to be pretty tightly integrated with the rest of the photogrammetric pipeline to work effectively.


If you don’t mind, I have one more question: is this rolling shutter correction a kind of pre-processing of the images?

Is it something like this: first the pixels of each image are corrected, and then the normal photogrammetric workflow is followed?


We do have a (now dormant) project called NodeMicmac for integrating Micmac into WebODM. No one has had the time to maintain it, however.

Btw: most of the closed source providers started with Micmac and forked it. With no license that requires it, unfortunately much of the great work in those proprietary solutions doesn’t make its way back into the project.

Our aim with OpenDroneMap is to be easy to use, easy to contribute to, and great software with a good community. There’s always more work to do; rolling shutter correction is an important feature to implement, and the nature and severity of rolling shutter varies widely by camera.


I have no practical knowledge of the implementation details at this juncture, so I can’t comment, unfortunately.

From what I’ve read, that is sort of what’s happening, but it seems to happen less on the pixel side of the input images and more by adjusting the vectors of features, etc.
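For intuition, here is a hedged sketch of the feature-side idea (a toy model of my own, not OpenSfM’s actual implementation, with hypothetical parameter values): with a rolling shutter, each image row has its own exposure time, so a feature observation can be shifted according to the estimated camera motion instead of warping pixels.

```python
import numpy as np

def row_timestamp(row, height, t_frame, readout_time):
    # With a rolling shutter, row r of an h-row image is exposed at roughly
    # t_frame + (r / h) * readout_time rather than all at once.
    return t_frame + (row / height) * readout_time

def correct_feature(xy, height, readout_time, velocity_px_per_s):
    # Shift an observed feature back to where it would have been at the
    # start of the frame, assuming roughly constant image-space motion.
    x, y = xy
    dt = (y / height) * readout_time
    return np.array([x, y]) - dt * np.asarray(velocity_px_per_s)

# Example: 10 ms readout on a 3000-row image, camera panning at 200 px/s.
corrected = correct_feature((320.0, 1500.0), height=3000,
                            readout_time=0.010,
                            velocity_px_per_s=(200.0, 0.0))
```

In a real pipeline the per-feature motion would come from the SfM solution itself, which is why this is hard to do as a stand-alone pre-processing step.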


Chiming in:

I’d say that’s the exact opposite of what’s happening here: whenever a residual shows “structure”, it means that a systematic error remains, as opposed to a random grid of errors, which shows that only random noise from the measurement process is left (which is what most parameter estimations assume, i.e. measurements with Gaussian noise).

A systematic error means that the camera model doesn’t match the reality of the actual device, whether due to complex deformation or rolling shutter.

Here, I’d say there is a mismodeled distortion in the DD case. Still, the residuals seem OK-ish.

Still, the result of the SfM doesn’t say much about the dense point cloud computation. Holes can appear for various reasons; here, my gut says it is the repetitive patterns of the roof introducing outliers in individual depthmaps, which then fail consistency checks when the depthmaps are fused. Hard to tell without debugging.
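To illustrate the structured-vs-random distinction, here is a minimal numpy sketch on synthetic data (not taken from any real report): averaging the residual vectors inside each grid cell cancels pure noise, but a systematic radial field survives the averaging.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
uv = rng.uniform(-1, 1, size=(n, 2))              # normalized image coords

# Purely random residuals: zero-mean noise, i.e. a well-modeled camera.
random_res = rng.normal(0.0, 0.5, size=(n, 2))

# "Structured" residuals: add a radial term, as unmodeled distortion would.
r2 = (uv ** 2).sum(axis=1, keepdims=True)
structured_res = 2.0 * r2 * uv + rng.normal(0.0, 0.5, size=(n, 2))

def mean_cell_magnitude(uv, res, bins=8):
    # Average the residual *vectors* inside each grid cell, then take the
    # magnitude: random noise cancels out, a systematic field does not.
    ix = np.clip(((uv + 1.0) / 2.0 * bins).astype(int), 0, bins - 1)
    mags = []
    for i in range(bins):
        for j in range(bins):
            sel = (ix[:, 0] == i) & (ix[:, 1] == j)
            if sel.any():
                mags.append(np.linalg.norm(res[sel].mean(axis=0)))
    return float(np.mean(mags))

noise_level = mean_cell_magnitude(uv, random_res)
signal_level = mean_cell_magnitude(uv, structured_res)
```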


Rolling shutter for sure isn’t the only issue, just an exacerbating one on a dataset with low overlap.

Fully agreed on roofs. This seems to be a smooth-texture/repeating-texture issue. I was able to get better results with ODM on the roofs, but they are symptomatic of the smooth, repeating metal roofs I’ve seen elsewhere:

But rolling shutter is a challenge in this model. That’s not a good model. This is what I’ve seen when a rolling shutter camera’s effects have been well compensated for in OpenSfM (this was an orbit plus a cross hatch flown to produce the camera model and work around rolling shutter), and it is considerably less noisy:

All that to say that I believe the difference between the results from DD is a combination of factors: low overlap, rolling shutter, plus one we haven’t discussed. In the end, having run this through Agisoft, which doesn’t turn a textured mesh into a point cloud the way Pix4D and DD do, we see that Agisoft has holes in the same places:

DD’s extra points are likely a result of a suspected colorized mesh-to-point cloud workflow, which solidly places the differences between the point cloud from DD and that from ODM as a question of verisimilitude not veracity, or perhaps more precisely: completeness over accuracy.


We can create something quite similar in MeshLab or CloudCompare with relative ease:

I could spend some time writing up a tutorial, but there are great videos that serve as a good starting point:

We should consider adding this as a feature. We get this question now and again with Pix4D, and I’m not surprised to see something similar with DD. In the meantime though, you may have to roll your own.
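For anyone who does want to roll their own, here is a minimal numpy sketch of the core mesh-to-colored-point-cloud step (area-weighted triangle sampling with barycentric color interpolation; the function name and interface are my own invention, not MeshLab’s or CloudCompare’s API):

```python
import numpy as np

def mesh_to_colored_points(vertices, faces, colors, n_points, seed=0):
    """Sample a colored point cloud from a triangle mesh.

    vertices: (V, 3) float, faces: (F, 3) int, colors: (V, 3) per-vertex RGB.
    Triangles are picked with probability proportional to their area, then a
    uniform barycentric sample interpolates both position and color.
    """
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                              # (F, 3, 3)
    # Triangle areas from the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    pick = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates via the square-root trick.
    r1, r2 = rng.random(n_points), rng.random(n_points)
    s = np.sqrt(r1)
    w = np.stack([1 - s, s * (1 - r2), s * r2], axis=1)    # (N, 3)
    pts = np.einsum('nk,nkd->nd', w, tri[pick])
    cols = np.einsum('nk,nkd->nd', w, colors[faces[pick]])
    return pts, cols

# Tiny example: one right triangle with red/green/blue corners.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
F = np.array([[0, 1, 2]])
C = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
pts, cols = mesh_to_colored_points(V, F, C, n_points=1000)
```

MeshLab and CloudCompare expose essentially this operation through their sampling tools, with many more options (Poisson-disk sampling, texture lookup instead of vertex colors, etc.).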

Or, (to anyone interested) roll it for all of us! Pull requests are always welcome.


Correcting myself:

  • The DD picture was showing the distortion map, not the residual map (which is what the ODM/OpenSfM report shows).
  • The distortion map shows the magnitude of the distortion induced by the camera model, so it has “structure” by definition.
  • The residual map (I don’t know if the DD report shows it) should be noisy (no structure) if the camera model is well specified.
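As a concrete illustration of why a distortion map is structured by definition, here is a minimal sketch (a simple Brown radial model with hypothetical k1/k2 values) that evaluates the distortion displacement magnitude on a grid of normalized coordinates: it is zero at the center and grows with radius.

```python
import numpy as np

def distortion_magnitude(k1, k2, grid=11):
    # Magnitude of the radial (Brown) distortion displacement on a grid of
    # normalized image coordinates: x_d = x * (1 + k1*r^2 + k2*r^4).
    u = np.linspace(-1, 1, grid)
    x, y = np.meshgrid(u, u)
    r2 = x ** 2 + y ** 2
    scale = k1 * r2 + k2 * r2 ** 2        # displacement factor relative to x
    return np.hypot(x * scale, y * scale)

# Hypothetical coefficients, roughly in the range of a wide-angle lens.
mag = distortion_magnitude(k1=-0.1, k2=0.01)
```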

Regarding the OpenSfM residual grid you’re showing: it has some radial pattern, but looking closely it is actually more anisotropic. Hard to tell what is causing it, but for sure there’s an effect that isn’t accounted for (rolling shutter?).

Regarding the noise, the process modern approaches use for computing depthmaps is the following (whether SGM- or PatchMatch-based):

  • Select per-image neighbors (black magic: number of neighbors and min/max angle thresholds)
  • Initialize depthmaps (black magic: random init, SfM init, dense SfM init with interpolation, or a multi-scale scheme)
  • Compute per-image depthmaps (black magic: cost function, neighbor aggregation)
  • Remove points that are inconsistent between an image and its neighbors (black magic: fill holes with interpolation/inpainting, possibly re-iterate over the depthmaps)
  • Fuse depthmaps (black magic: union-find-like grouping of intersecting points, fitting local planes)
  • Meshing (black magic: plane priors, various mesh healing heuristics)
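As a toy illustration of the consistency-check step above, here is a deliberately simplified sketch (a rectified-stereo setup of my own construction, not how any real MVS implementation does it): a pixel of depthmap A is kept only if depthmap B stores an agreeing depth at the reprojected location.

```python
import numpy as np

def consistency_mask(depth_a, depth_b, fx, cx, baseline, tau=0.01):
    """Keep a pixel of depthmap A only if depthmap B agrees.

    Hypothetical rectified setup: camera B equals camera A translated by
    `baseline` along +x, so only the u coordinate shifts on reprojection.
    """
    h, w = depth_a.shape
    v, u = np.mgrid[0:h, 0:w]
    # Back-project (u, depth) to camera-A x, move into B's frame, reproject.
    x = (u - cx) / fx * depth_a
    u_b = np.round(fx * (x - baseline) / depth_a + cx).astype(int)
    inside = (u_b >= 0) & (u_b < w)
    d_b = np.where(inside, depth_b[v, np.clip(u_b, 0, w - 1)], np.inf)
    # Relative depth agreement within tolerance tau.
    return inside & (np.abs(d_b - depth_a) / depth_a < tau)

# Fronto-parallel plane at depth 10: the maps agree wherever B sees the pixel.
h, w = 4, 16
depth_a = np.full((h, w), 10.0)
depth_b = np.full((h, w), 10.0)
mask = consistency_mask(depth_a, depth_b, fx=20.0, cx=w / 2, baseline=1.0)
```

Real implementations repeat this across many neighbors, demand agreement from several of them, and also check normal and color consistency before a point survives into fusion.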

So many things could possibly go wrong and differ between implementations. I do agree with you that meshing might be what DD does very well, actually filling the gaps. I still wonder if there are some magic options in OpenMVS that could fix the issue.

Also, I can’t access the GDrive folder with the dataset; can someone share it?



Ahh, apples and oranges. I didn’t look at these closely enough.

Yup. Reposted here: NH38 - Google Drive


:flushed: Bookmarking this…


Forgot to mention the last one:

  • Mesh refinement, which is a hell of black magic & maths, and also the best way to get amazing meshes. OpenMVS implements a vanilla, naive version. Bentley CC implements the state of the art in mesh refinement, which explains why it has no rivals when it comes to mesh quality.