Output Comparison

Hi,
I am planning to write a series of blog posts comparing the outputs of different software against WebODM. I am thinking of doing quantitative and qualitative analysis for the orthophoto, point cloud, DSM, DTM, and 3D model. Any suggestions on what methodology I should follow?
To start, I plan to get the datasets from https://cloud.pix4d.com/demo, run them in WebODM, and compare the outputs.

1 Like

Well, there are advantages to both methods, right?

If your target audience for the comparison is an analyst, you’d want more quantitative analyses, whereas a consumer would benefit most from qualitative ones.

Do you know if those datasets have RTK/PPK GCPs or other truth data? A quantitative analysis could look at XYZ distortion at those points, for instance.

Qualitative would be mostly visual analysis, you know? How does the ortho look (color, sharpness, blending, pixel choice [nadir vs oblique during reconstruction], etc.)? The same goes for the DSM and point clouds.

Yes, the datasets have GCPs. Would calculating RMS error at the GCPs be a good comparison? What other quantities can I look for?
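For reference, this is roughly the computation I have in mind, a minimal sketch in Python (the coordinates are placeholders; the real values would come from the GCP file and each tool’s quality report):

```python
import math

# Placeholder values: truth comes from the surveyed GCP file, computed from each
# tool's quality report, all in the same projected CRS (metres).
truth    = {"gcp1": (1000.00, 2000.00, 100.00)}
computed = {"gcp1": (1000.05, 1999.98, 100.20)}

def rmse(truth, computed):
    """Horizontal, vertical and 3D RMSE over the GCPs present in both dicts."""
    dxy2 = dz2 = 0.0
    n = 0
    for name, (x, y, z) in truth.items():
        if name not in computed:
            continue
        cx, cy, cz = computed[name]
        dxy2 += (cx - x) ** 2 + (cy - y) ** 2
        dz2  += (cz - z) ** 2
        n += 1
    return math.sqrt(dxy2 / n), math.sqrt(dz2 / n), math.sqrt((dxy2 + dz2) / n)

print("RMSE (horizontal, vertical, 3D):", rmse(truth, computed))
```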

This is a screenshot from the quality report of a dataset.
[screenshot of the quality report]

For qualitative analysis, I am thinking about having an image slider comparison as on UAV arena. A side-by-side comparison can be shown for the ortho, DSM, DTM as well as screenshots of some views of point clouds and 3D mesh. Are there any other ways to show a comparison?

By the way, what do you mean by pixel choice [nadir vs oblique during reconstruction]?

1 Like

You could possibly compare coloration values at given reference points to test the image blending and balancing algorithms, though without a calibration target it is hard to quantitatively compare…

Slider of the products seems best, IMO. Maybe also compare generation time, file export size/resolutions, etc?

Oh, if you look at a generated orthomosaic, you can sometimes see that certain parts of the image are reconstructed from pixels closer to nadir vs pixels from images that were more oblique. Hard to explain, but usually easiest to spot in vegetation.

1 Like

So I started analyzing this dataset https://cloud.pix4d.com/site/9982/dataset/36449/map?shareToken=7f1b19102d5742f093515bc8e68ac96f

Options: use-3dmesh: true, dtm: true, mesh-size: 300000, mesh-octree-depth: 11, dsm: true, depthmap-resolution: 1000
Resize: None
Time Taken: 3:55:08
Machine Spec: EC2 m5.large, 8 GB RAM, 4 vCPU, 3.1 GHz Intel Xeon Platinum 8175M

These are the outputs I got:
Ortho:
[orthophoto screenshots]

Point Cloud:
[point cloud screenshots]

Control Point: (313298.170, 5155168.820, 391.440):
[screenshot]

3D Model
[3D model screenshots]

Outputs in Pix4D
These are the pre-generated outputs available on the Pix4D Demo

Ortho
[orthophoto screenshots]

Point Cloud
[point cloud screenshots]

Control Point: (313298.170, 5155168.820, 391.440):
Computed Position: 313298.03, 5155168.86, 391.89
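As a quick sanity check on that computed position, the per-axis error works out to roughly 0.15 m horizontal and 0.45 m vertical:

```python
import math

truth    = (313298.170, 5155168.820, 391.440)  # surveyed control point
computed = (313298.030, 5155168.860, 391.890)  # Pix4D computed position

dx, dy, dz = (c - t for c, t in zip(computed, truth))
print("dX, dY, dZ:", round(dx, 3), round(dy, 3), round(dz, 3))  # -0.14, 0.04, 0.45
print("horizontal:", round(math.hypot(dx, dy), 3), "m")         # ~0.146 m
print("3D:", round(math.sqrt(dx**2 + dy**2 + dz**2), 3), "m")   # ~0.473 m
```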

3D Model
[3D model screenshots]

Is it possible to improve the WebODM outputs? Let me know if you need download access to them.

Also, how do I compare coloration values? Would simply reading the RGB values at particular points and comparing them with the input images work?

2 Likes

That could work, but again, multiple input images are used per output pixel… Ideally we’d have a calibration target so we have known reference values, but I think just comparing RGB at various known locations in both outputs will be sufficient.
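A minimal sketch of how that sampling might look, assuming both orthophotos are georeferenced GeoTIFFs in the same CRS and using the rasterio library (paths and points are placeholders):

```python
import rasterio

# Points of interest in the orthos' shared projected CRS (metres); placeholders.
points = [(313298.17, 5155168.82), (313350.00, 5155200.00)]

def sample_rgb(ortho_path, points):
    """Return the (R, G, B) values of the first three bands at each point."""
    with rasterio.open(ortho_path) as src:
        return [tuple(int(v) for v in vals[:3]) for vals in src.sample(points)]

for p, a, b in zip(points,
                   sample_rgb("webodm_orthophoto.tif", points),
                   sample_rgb("pix4d_orthophoto.tif", points)):
    print(p, "WebODM:", a, "Pix4D:", b)
```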

2 Likes

So the first output comparison post is here!

Thanks for all the help @Saijin_naib

5 Likes

Thanks for doing all the hard work to get this accomplished!

Nice, quick read :slightly_smiling_face:

I think for the future you should make it clear what version(s) of both software packages are being used: first, so people can check whether any major fixes or features have been added since the comparison was done, and second, so that the most current versions are being compared to one another to keep it fair.

Along those lines, your findings are quite interesting! I think it’d be interesting to see if any of the changes that went into ODM as of today might help (alignment fix being one).

2 Likes

Updated the blog post with version details (which is unfortunately the older 1.0.2). Will do a re-run with the latest and greatest 2.1.0 and update soon!

3 Likes

That is going to be an exciting comparison!

Thanks so much for your work!

2 Likes

Woah! Yeah, that should be interesting.

1 Like

There is an option in Pix4D to specify the accuracy of the input images.
[screenshot]

Apparently it can make a big difference. In a dataset I have been playing around with, these were the results.
Default settings: Horizontal accuracy: 5m and vertical accuracy: 10m
[screenshot]

Horizontal accuracy: 0.1m and vertical accuracy: 0.2m
[screenshot]

These are the results I got with WebODM (ODM 2.2.0)
[screenshot]
You can see that these are closer to the outputs from Pix4D with default settings. Is there any way to set the input image accuracy in WebODM?

1 Like

Found this parameter in WebODM
[screenshot]

Very promising results vis-a-vis Pix4D
[screenshot]
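In case it helps anyone following along, the same option can also be passed when creating a task programmatically, e.g. with PyODM. This is just a sketch: the node address, file list, and option values are placeholders, and I’m assuming the parameter is the one exposed as gps-accuracy (in metres):

```python
from pyodm import Node

# Connect to a running NodeODM/WebODM processing node (address and port are placeholders).
node = Node("localhost", 3000)

# gps-accuracy is in metres; 0.1 roughly matches the 0.1 m / 0.2 m Pix4D setting above.
task = node.create_task(
    ["images/IMG_0001.JPG", "images/IMG_0002.JPG"],  # placeholder file list
    {"dsm": True, "dtm": True, "gps-accuracy": 0.1},
)
task.wait_for_completion()
```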

2 Likes

A good test will be to see what happens when you set that very low for data that have very bad accuracy.

Good idea! Any dataset you can point me to?

1 Like

Yes! I meant to reply to this topic and must have forgotten. Also: check ground control point number 8. Much of the Z error in OpenDroneMap seems to be at a single location. There could be something specific happening at that point.

1 Like

Not at the moment. If I still had access to the drone data from before I took over at my prior company, you’d have as much horrible data as you could want :rofl:

They were using the geotags from an onboard hacked Canon PowerShot SX120 IS, which was mounted right next to the PDB/FCB and two 60 A ESCs, so the accuracy was LOLBAD

2 Likes

Just degrade the quality of the GPS info in the EXIF tags and voilà! Simulated poor-quality GPS.

The effect of over-constraining the model is that you get a lot of isolated portions of the overall model: the reconstruction gets broken and things go pretty haywire. I’d post an example, but I never saved any.
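If you want to automate the degrading, something along these lines should work. It’s only a sketch using the piexif library, the noise level is arbitrary, and it only perturbs latitude/longitude (EXIF stores them as positive rationals with separate N/S and E/W refs, so small noise keeps them valid):

```python
import random
import piexif

def to_deg(rationals):
    # EXIF GPS coordinates are ((deg_num, deg_den), (min_num, min_den), (sec_num, sec_den))
    d, m, s = (num / den for num, den in rationals)
    return d + m / 60 + s / 3600

def to_rationals(deg):
    d = int(deg)
    m = int((deg - d) * 60)
    s = (deg - d - m / 60) * 3600
    return ((d, 1), (m, 1), (int(round(s * 100)), 100))

def degrade_gps(path, sigma_deg=0.0001):
    """Add Gaussian noise (roughly 10 m at 0.0001 deg) to a JPEG's EXIF GPS position, in place."""
    exif = piexif.load(path)
    gps = exif.get("GPS", {})
    for tag in (piexif.GPSIFD.GPSLatitude, piexif.GPSIFD.GPSLongitude):
        if tag in gps:
            gps[tag] = to_rationals(to_deg(gps[tag]) + random.gauss(0, sigma_deg))
    piexif.insert(piexif.dump(exif), path)
```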

2 Likes

Was just comparing a dataset between 1.0.2 and 2.2.1.
1.0.2:
[screenshot]

2.2.1:
[screenshot]

mind blown! :exploding_head:

3 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.