Minor issues with current generation of quality report

I stared at the quality report this afternoon for the first time in a few weeks and wanted to share my thoughts.

I acknowledge that big parts of the report come from OpenSfM and have checked their docs to get a better understanding.

The “Processing Time” value in the “Dataset Summary” section should use integers for the hours and minutes, and for the seconds too if the precision is at the one-second level. OpenSfM reports the total number of seconds as a float, which is fine if that’s the precision it has.
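For illustration, a minimal sketch of the kind of formatting I mean (the function name is mine, not from the ODM codebase), taking the float seconds OpenSfM reports and rendering integer components:

```python
def format_processing_time(total_seconds: float) -> str:
    """Format a float seconds count as integer hours, minutes, and seconds."""
    total = int(round(total_seconds))
    hours, remainder = divmod(total, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours}h {minutes}m {seconds}s"

# e.g. format_processing_time(3723.4) -> "1h 2m 3s"
```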

The image in the “Processing Summary” section is north-down, unlike every other image. I can’t tell from OpenSfM whether that’s intentional, and I haven’t checked whether there’s something peculiar about my data collection, but it should probably be north-up like all the rest.

The reprojection error in the “Reconstruction Details” section has no units, and I’m not clever enough to guess what they should be. In OpenSfM it’s pixels, but I’m not sure whether that applies here.

And the most important one:

The quality report should include the options used to generate the output. This is super helpful for differentiating between two runs made with different settings. I’d prefer the CLI format (`--auto-boundary ...`) over the WebODM format (`auto-boundary: true ...`), but I’m flexible. :slight_smile:
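To make the idea concrete, here is a rough sketch (names and option-handling are my assumptions, not ODM’s actual internals) of rendering a settings mapping in the CLI format I described, where true booleans become bare flags and false ones are omitted:

```python
def options_to_cli(options: dict) -> str:
    """Render a settings mapping as CLI-style flags.

    Booleans appear as bare flags when true and are dropped when false;
    everything else is emitted as '--flag value'.
    """
    parts = []
    for name, value in sorted(options.items()):
        flag = "--" + name.replace("_", "-")
        if isinstance(value, bool):
            if value:
                parts.append(flag)
        else:
            parts.append(f"{flag} {value}")
    return " ".join(parts)

# e.g. options_to_cli({"auto_boundary": True, "mesh_size": 300000})
# -> "--auto-boundary --mesh-size 300000"
```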


Agreed on all points.

Have you taken a poke at the python for generating the report? Are you able to help us out with some of these?


I haven’t yet looked at the code, was hoping to get some idea of how popular my suggestions were and whether folks had any alternatives to propose. Fingers crossed that others chime in with observations or recommendations!


Those aren’t the only problems!
The image capture times are also incorrect, as mentioned over a year ago here:


I am not a programmer. Could ChatGPT help with writing or deciphering the Python code?



From some analysis I’ve seen, it often produces very reasonable-seeming output that is wrong. I’m not knowledgeable enough to catch that, haha.

Perhaps we can workshop something as a community, provided we determine what all needs fixing and adding so we have a clear scope of work.


I haven’t gone super deep yet, but I will say that working on the quality report can touch on many different parts of the system, so it’s a great opportunity to learn more about how ODM works from the inside out.

While we’re making a wish list, is there anything specific that folks wish were covered in the quality report but isn’t?


I’d love an Appendix section that shows calibrated vs. uncalibrated images in two lists, with small thumbnails, full filenames, and metadata like camera model, ISO, etc. It would be helpful recon for troubleshooting difficult mixed datasets.

Edit (brain not functioning on mobile):
A map with symbols showing uncalibrated camera locations (using image EXIF location, if available) preceding the above tables would be incredible. Do you have a cluster of images that failed to calibrate over part of the survey? Why? What’s there? Water? A change in elevation? Did your gimbal mess up?
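A rough sketch of the first step for both ideas, splitting the dataset into calibrated and uncalibrated lists. This assumes (my assumption, worth verifying against the actual code) that an image counts as calibrated if it appears as a shot in the reconstruction:

```python
def partition_by_calibration(all_images, reconstructed_shots):
    """Split image filenames into (calibrated, uncalibrated) lists.

    An image is treated as calibrated if its filename appears in the
    reconstruction's shot list; everything else is uncalibrated.
    """
    shots = set(reconstructed_shots)
    calibrated = sorted(name for name in all_images if name in shots)
    uncalibrated = sorted(name for name in all_images if name not in shots)
    return calibrated, uncalibrated

# e.g. partition_by_calibration(["a.jpg", "b.jpg", "c.jpg"], ["c.jpg", "a.jpg"])
# -> (["a.jpg", "c.jpg"], ["b.jpg"])
```

The uncalibrated list could then be joined with EXIF GPS positions (where present) to plot the failure clusters on the map.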
