I am trying to find the best way to quantitatively analyze the quality of the orthos I am generating. Visual inspection is of course the first order of business (here is a good place to say kudos again to the awesome (!) curtain comparison tool https://opendronemap.github.io/UAVArena/ by @pierotofy and everyone else who contributed).
But I am looking for more quantitatively rigorous methods/tools to compare orthomosaics after generation by different engines: for instance, defining and measuring noise metrics, resolution, accuracy, geo-referencing, etc.
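As one concrete example of such a metric, here is a minimal sketch of a no-reference sharpness score (variance of a discrete Laplacian, a common blur proxy: higher means sharper). This is my own illustration, not something ODM reports; it assumes you have already loaded an ortho tile as a 2-D grayscale NumPy array.

```python
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of a 5-point discrete Laplacian over the interior
    pixels of a 2-D float array. Higher values = sharper image;
    a flat (featureless) tile scores 0."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())
```

Scores are only comparable between orthos of the same scene at the same GSD, so this is useful for ranking parameter sets against each other, not as an absolute number.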
Per the above, the opensfm/reports folder contains files called tracks.json and reconstruction.json, which hold information on “wall times” and many other parameters.
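Since the exact schema of these report files varies between OpenSfM versions, a sketch like the following reads them defensively and only surfaces the fields it actually finds (the `wall_times` key is an assumption based on the “wall times” mentioned above, not a documented contract):

```python
import json
from pathlib import Path

def summarize_report(report_dir: str) -> dict:
    """Pull a few headline numbers out of an OpenSfM report folder.
    Reads reconstruction.json if present and reports top-level numeric
    fields, plus a total of any 'wall_times' mapping it finds."""
    summary = {}
    recon_path = Path(report_dir) / "reconstruction.json"
    if recon_path.exists():
        data = json.loads(recon_path.read_text())
        if isinstance(data, dict):
            # keep simple numeric fields as-is
            summary.update({k: v for k, v in data.items()
                            if isinstance(v, (int, float))})
            # sum per-stage wall times into one comparable number
            if isinstance(data.get("wall_times"), dict):
                summary["total_wall_time"] = sum(data["wall_times"].values())
    return summary
```

A single total wall time per run makes it easy to put timing next to quality metrics when comparing parameter sets.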
After sifting through some of the output, the items below could serve as rough KPIs:
In ODM 1.0.0, the new shots.geojson file in the odm_report output folder should hold the coordinates of all images. I have seen runs that completed successfully but whose output was garbled, where this number was very low compared to the known number of input images (say only 2 out of 98 or so).
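That check is easy to automate. Assuming shots.geojson is a standard GeoJSON FeatureCollection with one feature per reconstructed shot, something like this flags the “2 out of 98” case before any eyeballing:

```python
import json

def coverage_ratio(shots_geojson_path: str, n_input_images: int) -> float:
    """Fraction of input images that ended up with a camera pose in
    shots.geojson. A ratio well below 1.0 flags a garbled
    reconstruction even when the run itself reported success."""
    with open(shots_geojson_path) as f:
        fc = json.load(f)
    n_shots = len(fc.get("features", []))
    return n_shots / n_input_images
```

A simple threshold (e.g. reject anything under 0.9) would make this the first filter in an automated comparison pipeline.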
You might want to check out the last thread on quality reporting that I’m aware of. I imagine it would be a valuable addition and would drive further interest from photogrammetry users… would you be interested in starting a fund?
…I found that, outside of PIX4D’s quality report, many apps offer little in the way of quantitative metrics. I use GIS regularly, and with image analysis tools one could run a number of unsupervised image classification processes and then quantitatively compare the raster outputs in several ways. Relative and/or absolute accuracy? I use NAIP imagery for georeferencing, but it can still be dubious if you don’t have well-distributed reference points (there’s still a bit of art to georeferencing). However, I would start by asking: what specific question are you trying to answer?
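To illustrate the raster-comparison idea: once the same unsupervised classification has been run on two orthos of the same area, a crude first-pass comparison is pixel-wise agreement between the two class rasters. This sketch assumes the rasters are already co-registered onto the same grid and loaded as integer NumPy arrays:

```python
import numpy as np

def raster_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of pixels where two classified rasters assign the same
    class. Only meaningful if both rasters are co-registered and
    cover the same extent at the same resolution."""
    if a.shape != b.shape:
        raise ValueError("rasters must be co-registered to the same grid")
    return float(np.mean(a == b))
```

More rigorous options (confusion matrix, Cohen's kappa) build on the same co-registration requirement, so getting the grids aligned is the real work.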
Thanks Nick!
Appreciate the reply and the pointers. The main question is whether there is an established, effective quantitative metric (or metrics) to judge an ortho, aside from simply looking at it. Specifically, if I am looking for the best set of parameters, I would like to be able to run some routine to filter out the first tier of orthos that don’t cut it. In the end I may be left with a few that I will have to eyeball. But even with not too many parameters the permutations are many, and I’d like to make this more efficient.
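The sweep itself is straightforward to mechanize. A hypothetical sketch (the option names below are just placeholders, not a fixed ODM option list): enumerate every combination of the options you want to vary, run each, score the resulting ortho with a cheap metric, and only eyeball the survivors.

```python
import itertools

def parameter_grid(**options):
    """Yield one dict per combination of the given option lists, e.g.
    parameter_grid(feature_quality=["high", "ultra"],
                   min_num_features=[8000, 16000])
    yields 4 dicts. Feed each dict to your ODM runner, score the
    output, and discard the bottom tier before visual inspection."""
    keys = list(options)
    for values in itertools.product(*(options[k] for k in keys)):
        yield dict(zip(keys, values))
```

Since the grid grows multiplicatively with each option, even a coarse automated filter in front of visual inspection pays off quickly.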
Per classification - I think it might get shaky. My data can become fairly uniform (field-like), so I don’t know how robust this path would be.
Per pix4d - their report is indeed helpful, so I tend to follow a similar route for ODM.