Does anybody know how to interpret the ODM quality report? More specifically, what is the meaning of the survey data, feature details, reconstruction details, and camera models sections? What information is essential in order to assess the quality of the generated products?
I’m working on a writeup to help with this, but it isn’t complete.
For the time being, a good resource for getting familiar with many of the concepts in the report is the Pix4D documentation on interpreting their Quality Report.
Do you have a report you can screenshot so we can go over the numbers section by section with you?
Thank you for your help.
Yes, I have the quality report. However, as a new user, I am not authorized to upload it, nor to provide more than one snapshot (embedded media).
Anyway, the video you shared has already helped a lot.
You’ve been promoted! Don’t do any shady stuff, haha
Before going into the details, I just want to mention that the previews in the quality report appear a little confusing because I am mapping a cut slope (the camera is pointed towards the horizon). As a result, WebODM has reconstructed not only the engineered slope I am interested in, but also the hillslopes that appear behind it (in the background). I am struggling to find the best set of processing parameters to eliminate that…
ODM Quality Report (sections)
1 - Processing summary: which information should I care about?
2 - GPS/GCP/3D errors: since I have not used GCPs, the report displays only the precision of the x, y, z coordinates and 3D points based on GPS data. Right?
3 - Feature details: does the heatmap exhibit concentrations of keypoints?
4 - Reconstruction details? No idea…?
5 - Camera models: residuals must be as small as possible everywhere? Is there any threshold?
A) You likely want to look into image masking to help limit background reconstruction. There may be another upcoming feature for this as well.
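If it helps, here's a minimal sketch of generating such a mask programmatically. It assumes a fixed horizon line (something you'd tune per dataset) and relies on my understanding of ODM's convention of a `_mask`-suffixed file alongside each image, where black pixels are excluded from reconstruction; please check the current ODM docs before relying on that naming. The image name in the comment is hypothetical.

```python
def make_horizon_mask(width, height, keep_below=0.45):
    """Build a binary mask as a row-major grid of pixel values:
    255 = keep the pixel, 0 = exclude it from reconstruction.

    keep_below is the assumed fraction of the frame height where the
    horizon sits; everything above that row is masked out."""
    horizon_row = int(height * keep_below)
    return [[255 if row >= horizon_row else 0 for _ in range(width)]
            for row in range(height)]

# Example: mask the top 45% (sky/background hills) of a frame.
mask = make_horizon_mask(width=4000, height=3000)
# You would then save this as e.g. DJI_0001_mask.png (hypothetical name)
# next to DJI_0001.JPG, using your image library of choice.
```

For a slanted or irregular horizon you'd replace the straight cut with a per-column row index, or paint masks by hand in an image editor.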
Entirely up to you! I think all of it is worth inspecting since each part tells a different story about your data.
To my understanding, that is correct.
To my understanding, that is correct: it shows the average distribution of tiepoints across the image frames. It can help you find issues with the lens, focusing, dirt, etc.
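As a toy illustration of what that heatmap summarizes, here's a sketch that bins tiepoint coordinates into a coarse grid; cells with very few points can hint at low-texture regions, dirt, or focus problems in that part of the frame. The function and data are hypothetical, not ODM's actual implementation.

```python
from collections import Counter

def tiepoint_heatmap(points, width, height, bins=8):
    """Count (x, y) tiepoint coordinates per cell of a bins x bins grid.
    Returns a Counter keyed by (grid_row, grid_col)."""
    counts = Counter()
    for x, y in points:
        col = min(int(x / width * bins), bins - 1)
        row = min(int(y / height * bins), bins - 1)
        counts[(row, col)] += 1
    return counts

# Hypothetical tiepoints in a 100x100 px frame, binned 2x2:
heat = tiepoint_heatmap([(5, 5), (10, 10), (99, 99)], 100, 100, bins=2)
```

Plotting `heat` as an image (one pixel per cell) gives you essentially the report's heatmap.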
This I need to study more. I think a lower residual is better, and the track length is the average number of images your tiepoints are shared between. If that is correct, more tracks with a higher image count likely means higher overlap/sidelap, or possibly a lower GSD.
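If track length is indeed the average number of images observing each tiepoint, it could be computed like this (a sketch over a made-up data structure, not ODM/OpenSfM code):

```python
def average_track_length(tracks):
    """tracks: mapping of track_id -> set of image ids that observe
    that 3D point. Returns the mean number of observations per track."""
    if not tracks:
        return 0.0
    return sum(len(images) for images in tracks.values()) / len(tracks)

# Hypothetical example: one point seen in 3 images, another in 2.
avg = average_track_length({"pt_a": {1, 2, 3}, "pt_b": {1, 2}})
```

A higher average here would mean each reconstructed point is constrained by more cameras, which is generally what extra overlap buys you.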
My understanding is that this models how close the lens is to an ideal lens: where a pixel was imaged vs. where it was reconstructed, and the direction and severity of these offsets across the lens.
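For intuition, here is a simplified radial term of the Brown distortion model, the family of models SfM pipelines typically fit. The coefficients below are illustrative, not taken from any real calibration. The offset between the ideal point and the distorted point is the kind of per-position residual the camera-model plot visualizes as arrows across the frame.

```python
def brown_radial(x, y, k1, k2):
    """Apply radial distortion to a normalized image coordinate (x, y).
    k1, k2 are radial distortion coefficients; returns the distorted
    point. The vector from (x, y) to the result is the modeled offset."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Hypothetical coefficients: points farther from the center shift more.
dx, dy = brown_radial(1.0, 0.0, k1=0.1, k2=0.0)
```

In the report, small and randomly oriented residual arrows suggest the fitted model explains the lens well; large or systematically patterned ones suggest a poor calibration or a lens the model can't capture.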
Thank you for the explanation Saijin_Naib!
Yes, I have already masked the original set of images. The result (3D point cloud) is much, much better and the processing time much faster!