This is related to the existing issue:
but presents a different method of calibration.
VisualSfM and ODM don’t appear to model lens distortion during feature detection and alignment, while other tools do, like PhotoScan, precisionmapper.com, and most of the other online services.
VisualSfM ignores the GPS locations of the cameras and produces a curved or domed surface when it computes best-fit feature matches. ODM uses the camera GPS locations and produces a flatter surface, but handles feature-matching conflicts poorly:
Top view of road looks kind of okay:
But side view is bad:
This scene should be something like:
After features are detected, instead of taking the best average triangulation of camera locations and matches, radial and cylindrical transforms can be applied to the features to solve for the pattern of distortion within the lens. Using image pairs, the total distortion of a feature match would be the sum of the two images’ distortions at the given point. With thousands of feature pairs, the general lens curvature should become obvious. As a bonus, once the best-fit lens curve is found, a map of residuals can be built to capture the irregularities of the particular lens, like this:
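As a rough illustration of the fitting step, the sketch below (function and variable names are hypothetical, and only a single radial coefficient of the Brown–Conrady model is solved) fits the lens curve by closed-form least squares, given feature locations in normalized, centered image coordinates and their reprojection residuals from a first-pass triangulation:

```python
import numpy as np

def fit_radial_k1(points, residuals):
    """Least-squares fit of one radial coefficient k1 in the model
    x_distorted = x * (1 + k1 * r^2), with `points` in normalized
    coordinates centered on the principal point and `residuals` the
    observed reprojection errors (same shape as `points`)."""
    r2 = np.sum(points ** 2, axis=1)          # r^2 for each feature
    design = (r2[:, None] * points).ravel()   # model predicts k1 * r^2 * x
    obs = residuals.ravel()
    return design @ obs / (design @ design)   # 1-D closed-form least squares
```

With thousands of feature pairs stacked into `points`, the single dot-product solve is what makes this step cheap; higher-order radial terms or a cylindrical term would just add columns to the design matrix.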
As long as the camera lens and focal length don’t change (e.g. fixed-focus UAV cameras), the same distortion and residual map can be used to transform all feature locations:
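Reusing the residual map could look something like this minimal sketch (the grid layout and names are assumptions, not an existing API): the map is stored as an H×W×2 grid of per-lens (dx, dy) residuals over the normalized image area, sampled here by nearest neighbour and subtracted from every feature location:

```python
import numpy as np

def apply_residual_map(pts, grid, extent=1.0):
    """Correct feature locations with a precomputed per-lens residual
    map: an HxWx2 grid of (dx, dy) residuals covering the square
    [-extent, extent]^2 in normalized image coordinates. The map
    depends only on the lens, so it is built once and then applied
    to features from every image taken with that lens."""
    h, w = grid.shape[:2]
    # Nearest-neighbour lookup: map coordinates to grid indices.
    ix = np.clip(np.round((pts[:, 0] + extent) / (2 * extent) * (w - 1)).astype(int), 0, w - 1)
    iy = np.clip(np.round((pts[:, 1] + extent) / (2 * extent) * (h - 1)).astype(int), 0, h - 1)
    return pts - grid[iy, ix]
```

A real implementation would likely interpolate bilinearly rather than snap to the nearest cell, but the reuse pattern is the same.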
This step should be quick compared to SIFT or other long-running steps. It could be performed iteratively during matching, or once after a first-pass match is complete, with a final match step applying the resulting transform to the feature locations.
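The one-pass variant of that workflow can be sketched end to end on synthetic data (everything here is illustrative, not an existing pipeline): a simulated first-pass match yields reprojection residuals, a single radial coefficient is solved in closed form, and the final pass transforms every feature location by inverting the model with fixed-point iteration:

```python
import numpy as np

def solve_k1(points, residuals):
    # Closed-form least squares for the model residual = k1 * r^2 * point.
    r2 = np.sum(points ** 2, axis=1)
    design = (r2[:, None] * points).ravel()
    return design @ residuals.ravel() / (design @ design)

def invert_radial(pts_d, k1, iters=10):
    # Fixed-point inversion of x_d = x * (1 + k1 * r^2).
    pts = pts_d.copy()
    for _ in range(iters):
        r2 = np.sum(pts ** 2, axis=1, keepdims=True)
        pts = pts_d / (1.0 + k1 * r2)
    return pts

# Simulated first pass: features distorted by an unknown k1 = -0.05.
rng = np.random.default_rng(1)
true_pts = rng.uniform(-0.5, 0.5, (800, 2))
r2 = np.sum(true_pts ** 2, axis=1, keepdims=True)
observed = true_pts * (1.0 + (-0.05) * r2)
residuals = observed - true_pts          # stand-in for reprojection errors
k1 = solve_k1(observed, residuals)       # solve on first-pass locations
corrected = invert_radial(observed, k1)  # final pass: transform all features
```

Because the solve uses the distorted first-pass locations, the recovered coefficient carries a small bias; running the solve-and-transform loop a second time, as the iterative variant suggests, would shrink it further.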