I’ve had a chance to play with this concept for a while, and the main issue has been drift error accumulating while chaining homographies. The error can be kept quite small if the following assumptions hold true:
- Nadir-only images (captured with a gimbal), so only yaw angles are allowed
- Images captured in terrain-follow mode (constant altitude over flat terrain)
- A good estimate of the camera’s focal length is available in the camera database
Variations in flight altitude and camera angles break the planar assumption, which makes the problem much more difficult to speed up. I think the constraints apply well to agricultural field datasets, so the method should still be useful as long as it’s really fast.
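For anyone curious, here is a minimal sketch of what the chaining step can look like. It assumes OpenCV, only matches consecutive pairs, and the helper names are mine, so treat it as an illustration rather than the actual implementation:

```python
# Minimal sketch of homography chaining for nadir imagery (hypothetical
# helper names; assumes OpenCV and overlap between consecutive images).
import cv2
import numpy as np

def pairwise_homography(img_a, img_b):
    """Estimate the homography mapping img_b's pixels into img_a's frame."""
    orb = cv2.ORB_create(4000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_b, des_a)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def chain_to_reference(images):
    """Compose pairwise homographies so every image maps into frame 0."""
    transforms = [np.eye(3)]
    for prev, curr in zip(images, images[1:]):
        H = pairwise_homography(prev, curr)
        transforms.append(transforms[-1] @ H)  # accumulate along the chain
    return transforms
```

The `transforms[-1] @ H` composition is exactly where the drift creeps in: every chained homography inherits the error of all the ones before it, which is why the assumptions above matter so much.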
Where I performed agricultural field surveys, a good number of the fields were far from what one might consider flat, so we couldn’t deploy Pix4DFields for that exact reason…
Looks like we’ve got some good company in this problem space, at least!
Would these be part of the Processing Report? These are not only interesting, but look like they might be helpful for debugging… Not sure how they’d scale with lots of images, however.
Couldn’t you chain homographies across many different spanning trees and average the results in order to smooth drift across all directions?
Like start with a random image, draw a random spanning tree (or a probabilistic one where each edge has a chance to be sampled according to its weight; you could also jitter edge weights for more randomness), chain along the tree, store the solution (cameras and points), repeat N times, and average?
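Something like this, I imagine (a rough sketch assuming networkx and numpy; `H` is a hypothetical dict of pairwise homographies, and naive element-wise averaging is a crude stand-in for a proper robust average):

```python
# Sketch of the randomized spanning-tree averaging idea (assumes networkx
# and numpy; H[(a, b)] is a hypothetical dict of 3x3 homographies taking
# image b into image a, with H[(b, a)] available as the inverse).
import numpy as np
import networkx as nx

def average_over_trees(G, H, root, n_trees=20, jitter=0.1, rng=None):
    """Chain homographies along N jittered spanning trees and average.

    G: connected matching graph with a 'weight' attribute per edge
       (lower weight = stronger match).
    Returns per-image transforms into the root frame, averaged over trees.
    """
    rng = rng or np.random.default_rng()
    sums = {n: np.zeros((3, 3)) for n in G.nodes}
    for _ in range(n_trees):
        J = G.copy()
        for u, v, d in J.edges(data=True):
            d['weight'] *= 1.0 + jitter * rng.random()  # randomize tree choice
        T = nx.minimum_spanning_tree(J, weight='weight')
        chained = {root: np.eye(3)}
        for u, v in nx.bfs_edges(T, root):  # u is already solved, v is new
            chained[v] = chained[u] @ H[(u, v)]
        for n, M in chained.items():
            sums[n] += M / M[2, 2]  # normalize scale before averaging
    return {n: M / n_trees for n, M in sums.items()}
```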
Yes, it’s definitely possible (and a good idea!). The only drawback is that computing the homography graph using the TRASAC algorithm is not the fastest (granted, I still have to optimize it / make it multithreaded). I’ve had some good successes using a single spanning tree starting from the most central image in the matching graph (minimizing the hops required to compute the homography chain for all images) and then letting bundle adjustment fix the drift, but as I experiment more with larger datasets I could add this one too.
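For reference, picking that most central image is essentially a graph-center query; here is a sketch assuming networkx, with hypothetical names:

```python
# How the "most central image" root selection might look (assumes networkx
# and a connected matching graph).
import networkx as nx

def most_central_image(matching_graph):
    """Pick the node with minimum eccentricity (fewest worst-case hops),
    so homography chains to every other image stay as short as possible."""
    ecc = nx.eccentricity(matching_graph)  # max hop distance per node
    return min(ecc, key=ecc.get)
```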