While watching a wall of processing debug messages scroll by
(I really need to set my terminal up to mimic the matrix font), I got to thinking:
Do we have a concept of Geometrically Verified Matching within our pipeline?
Matching Strategy
Allows the user to determine how the images are matched:
- Use Geometrically Verified Matching: Slower but more robust. If selected, geometrically inconsistent matches are discarded. Useful when many similar features are present throughout the project: rows of plants in a farming field, window corners on a building’s facade, etc.
This should help prevent spinning our wheels trying to match tiepoints that have no physical way of being the same point, despite “matching” at the confidence threshold because they’re merely spectrally similar.
From my understanding, this would have to be a filter step after we get camera positions and optimization parameters, but before we actually do feature matching…
Supposedly, this can also really help improve the generated point cloud in highly homogeneous environments by discarding errors of commission in matching.
Interesting; I don’t think we do (or at least I don’t recognize this from the OpenSfM source).
Would this be something appropriate / wanted in our pipeline?
If so, fundraising event?
It would be a cool addition for sure; I just don’t know if it would be a priority.
Personally, my short-term goals are to improve thermal band support and to lower the memory requirements for densification.
But contributions are always welcome.
Wouldn’t throwing out erroneous matches/tiepoints help with that (as well as filtering 100% coincident points)?
I know, the usual answer is PRs, but this one is well beyond me.
Perhaps with the sparse point cloud, but not the dense. The depthmap fusion step of OpenMVS is currently a memory bottleneck. Once that’s improved, we can increase the quality of point clouds further by allowing 2-view triangulations (currently we have a minimum of 3 views for points to be included).
Yes, by default OpenSfM does geometrically verified matching (either F-matrix or E-matrix estimation, depending on whether the camera model has non-default values and/or is non-rectilinear).
Welcome!
So this happens on, say, Brown-Conrady cameras as well? Is it currently active in our pipeline?
It happens with any camera model:
- If it has default (i.e. unknown) internal parameters, we run fundamental matrix estimation.
- For fisheye models and/or any model whose internals are known, we run essential matrix estimation.
Usually, these checks take only a fraction of the time spent matching image features.
But, as they work with two views, they still can’t filter non-physical points that happen to satisfy the epipolar constraint (and were deemed visually similar in the two images by the previous matching step): usually, we need three views to really discard such points.
So, it sounds like this is currently active and working in our pipeline, from what you’ve stated.
I’m not sure I understand what this comment from Piero means in that context, though.
Hey @YanNoun, thanks for chiming in! Awesome to know this is a feature in OpenSfM.
I meant that OpenMVS is currently configured to filter out points that haven’t been seen by at least 3 views (3 photos). Since drone surveys sometimes lack sufficient overlap, we can increase the coverage of the model by allowing points that have been seen by at least 2 views. That’s memory intensive in its current state, though.