I noticed an issue in dense point cloud reconstruction for some datasets: the boundary of the dense point cloud is far smaller than the sparse one, and I'm not sure why. I tried both pc-geometric and pc-skip-geometric, but neither helps. In the first image, the sparse cloud basically covers all the area the drone flew over, but the dense point cloud in the second image misses a lot of area around the boundary, and it also has strips, probably because some images' depth data is not merged. I used Docker ODM 3.1.7 (latest) and tried feature-quality medium and high, and pc-quality low, medium, and high. I can also confirm that older versions show the same problem.
I found the cause: the problem comes from the OpenSfM stage. The undistorted images all look like this (feature-quality: medium). The overlap looks decent; I'm trying to tweak the parameters to see if it can be fixed.
I did some checks on the matching results. At medium quality, most image pairs have one or two mismatched features, but the matches are still good overall. I'd expect these wrongly matched features to be filtered out during bundle adjustment, since good matches are still in the majority, but the wrong matches seem to be affecting the results anyway. In a feature-quality: low and min-num-features: 500 reconstruction there are fewer matched features, but also no wrongly matched ones.
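For reference, the standard first line of defense against wrong matches (before bundle adjustment ever sees them) is Lowe's ratio test, which OpenSfM exposes via its `lowes_ratio` setting. Below is a minimal sketch of the idea on made-up descriptor distances, not OpenSfM's actual code: a tentative match is kept only when its best descriptor distance is clearly smaller than the second-best, so ambiguous features get dropped.

```python
def ratio_test(matches, ratio=0.8):
    """Lowe's ratio test on (best, second_best) descriptor distances.

    A match is ambiguous (and likely wrong) when the best and
    second-best candidates are nearly as close to the query feature.
    """
    return [(best, second) for best, second in matches
            if best < ratio * second]

# Made-up distances: (best, second-best) per tentative match.
candidates = [(0.2, 0.9), (0.5, 0.55), (0.1, 0.8), (0.45, 0.5)]
print(ratio_test(candidates))  # the two ambiguous pairs are dropped
```

A stricter ratio throws away more ambiguous matches at the cost of fewer survivors, which may be relevant here since weak SIFT features would tend to fail exactly this test.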
ORB features work properly on this dataset, and there were no wrong matches when I plotted the results. I guess it's some issue related to SIFT?
One difference between the SIFT and ORB implementations is that SIFT features are extracted more aggressively to get enough features: the threshold keeps being reduced until the feature count exceeds min-num-features. ORB, by contrast, directly targets the top min-num-features features. With the same min-num-features: 10000, SIFT can return more than 20000 features, while ORB will cap at 10000. I'm not sure whether this introduces weaker SIFT features that are easily matched wrongly.
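The retry loop described above can be sketched as follows. This is an illustration of the strategy, not OpenSfM's actual code: the halving schedule is an assumption, and a dummy detector stands in for a real call like `cv2.SIFT_create(contrastThreshold=t).detect(image)`.

```python
def detect_adaptive(detect, min_features, threshold=0.04, floor=1e-4):
    """Lower the detection threshold until enough features are found.

    `detect` is any function threshold -> list of features (in real
    code, a SIFT detection pass at that contrast threshold). Each
    retry admits weaker, lower-contrast features.
    """
    feats = detect(threshold)
    while len(feats) < min_features and threshold > floor:
        threshold /= 2.0  # assumed schedule: halve and retry
        feats = detect(threshold)
    return feats, threshold

# Dummy detector: lower thresholds admit more (weaker) features.
fake = lambda t: ["feat"] * round(1.0 / t)
feats, final_t = detect_adaptive(fake, min_features=100)
print(len(feats), final_t)  # overshoots min_features; threshold dropped
```

Note the loop stops at the first threshold that *exceeds* the target, which is why SIFT can overshoot (20000+ features for a 10000 target) while ORB's fixed `nfeatures` cap cannot, and why the extra features admitted on the last retry are by construction the weakest ones.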
I believed that even with some wrong matches, the bundle adjustment stage would be robust enough to filter them out, because the majority of the matches are still correct. But it seems to be sensitive to these errors.
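One possible reason: bundle adjustment typically handles outliers with a robust loss (e.g. a Huber-style loss in the underlying solver) that *downweights* large residuals rather than rejecting them outright, so a batch of consistent wrong matches can still drag the solution. A toy sketch of that behavior, on a one-dimensional estimation problem rather than an actual bundle adjustment:

```python
def huber_weight(r, delta=1.0):
    """Huber influence weight: 1 inside delta, delta/|r| outside.

    Outliers are downweighted, never given exactly zero weight.
    """
    return 1.0 if abs(r) <= delta else delta / abs(r)

def robust_mean(obs, iters=20, delta=1.0):
    """Iteratively reweighted least squares with a Huber loss."""
    est = sum(obs) / len(obs)  # start from the plain mean
    for _ in range(iters):
        w = [huber_weight(x - est, delta) for x in obs]
        est = sum(wi * xi for wi, xi in zip(w, obs)) / sum(w)
    return est

inliers = [10.0, 10.1, 9.9, 10.05, 9.95]
outliers = [25.0, 25.2]  # a few consistent wrong observations
print(robust_mean(inliers))             # ~10.0
print(robust_mean(inliers + outliers))  # pulled above 10 despite downweighting
```

The same effect in bundle adjustment would show up as a slightly warped reconstruction rather than a clean rejection of the bad matches, which matches the symptom here.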
If you have any insight into how we can “unlock” ORB so it isn't capped at 15,000 features, and how to adjust its tunable OpenCV parameters, I'd be quite grateful. My research on it hasn't yielded much because I'm in over my head.
With OpenCV's implementation, I don't think you can do it directly. One workaround is to request a very large number first, say 40000; the keypoints are returned with response scores, and you can then filter them with a threshold. Obviously not very efficient, though, unless you reimplement the whole thing.
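The over-request-then-filter idea looks roughly like this. In real code the detector would be `cv2.ORB_create(nfeatures=40000)` and each `cv2.KeyPoint` carries a `.response` score; here a namedtuple stands in so the filtering logic is the focus.

```python
from collections import namedtuple

# Stand-in for cv2.KeyPoint: real ORB keypoints expose a .response
# score (Harris or FAST, depending on the detector's scoreType).
KeyPoint = namedtuple("KeyPoint", ["pt", "response"])

def filter_by_response(keypoints, threshold):
    """Keep only keypoints whose detector response clears `threshold`."""
    return [kp for kp in keypoints if kp.response >= threshold]

def top_k_by_response(keypoints, k):
    """Alternative: keep the k strongest keypoints regardless of threshold."""
    return sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:k]

kps = [KeyPoint((i, i), r) for i, r in enumerate([0.9, 0.2, 0.7, 0.05, 0.4])]
print(len(filter_by_response(kps, 0.3)))                 # 3
print([kp.response for kp in top_k_by_response(kps, 2)])  # [0.9, 0.7]
```

The top-k variant gives a hard cap like ORB's native behavior, while the threshold variant behaves more like SIFT's contrast cutoff; either way, the cost of detecting 40000 keypoints up front is paid regardless of how many survive.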
But I'm not a fan of using a large number of features (over 20000). They significantly slow down processing, and the quality improvement is small.
I tried tuning the OpenSfM parameters related to feature matching and the bundle adjustment threshold, but it didn't help. I'm done exploring; I'll blame SIFT for this one.
There are other tunables I'm after, and sometimes you need more than a few features. Yes, it's slower, but with ORB the time penalty is nowhere near as large, which makes high-feature-count ORB an interesting path to pursue.
Yes. The drone does turn, but the camera is non-rotated (it always points in the same direction). I never thought that would matter much; I'll make sure it's rotated next time.