What’s the difference between FLANN and brute force?

I would like to know what FLANN and brute force do differently.

I’ve got a feeling that ORB or SIFT isn’t the only thing that impacts the result.


Brute force is a really simple matcher that pairs every single image against every other image, which is why it takes inordinately long on big datasets. It does no geographic filtering or anything else to get rid of impossible matches. This can sometimes degrade quality: features get matched in an image pair where a physical match is impossible, but they meet the threshold, so you get garbage in that spot in the reconstruction.
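For a concrete picture, here’s a minimal sketch of brute-force matching in OpenCV with ORB (the image paths are just placeholders), where every descriptor in one image gets compared against every descriptor in the other:

```python
import cv2

# Load two images as grayscale (placeholder paths).
img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB features and compute binary descriptors.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute force: every descriptor in img1 is compared against every
# descriptor in img2 (Hamming distance for binary descriptors).
# crossCheck keeps only mutual best matches, which filters some garbage.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
```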

OpenCV docs have a great description of both, especially FLANN:
https://docs.opencv.org/5.x/dc/dc3/tutorial_py_matcher.html
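Roughly following that tutorial, this is what FLANN-based matching with SIFT looks like, with Lowe’s ratio test to discard ambiguous matches (paths and parameter values are illustrative only):

```python
import cv2

img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN builds a KD-tree index over the float descriptors and searches it
# approximately, so it scales far better than exhaustive comparison.
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # more checks = more accurate, but slower
flann = cv2.FlannBasedMatcher(index_params, search_params)

# Lowe's ratio test: keep a match only if it is clearly better than the
# second-best candidate, discarding ambiguous pairings.
good = [m for m, n in flann.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
```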


So when should you choose ORB+BF over SIFT+FLANN?

My priority, first and foremost, is getting a good point cloud.


ORB should be more noise tolerant, and it can sometimes match better than SIFT depending on the data, but mostly you’d choose it for speed at this point, if the dataset is small enough that the matching time doesn’t become too long.

I need to tune our ORB implementation, and one of the planned optimizations is enabling it to work with FLANN like SIFT does, which would vastly improve its scalability on large datasets.
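To be clear, this isn’t what our pipeline does today, but as a sketch of what “ORB with FLANN” could look like: OpenCV’s FLANN already supports binary descriptors through an LSH index. The parameter values below follow the OpenCV tutorial and would still need real tuning:

```python
import cv2

img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors can't use the KD-tree index; FLANN's LSH index
# searches on Hamming distance instead.
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6, key_size=12, multi_probe_level=1)
flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))

# With LSH, knnMatch may return fewer than k neighbours per descriptor,
# so filter defensively before applying the ratio test.
matches = flann.knnMatch(des1, des2, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
```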

SIFT+FLANN, or sometimes HAHOG+BOW, should give you the densest point clouds at present.


I’ve seen that ORB and SIFT give me very different camera calibrations.

What affects camera calibration?

I feel an urge to ask you a lot of things and add the answers to the docs.


Doc updates for all of this are on my list. I’m working on it :slight_smile:

I’ll have to dig into the feature types themselves again to refresh my understanding before I give an authoritative answer as to how they differ, but with a lot of handwaving, almost anything can affect camera calibration.

As to why the chosen feature type affects it to any significant extent, I’d say it’s likely because our binary feature types aren’t well implemented or tuned. I wouldn’t expect SIFT and HAHOG to have terribly different calibration results, for instance. Some variance, sure, but nothing crazy.

I’ve run tests on a very oblique dataset, and ORB, SIFT, and HAHOG show small differences in camera calibration. ORB and HAHOG give about the same number of points, and SIFT gives the most.

But with a near-nadir dataset, ORB gives a really bad camera calibration, way off from SIFT on the same dataset.


Yeah, that’s likely due to the need to tune things around our binary feature types, unfortunately.

So it can be adjusted?

You only need time, then.

I hardly have time myself; I’m trying to learn about photogrammetry between work and family.


Yeah, time, and I need to educate myself on these things more fully. I am unfortunately just scratching the surface, which is why our ORB/AKAZE implementation gives us the trouble it does. I did not do a stellar job with it.

How is it implemented?

I have some programming experience with Python, Racket, and VBA. When I get time I will get into F# and C#, but I don’t know when.

It would be a major headache, but some parts of the pipeline would probably benefit from being written in a functional language, as it’s much easier to handle multiple threads.


What (I think) I need to adjust is all in Python: setting some variables to tune things. I frankly just don’t have enough practical knowledge and field data to test against to make these changes confidently. It is something I see great value in and want to accomplish, since I’ve been spearheading adding ORB, but I’m just not where I need to be to understand things fully. At a first pass, I’d love to get ORB/AKAZE working with FLANN, which looks like it might be “simple” to get working, but then we’d have to tune a bunch of things. Similarly, the 15,000 feature limit is not a limit of ORB itself but of the implementation/tuning, as the sketch below illustrates.
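For instance, in plain OpenCV (an illustration, not our actual pipeline code), the feature cap is just a constructor argument:

```python
import cv2

img = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# The maximum feature count is a parameter of the detector, not something
# intrinsic to ORB; a pipeline's cap is whatever value it passes here.
orb = cv2.ORB_create(
    nfeatures=30000,   # per-image feature budget; raise or lower freely
    scaleFactor=1.2,   # pyramid decimation between octave levels
    nlevels=8,         # number of pyramid levels
    fastThreshold=20,  # FAST corner threshold; lower finds more points
)
kp, des = orb.detectAndCompute(img, None)
```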


So it’s a matter of getting the math right, is it?


I don’t know enough to know if that’s all there is to it :sob:

I don’t know much about the subject, but some part of it is probably a built-in accuracy problem in ORB. Maybe some things can’t be tuned away.

ORB does well when there are lots of oblique images; SIFT/HAHOG seem to need fewer.

I’ve looked at the OpenCV matching code, and it seems that camera calibration is not part of matching but a separate stage.
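As a rough sketch of my understanding (the camera matrix below is an assumed initial guess, not anything from a real pipeline), the stages chain together like this: matching is purely 2D, and the camera model only enters afterwards, when estimating geometry:

```python
import cv2
import numpy as np

img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)

# Stage 1: detection + matching. Purely 2D pixel work; no camera model.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Stage 2: geometry. Only now does a camera matrix K appear; pipelines
# refine focal length and distortion later still, in bundle adjustment.
# K here is an assumed guess (focal f, principal point cx, cy).
f, cx, cy = 3000.0, 2000.0, 1500.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)
```

If that picture is right, the feature type would still influence calibration indirectly, since whatever matches survive stage 1 are all the later stages have to work with.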

So I wonder why the results are so different in terms of bowling and cloud density.

What are the stages involved in matching and camera calibration?