Excellent. In my testing with COLMAP, it doesn’t adequately balance intrinsics and extrinsics when used with GPS data, so the lens corrections can be over-eager, especially over large datasets.
That said, I’m super intrigued by the use of view occlusion. That explains the quality of results from difficult scenes, and that seems less computationally expensive than TSR doing a ray trace post hoc to refine the mesh.
I have never gotten COLMAP to run through dense reconstruction, due to CUDA weirdness even with a spanking-new RTX 2070 / Ubuntu 20.04… any hints?
It just worked, so I didn’t have the curse/blessing of figuring anything out.
I’m mostly interested in the dense reconstruction part of COLMAP; it tends to perform better than MVE in many cases, and I think it’s due to the pixel-wise view selection approach.
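For anyone unfamiliar with the term: the core idea is that instead of matching every pixel of a reference image against one fixed set of source images, each pixel independently favors the source views that actually see it well. A toy sketch (illustrative only; COLMAP's actual scheme from Schönberger et al. is probabilistic and far more involved):

```python
# Toy illustration of per-pixel view selection: for each pixel,
# keep the k source views with the lowest matching cost, rather
# than one global view set for the whole reference image.

def select_views_per_pixel(costs, k=2):
    """costs: {view_id: matching cost} for one pixel; lower is better."""
    ranked = sorted(costs, key=costs.get)
    return ranked[:k]

# A pixel occluded in viewB/viewD naturally ends up matched only
# against the views that see it cleanly.
pixel_costs = {"viewA": 0.12, "viewB": 0.55, "viewC": 0.08, "viewD": 0.31}
print(select_views_per_pixel(pixel_costs))  # ['viewC', 'viewA']
```

This is why it holds up in difficult scenes: occluded or oblique views simply get high costs at the affected pixels and drop out of the selection there, without discarding those images globally.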
Yeah, the testing I did a couple of years ago indicated the dense reconstruction is superior to MVE.
What’s interesting, however, is that both programs try to be as generic as possible (achieve decent reconstructions in a variety of scenes). We have a bit of a luxury with drones, because we can often make certain assumptions about the scene (e.g. almost-nadir, evenly spaced camera shots, etc.), and I think there’s space to improve some of the algorithms to take advantage of these assumptions (just like we did for mvs-texturing by adding a nadir weight parameter), both in terms of speed and quality.
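The nadir-weight idea above can be sketched in a few lines. This is a hypothetical illustration of the concept, not the actual mvs-texturing parameter: boost the contribution of views looking close to straight down, with an exponent controlling how sharply oblique views are penalized.

```python
import math

def nadir_weight(view_dir, power=4.0):
    """Illustrative nadir weight (hypothetical, not mvs-texturing's
    actual implementation): favor views whose viewing direction is
    close to the nadir direction (0, 0, -1)."""
    n = math.sqrt(sum(c * c for c in view_dir))
    vx, vy, vz = (c / n for c in view_dir)
    # Cosine of the angle between the view direction and nadir.
    cos_a = -vz
    # Clamp so upward-looking views get zero weight.
    return max(0.0, cos_a) ** power

print(nadir_weight((0.0, 0.0, -1.0)))  # 1.0 for a perfectly nadir view
print(nadir_weight((1.0, 0.0, -1.0)))  # 0.25 for a 45-degree oblique view
```

In a texturing data term, a weight like this would multiply each view’s score during face-view assignment, so near-nadir drone shots win ties over oblique ones.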
Just as long as it doesn’t harm my new project: OpenVegMap:
(HT Kristin Bott for the fantastic photos of this mutant awesome cauliflower.)
Or my older project, OpenSproMap:
We should publish these along with the banana dataset and get the entire fresh produce aisle on the map.
Yup. I have a vegetable photo collector on the team now. Stand by. We’ll have all the datasets fit to roast.