With large datasets processed in ODM, the SfM portion of the toolchain eventually becomes the predominant bottleneck. This is expected, and its effect is nicely reduced by the incremental reconstruction technique described in Incremental reconstruction algorithm — OpenSfM 0.4.0 documentation.
Nevertheless, I wanted to understand the effect better with a long-running 8k+ image dataset. We statically fix bundle_interval, which is the main knob on the exponential increase in SfM runtime. It defaults to 100, meaning the global check in the incremental approach gets triggered every 100 images. In other words, images are added to the reconstruction one at a time with local matching and orientation, and then every 100 images the whole pose model is adjusted to make all the cameras globally consistent. Incremental reconstruction in this case is a local/global approach unique to OpenSfM, which borrows from video SLAM approaches in order to achieve global-like accuracy with something closer to local SfM efficiency.
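To make the local/global cadence concrete, here is a minimal sketch of that loop. This is illustrative pseudocode, not OpenSfM's actual implementation; the function names are hypothetical stand-ins for the local pose-estimation and global bundle-adjustment steps.

```python
# Illustrative sketch of the local/global incremental loop.
# add_image_locally and global_bundle_adjust are hypothetical stand-ins,
# not OpenSfM's real API.
BUNDLE_INTERVAL = 100  # corresponds to the bundle_interval setting


def add_image_locally(reconstruction, image):
    # Stand-in for local matching + orientation of one new image.
    reconstruction.append(image)


def global_bundle_adjust(reconstruction):
    # Stand-in for the global adjustment over all cameras and points.
    global_bundle_adjust.calls += 1


global_bundle_adjust.calls = 0


def reconstruct(images):
    reconstruction = []
    for i, image in enumerate(images, start=1):
        # Local step: cheap, touches only the new image and its neighbors.
        add_image_locally(reconstruction, image)
        if i % BUNDLE_INTERVAL == 0:
            # Global step: expensive, touches the whole model. This is
            # the step whose cost grows as the reconstruction grows.
            global_bundle_adjust(reconstruction)
    return reconstruction


reconstruct(range(350))
print(global_bundle_adjust.calls)  # 350 images -> 3 global adjustments
```

The key point is that the expensive global step runs only once per bundle_interval images, while every other image pays just the local cost.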
We can see when this 100th step happens in the logs, e.g.:
docker logs unruffled_dhawan | grep "naive"

...
2022-04-07 04:48:23,187 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
2022-04-07 11:07:46,208 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
2022-04-07 16:48:51,795 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.
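Extracting the timestamps from those grepped lines gives the gap between consecutive 100-image global adjustments. A small sketch of how I could parse them (using the three log lines above as sample data):

```python
from datetime import datetime

# The three "naive 3D-3D alignment" lines grepped from the docker logs above.
log_lines = [
    "2022-04-07 04:48:23,187 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.",
    "2022-04-07 11:07:46,208 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.",
    "2022-04-07 16:48:51,795 INFO: Shots and/or GCPs are well-conditioned. Using naive 3D-3D alignment.",
]

# The first 23 characters of each line are the timestamp.
timestamps = [
    datetime.strptime(line[:23], "%Y-%m-%d %H:%M:%S,%f") for line in log_lines
]

# Hours elapsed between consecutive global bundle adjustments.
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(timestamps, timestamps[1:])]
print([round(g, 2) for g in gaps])  # -> [6.32, 5.68]
```

Plotting these gaps (or the cumulative times) over the run is what produces the curves below.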
If I plot this sequence of additions against time for the first 3500 images, I get what I expect: an exponential curve:
But at around image 3600, it gets pretty interesting:
This is a pleasant surprise. What kind of magic is this?