Are GPU reconstructions less precise than CPU ones?

I have uploaded the same dataset to two different WebODM servers: one with lots of CPU cores (128), and the other with fewer cores (16) but a good GPU (an A100), to see which one would run faster. On both systems, the reconstruction settings were set to 3D model, which is my goal.

The GPU run took 1:20; the CPU run took almost twice as long at 2:40. Clearly the GPU reconstruction is the winner here. However, visually the CPU-derived one looks better aligned as a 3D model than the GPU one (below are the links for both datasets). I haven't done any quantification yet, but I'd like to hear people's thoughts: has anyone encountered something like this, or does anyone have suggestions?

GPU derived 3D model
CPU only 3D model


It seems like CPU feature detection and extraction might be more robust and lead to a better sparse/dense cloud, so you may have to crank some parameters up on the GPU run to match that. There should still be a time benefit, I'd think.
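As a concrete starting point for "cranking parameters up": `--feature-quality` and `--min-num-features` are existing ODM options that control how many features get detected, so they are reasonable knobs to try on the GPU node. The specific values below are only guesses to experiment with, not tuned recommendations:

```shell
# Sketch of a parameter bump to try on the GPU server.
# --feature-quality and --min-num-features are real ODM options;
# the values here are assumptions to start experimenting from.
FLAGS="--feature-quality ultra --min-num-features 16000"

# In WebODM these would be entered in the task's options;
# on the command line they'd be appended to the odm invocation.
echo "$FLAGS"
```

Raising these will cost some of the GPU's time advantage, but it's an easy way to test whether the misalignment comes from a thinner feature set on the GPU pipeline.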
