Does the 'would not fit in GPU memory' issue have a fix yet?

There was a post back in February about the GPU pipeline having some issues, and I wondered if there is a known fix yet, as this is causing slowdowns when processing larger image sets.

I'm getting the issue even with 12MP images, where it falls back to using the CPU. This is on a card with 8GB of dedicated VRAM, which seems a bit off.

2 Likes

Maybe! We have made a lot of changes since February.

Try updating to the latest release, process again, and report back? :pray:

1 Like

It seems to be back to where it was before it stopped working completely, i.e., for me it will only use the GPU for feature extraction if I reduce my M2P 20MP images to about 8MP.
I really don't understand why a 12-15MB image won't fit in 4GB of VRAM.
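The only explanation I can come up with is that the file size on disk isn't what matters: the extractor has to work on decoded pixels, and (as far as I understand it) a SIFT-style feature extractor keeps a whole stack of blurred and difference-of-Gaussian images in VRAM at once. A rough back-of-envelope sketch, where the level counts and the initial 2x upsample are my own guesses rather than what ODM actually does:

```python
# Rough estimate of the working set of a SIFT-style GPU extractor.
# All the constants below are assumptions for illustration, not ODM's
# actual implementation details.

def pyramid_vram_gb(megapixels,
                    bytes_per_pixel=4,    # float32 grayscale
                    gaussian_levels=6,    # blurred images per octave (guess)
                    dog_levels=5,         # difference-of-Gaussian images (guess)
                    upsample_first=True): # some SIFT variants 2x-upsample octave 0
    pixels = megapixels * 1e6
    if upsample_first:
        pixels *= 4  # doubling width and height quadruples the pixel count
    per_level = pixels * bytes_per_pixel
    octave0 = per_level * (gaussian_levels + dog_levels)
    # Each further octave is 1/4 the size, so the whole pyramid is
    # roughly 4/3 of octave 0 (geometric series).
    return octave0 * 4 / 3 / 1e9

for mp in (8, 12, 20):
    print(f"{mp} MP -> ~{pyramid_vram_gb(mp):.1f} GB")
```

With those guesses, 8MP comes out under 2GB but 20MP comes out over 4GB, which would at least be consistent with what I'm seeing.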

1 Like

Just an observation: I measured the maximum GPU RAM consumption to be 7.3GB at Ultra quality with 20MPx images.
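In case anyone wants to reproduce that kind of measurement on their own card, something along these lines works (a rough sketch, assuming an NVIDIA GPU with nvidia-smi on the PATH; run it alongside the task and stop it with Ctrl+C):

```python
# Poll nvidia-smi once per second and keep the peak reported value.
import subprocess
import time

peak_mib = 0
try:
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"],
            text=True)
        used_mib = max(int(v) for v in out.split())  # max across GPUs
        peak_mib = max(peak_mib, used_mib)
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Peak GPU memory observed: {peak_mib} MiB (~{peak_mib / 1024:.1f} GiB)")
```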

3 Likes

Was that during feature extraction or the densify point cloud stage?

1 Like

During feature extraction. The later stages used less than 1GB of RAM.

1 Like

I run Colmap for some stuff and it uses the GPU without problems. So why can Colmap do it and not ODM?

Maybe ODM and Colmap use different libraries?

This is what I think is the case. I used COLMAP and Meshroom before WebODM. I get better overall results with WebODM, and I love all the features and the fact that it's open source, which is why I have stuck with it, but both of the other two would chug along on the GPU with no problems.

COLMAP uses a different implementation (which, by the way, cannot be used freely for commercial purposes; see colmap/LICENSE at dev · colmap/colmap · GitHub). Although yes, it would be interesting to see how it differs.

3 Likes

Ever heard of theft? :wink:

Not something I would personally advocate for.

3 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.