Image does not fit in GPU memory

I am getting this message:

[INFO] Photo dimensions for feature extraction: 8192px
[INFO] CUDA drivers detected
[WARNING] Image size (8192x5460px) would not fit in GPU memory, falling back to CPU

I understand the message, but am surprised to get it, since in Metashape I am able to align photos using my GPU. I have a GeForce RTX 3080 with 8 GB of memory.

1 Like

Do a quick search here on the forum for similar posts. In summary, the underlying library is the issue, and it is a heavy lift to rewrite. If I had to guess, Metashape is resizing your images for you so they fit into GPU memory, but that is pure supposition without access to the source code.

You can do the same (resize the images), upgrade your card, or let it fall back to CPU compute. The good news: what you save on a Metashape license could go toward a better graphics card, amortized over a pretty short period of time.
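As a quick illustration of the resize option: a minimal sketch that computes new dimensions fitting within a maximum side length while preserving aspect ratio. The 4096 px limit below is a made-up example for demonstration, not a documented GPU threshold.

```python
def fit_within(width: int, height: int, max_side: int) -> tuple[int, int]:
    """Scale (width, height) down so the longer side is at most max_side,
    preserving aspect ratio. Returns the original size if it already fits."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already small enough, no resize needed
    scale = max_side / longest
    return round(width * scale), round(height * scale)

# The 8192x5460 images from the log, shrunk to a hypothetical 4096 px limit:
print(fit_within(8192, 5460, 4096))  # -> (4096, 2730)
```

You would then feed these dimensions to whatever resizing tool you prefer before uploading the dataset.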

2 Likes

Choose a more reasonable --feature-quality setting (do not use ultra).

2 Likes

OK thank you. I will use a lower feature quality.

2 Likes

I heard that you need 8 GB for 20 MP images, so I got myself a card with 12 GB. That works well; I will try the limit some time.
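A back-of-envelope check of the "8 GB for 20 MP" rule of thumb. The bytes-per-pixel figure below is a rough assumption about the extractor's aggregate working buffers, chosen only to show how such an estimate is computed; real usage depends on the implementation.

```python
def est_gpu_mem_gb(megapixels: float, bytes_per_pixel: float = 400.0) -> float:
    """Very rough working-memory estimate for GPU feature extraction.

    bytes_per_pixel is an assumed aggregate cost (image pyramid plus
    intermediate buffers), not a measured value.
    """
    return megapixels * 1e6 * bytes_per_pixel / 1024**3

# Under this assumption, ~20 MP lands near the 8 GB figure mentioned above:
print(f"{est_gpu_mem_gb(20):.1f} GB")
```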

2 Likes

FYI, I tried --feature-quality set to high, but the images still did not fit in GPU memory. Setting it to medium worked.
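As a rough illustration of why medium can fit where high does not: a sketch assuming each --feature-quality step below ultra halves the image's longest side. That mapping is an assumption about the internals, not documented behavior.

```python
def feature_extraction_size(width: int, height: int, quality: str) -> tuple[int, int]:
    """Downscale per an assumed quality -> halving-steps mapping (assumption)."""
    steps = {"ultra": 0, "high": 1, "medium": 2, "low": 3, "lowest": 4}[quality]
    factor = 2 ** steps
    return width // factor, height // factor

# Pixel counts for the 8192x5460 images discussed in this thread:
for q in ("ultra", "high", "medium"):
    w, h = feature_extraction_size(8192, 5460, q)
    print(f"{q:>6}: {w}x{h} = {w * h / 1e6:.1f} MP")
```

Each step cuts the pixel count to a quarter, so the drop from high to medium is substantial.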

1 Like

What image size were you talking about here, where it doesn’t help with matching?

1 Like

Images were 8192x5460 px JPEGs, each 16.7 MB.

1 Like

If you want to use ultra for feature extraction, WebODM will fall back to CPU, but that only affects the feature-extraction phase.
The GPU will still be used during point cloud creation, which cuts processing time considerably.
I was also concerned about not being able to run feature extraction on the GPU, but after some benchmarking I found that the CPU often detects more features than the GPU. The overall time saving happens during point cloud creation, which runs on the GPU and is not limited by the card's memory.
So if you need a higher quality setting for feature extraction, use it. For some datasets I clearly prefer running feature extraction on ultra, especially with overlaps below 75%. Based on the benchmarking I have been doing, losing the GPU for that phase is hardly any setback.
Point cloud densification and the pc-geometric stage still run on the GPU with no memory issues.

4 Likes

@shiva Thanks for the clarification, that is very helpful!

2 Likes

Oh, I missed that it was about matching. No, my card isn’t even enough for that.

2 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.