Do a quick search here on the forum for similar posts. In summary, the underlying library is the issue, and it is a heavy lift to rewrite. If I had to guess, Metashape is resizing your images for you to fit into GPU memory, but that is pure supposition without access to the source code.
You can do the same (resize the images yourself), upgrade your card, or let it fall back to CPU compute. The good news: what you'd be saving by not paying for Metashape could go into a better graphics card, which would pay for itself over a pretty short period.
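If you want to try the resize route before uploading, something like this minimal sketch works (Pillow-based; the paths and the 2048 px target are placeholders, pick whatever fits your card's memory):

```python
# Downscale every JPEG in a folder so its longest side is at most MAX_DIM.
# Paths and the 2048 px target are placeholders -- adjust for your dataset
# and GPU memory. Requires Pillow: pip install Pillow
from pathlib import Path
from PIL import Image

SRC = Path("dataset/images")    # originals (hypothetical path)
DST = Path("dataset/resized")   # resized copies go here
MAX_DIM = 2048                  # longest side after resizing

DST.mkdir(parents=True, exist_ok=True)
for img_path in SRC.glob("*.jpg"):
    with Image.open(img_path) as im:
        exif = im.info.get("exif")        # keep EXIF (GPS, focal length) for photogrammetry
        im.thumbnail((MAX_DIM, MAX_DIM))  # shrinks in place, preserves aspect ratio
        if exif:
            im.save(DST / img_path.name, quality=95, exif=exif)
        else:
            im.save(DST / img_path.name, quality=95)
```

Keeping the EXIF block matters: without GPS and focal length metadata the reconstruction loses georeferencing and camera hints.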
If you want to use the ultra setting for feature extraction, WebODM will fall back to the CPU, but that only affects the feature extraction phase.
The GPU will still be used during point cloud creation, and that cuts processing time by quite a bit.
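For reference, here is roughly how those two phases split when driving the ODM engine directly (a sketch only: the host path and project name are placeholders, and I'm assuming the Docker GPU image; `--feature-quality` and `--pc-quality` are the ODM options controlling the two stages):

```python
# Sketch: run ODM with feature extraction on ultra (which falls back to CPU)
# while the point cloud stages still benefit from the GPU image.
# Host path and project name are hypothetical.
import subprocess

cmd = [
    "docker", "run", "--rm", "--gpus", "all",
    "-v", "/home/user/datasets:/datasets",      # hypothetical host path
    "opendronemap/odm:gpu",
    "--project-path", "/datasets", "project",   # hypothetical project name
    "--feature-quality", "ultra",  # feature extraction phase
    "--pc-quality", "high",        # point cloud densification phase
]
subprocess.run(cmd, check=True)
```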
I was also concerned about not being able to run feature extraction on the GPU, but after some benchmarking I found two things: the CPU often detects more features than the GPU, and the bulk of the time savings comes from point cloud creation, which still runs on the GPU and is not limited by the card's memory.
So if you need a higher quality setting for feature extraction, use it. For some datasets I clearly prefer running feature extraction on ultra, especially with overlap below 75%. Based on the benchmarking I've been doing, the fact that the GPU is not used there is hardly a setback.
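If you want to sanity-check the "CPU finds more features" observation on your own images without a full pipeline run, a rough sketch like the one below works. To be clear, this is not ODM's extractor (that lives in OpenSfM); it just illustrates counting detected features per image, and the image path is a placeholder:

```python
# Rough illustration of comparing feature counts: detect SIFT keypoints on a
# few images and report counts and timing. Not ODM's pipeline -- it only shows
# the kind of per-image numbers worth comparing between runs.
# Requires opencv-python >= 4.4 (SIFT is in the main module since then).
import time
from pathlib import Path
import cv2

IMAGES = sorted(Path("dataset/resized").glob("*.jpg"))[:5]  # hypothetical path
sift = cv2.SIFT_create()

for img_path in IMAGES:
    gray = cv2.imread(str(img_path), cv2.IMREAD_GRAYSCALE)
    t0 = time.perf_counter()
    keypoints = sift.detect(gray, None)
    dt = time.perf_counter() - t0
    print(f"{img_path.name}: {len(keypoints)} features in {dt:.2f}s")
```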
The point cloud densification and pc-geometric stages are still done on the GPU with no memory issues.