WebODM can handle images of this size just fine. However, if you are processing on your GPU, the GPU needs enough memory to handle them. Apparently your GPU's memory is not adequate for the 20-megapixel images from your Phantom 4 Pro. Saijin suggested downsizing your images to see if 16 MP or 12 MP would work. If you need to use the full-size images, you could tell WebODM not to use the GPU; of course, this will slow some portions of the processing. I think the flag is --no-gpu.
Ok… but I find it strange, because in Pix4D, 3Dsurvey and Metashape my GPU's memory handles the 20-megapixel images from my Phantom 4 Pro perfectly well, and quickly.
The message I mentioned earlier refers to a small project with only 68 images, and my GPU is an MSI GeForce RTX 3060 Ti VENTUS 2X OC V1 8G.
Can I reduce the size of the images in the WebODM processing settings? Is there an option for that? If not, what methodology do you use to downsize them so that the images do not lose quality?
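(For anyone following along: if you'd rather downsize before uploading, one way is to compute new dimensions that preserve the aspect ratio and then resample with a high-quality filter. This is just a sketch, not WebODM's own method; the function name and the 5472×3648 Phantom 4 Pro frame size are mine.)

```python
import math

def downsized_dims(width, height, target_mp):
    """Compute dimensions at or under target_mp megapixels,
    preserving the original aspect ratio."""
    current = width * height
    target = target_mp * 1_000_000
    if current <= target:
        return width, height  # already small enough
    scale = math.sqrt(target / current)
    return int(width * scale), int(height * scale)

# A Phantom 4 Pro frame is 5472 x 3648 (~20 MP); fit it under 16 MP.
w, h = downsized_dims(5472, 3648, 16)
```

You would then feed those dimensions to a resampler such as Pillow's `Image.resize(..., Image.LANCZOS)`, which does a good job of avoiding visible quality loss when shrinking.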
We honestly don’t know whether Pix4D or Metashape downsample images to fit within the CUDA texture dimensions, since they’re all closed source. And it isn’t primarily a memory issue: there are finite image dimensions that can be properly accelerated within CUDA, and if the input images don’t fit, CUDA can’t accelerate them.
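To illustrate the kind of check involved (a sketch only: the 16384 limit below is an assumed example value, since the real per-device limit is reported by the driver, e.g. via CUDA's `cudaGetDeviceProperties` in `cudaDeviceProp.maxTexture2D`):

```python
# Assumed illustrative limit; actual maximum 2D texture dimensions
# vary by GPU and must be queried from the device at runtime.
MAX_TEXTURE_DIM = 16384

def fits_gpu_texture(width, height, max_dim=MAX_TEXTURE_DIM):
    """True if both image dimensions fit within the (assumed) 2D texture limit."""
    return width <= max_dim and height <= max_dim

# A raw 5472 x 3648 Phantom 4 Pro frame fits this example limit,
# but a processing pipeline's intermediate images may be larger.
print(fits_gpu_texture(5472, 3648))
```

If an input (or intermediate) image exceeds the device limit, the pipeline either has to downsample it or fall back to the CPU path, which matches the behavior described above.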