For some months now I have been making fantastic maps with WebODM, and I am very happy with this software.
I used other tools before, but being able to process as many images as I like on my own hardware is fantastic.
After reading about it repeatedly in the forum, my question is:
which graphics card would you recommend for CUDA computing?
At the moment I am looking at an NVIDIA GTX 1650 with 4 GB VRAM or a GTX 1660 with 6 GB VRAM.
The graphics card would be installed in this machine:
CPU: Ryzen 7 5700G
RAM: 32 GB DDR4
Storage: 256 GB NVMe SSD plus a 2 TB HDD
OS: Ubuntu 20.04 LTS (kernel 5.15)
WebODM is running in a Docker container.
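In case it matters for replies: from what I have read, GPU passthrough into the container can be checked roughly like this, assuming the NVIDIA driver and the nvidia-container-toolkit package are already installed (the CUDA image tag below is just an example, any recent tag should do):

```shell
# First check that the host driver sees the card at all.
nvidia-smi

# Then check that Docker can pass the GPU through to a container.
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu20.04 nvidia-smi
```

If the second command prints the same GPU table as the first, the container side should be ready for CUDA workloads.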
The drone is a DJI Phantom 3 Pro, so the image resolution is 4000x3000 pixels, but this may change in the future.
Short background: the computations take place off-grid. I live in Uruguay, South America, a good bit away from civilization.
The aerial maps are used for permaculture earthworks and general land management.
This means energy consumption is a concern, and computing for several days in a row is rarely, if ever, possible.
Some of the bigger maps I have made, with around 2500 images, take 70-90 hours or more, depending on the settings and on whether I put the swapfile on the NVMe or the HDD, since 32 GB of RAM isn't that much.
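For reference, the swapfile switch between drives is nothing special; this is a sketch of how I set it up on the NVMe (the size and path are my own arbitrary choices, not anything WebODM requires):

```shell
# Create a 64 GB swapfile on the NVMe-backed filesystem
# (size and path are just examples).
sudo fallocate -l 64G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify it is active.
swapon --show
```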
Will a GeForce GTX 1650 with 4 GB VRAM shorten the overall processing time?
Would I gain any advantage from having 6 GB of VRAM?
Comparing performance per watt and overall power consumption, I would clearly prefer the GTX 1650. The computer is not connected to a screen, so the graphics card would have no use other than CUDA computing. The other application I plan to use is Meshroom, but as far as my research goes, it should be fine with either 4 or 6 GB of VRAM.
In this post:
it is mentioned that the image size determines how much VRAM is needed.
In that thread, one person (MarkoM) reports occasional failures at 4000 px resolution with 8 GB VRAM, while somebody else (Gordon) successfully processes 4032 px images with 4 GB VRAM.
In other posts, people with 6 and 8 GB VRAM also report failures.
It seems that CUDA usage is generally not trouble-free, though I hope there is a silent majority for whom it just works. "Out of memory" errors in particular seem to be frequent.
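To get a feel for why resolution matters, here is a back-of-envelope sketch (my own rough arithmetic, not anything from the ODM docs): the raw decoded footprint of an image grows linearly with the pixel count, and the actual VRAM used during GPU processing is some implementation-dependent multiple of that.

```python
def raw_image_bytes(width, height, channels=3, bytes_per_channel=1):
    """Raw decoded size of one image in bytes (8-bit RGB by default)."""
    return width * height * channels * bytes_per_channel

# Phantom 3 Pro images: 4000 x 3000 px
p3 = raw_image_bytes(4000, 3000)
print(f"4000x3000 RGB: {p3 / 1024**2:.1f} MiB raw")  # ~34.3 MiB

# A hypothetical 20 MP upgrade (5472 x 3648 px, typical 1" sensor)
p20 = raw_image_bytes(5472, 3648)
print(f"5472x3648 RGB: {p20 / 1024**2:.1f} MiB raw")
print(f"scale factor: {p20 / p3:.2f}x")
```

So moving to a 20 MP drone would mean roughly 1.7x the per-image memory pressure, which is part of why I hesitate to buy the smallest card.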
If it helps, I would also be open to going for a GeForce RTX 2060 with 8 GB of VRAM, but that is a whole lot more electricity for 2 GB more VRAM, and the CUDA performance difference between the 1660 Super and the 2060 is not decisive.
That is why I thought I would put up a new post to gather thoughts from other users, especially people who successfully use CUDA to shorten their processing time.
Looking forward to hearing from the community!