Advice on GPU: 2060 or 3060?

I’ve got a Ryzen 3700X, 64 GB of RAM and an SSD.

I’m wondering about the Nvidia RTX 2060 and RTX 3060, both with 12 GB of RAM.

Which one would fit best? I don’t want to waste money.

I almost want to ask what your goal would be.

I had a similar question a short time ago.
Seems like GPU processing at this stage is not really speeding up overall processing time by much.
Gordon made a few very relevant points here: NVIDIA CUDA graphics card recommendation

Before making that post I searched the forum extensively about CUDA and WebODM. Gordon uses a GeForce GTX 1650 with 4 GB of memory, others an NVIDIA RTX 2060 with 8 GB, and some even Quadro cards with more than 24 GB.
Having read through many of those threads (mostly people having trouble getting WebODM to use their CUDA-capable cards), and finally Gordon’s replies, I rethought my decision to invest in a CUDA card. GPU processing does not even seem to improve quality.
The speed differences between cards seem to be negligible, since ODM can use any card from CUDA compute capability 6.1 upwards. I take this from posts where people mentioned running WebODM with a GeForce 1000-series card.

Looking at the specs, an RTX 2060 has ~2,000 CUDA cores and the RTX 3060 around 3,500. But in CUDA benchmarks the difference between the two cards is small, and whether you have compute capability 7.5 (RTX 2060) or 8.6 (RTX 3060) seems irrelevant. The amount of memory on the card seems to be more relevant, but if both have 12 GB, buying a 3060 looks like overspending.
Ray tracing and the tensor cores (DLSS) of the RTX generations do not seem to be used at all, yet those are the actual advantages of the later GeForce generations.
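
If you want to check what a card in hand actually reports, here is a rough Python sketch (just my own illustration, not from the ODM docs; it assumes a reasonably recent driver whose nvidia-smi understands the compute_cap query field, and the 6.1 minimum is only what I took from those forum posts):

```python
# Rough sketch: ask nvidia-smi for name, VRAM and compute capability of each
# NVIDIA GPU and compare against the 6.1 minimum mentioned in this thread.
# Assumes a driver new enough that nvidia-smi supports the "compute_cap" field.
import subprocess

MIN_COMPUTE_CAP = (6, 1)  # assumption taken from forum posts, not official docs

def main() -> None:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,compute_cap",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    for line in out.splitlines():
        name, mem, cap = [field.strip() for field in line.split(",")]
        major, minor = (int(x) for x in cap.split("."))
        usable = (major, minor) >= MIN_COMPUTE_CAP
        print(f"{name}: {mem} VRAM, compute capability {cap} "
              f"-> {'should be usable' if usable else 'below 6.1'}")

if __name__ == "__main__":
    main()
```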

Finally, I am now upgrading RAM and looking for a good second-hand offer on a Ryzen 9 3950X or Ryzen 9 5950X, since I already own an AM4 motherboard. More cores and more RAM seem to help the overall process much more than a CUDA card.
Since RAM can only be expanded so far, I also got a PCIe 4.0 NVMe SSD to hold some more virtual memory.
You write that you have a Ryzen 3700X, so it should be easy for you to upgrade the CPU.
The Ryzen 9 3950X should not cost more than a second-hand RTX 3060.

But again, I do not know the exact circumstances behind your question.
If the graphics card also serves other purposes, my reply would look different.

And … I am by no means an expert with WebODM or graphics cards.
Most of the statements here are “seems like” statements.
I am going out on a limb with some of them, partly in the hope that somebody will come along and correct me on the subject :wink:
Because I would also like to have a good argument for getting an RTX 2060 or 3060.
But at this point, I can’t rationally justify to myself buying such a card for WebODM.

If anybody has information pointing another way, please share it :cowboy_hat_face:

Hope this can at least serve as a pointer in the right direction for you.

Cheers!
Shiva


According to other sources, it makes a huge difference in one part of the process.

That was the article that put me on the journey.
It mentions that only parts of the process are accelerated, and after investigating how much that affects the whole pipeline, the gain seems rather insignificant to me :blush:
Especially if I take into account the extra time to get it working and to keep it working (updates or different datasets seem to break GPU processing occasionally), plus the cost of the card.
By now I have spent more time researching the subject than it would have taken me to earn the money for a graphics card :smirk:
But I could not find evidence that the process as a whole is significantly shortened. Gordon made that point in the quoted post, and I have also seen other people in the forum reporting similar results from GPU-aided processing.

Also, it does not seem to be trouble-free: there are people having trouble getting it to work at all, and there is also this thread from somebody who owns an NVIDIA A100, which is a really, really powerful GPU. The processing became much faster, but in the end the poster did not get the quality of output you would expect.
The NVIDIA A100 sells here for upwards of 15,000 USD.

If you or somebody else can show me results pointing the other way, I am still very open to and interested in equipping the rendering PC with a good graphics card :partying_face:

But in the end, it’s up to you and your needs. In another post, somebody plans to put an NVIDIA GeForce RTX 3090 into his machine. That fits his situation because he also uses it for other software.
For the size of projects I am running (500-2,500 images at 12-24 megapixels), and only for WebODM, it seems unreasonable to spend the money and possibly several hours of setup to shorten the process by a mere 10-20%.

Yet again, that is the point I came to. If it excites somebody to make the process work on a GPU and the extra costs and time investments do not matter, please do not be discouraged by my post.
By now I appreciate slightly slower but more reliable and less troublesome processing more than squeezing a couple of hours out of a 90-hour process.

Again there is a “but”: if new information on the subject emerges, I am all ears :cowboy_hat_face:

Edit: Corrected and extended the reply a bit.


Don’t forget that you may still have to avoid using even an awesome graphics card if tasks fail due to race-condition errors. I’ve encountered that problem with quite a few datasets. Usually it’s the .npz feature files that aren’t all ready before they are needed for pair matching.
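
If a task dies there, one quick way to see which images never got their features written is something like this rough sketch (my own ad-hoc check, not part of ODM; it assumes the default project layout with an images folder and opensfm/features, and feature files named <image>.features.npz, which may differ between versions):

```python
# Rough sketch: list images in an ODM project that have no OpenSfM feature
# file yet. Assumes the default layout <project>/images and
# <project>/opensfm/features with one "<image name>.features.npz" per image
# (naming may vary between ODM versions).
import sys
from pathlib import Path

def main(project_dir: str) -> None:
    project = Path(project_dir)
    images = sorted(p.name for p in (project / "images").iterdir() if p.is_file())
    features_dir = project / "opensfm" / "features"
    missing = [img for img in images
               if not (features_dir / f"{img}.features.npz").exists()]
    if missing:
        print(f"{len(missing)} image(s) without a feature file:")
        for img in missing:
            print("  ", img)
    else:
        print("Every image has a feature file.")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")
```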

So what’s not an awesome GPU?

One that does feature extraction slowly :slightly_smiling_face:

It’s only a problem with some of my M2P datasets, but I haven’t tried to figure out whether there are any common factors that lead to that failure. Since I usually use ultra feature extraction on full-size images, my GPU doesn’t get used for this stage very often.
