Good question. I don’t think so. For example: I have a 1660 and a 2060 (10 and 25 points on the test, respectively), but I don’t believe the 2060 is 2.5X faster, at least not on the images I render.
What's your KeyShot Benchmark score?
KeyShot Viewer 11.3.3 Benchmark Result
CPU result: (11th Gen Intel® Core™ i7-11700 @ 2.50GHz - Threads: 16) 1.80
GPU result: (NVIDIA GeForce RTX 3080 Ti with Driver: 528.49) 94.50
KeyShot Viewer 11.3.3 Benchmark Result
CPU result: (11th Gen Intel® Core™ i9-11900K @ 3.50GHz - Threads: 16) 1.57
GPU result: (NVIDIA RTX A4000 with Driver: 473.47) 50.96
You shouldn’t buy a Quadro RTX 4000 if you do CUDA renders. Actually, if you have a 3080 Ti, buy a second one, connect them with NVLink, and you’ll have a score of around 190 and 24GB of memory to use.
EDIT: It seems NVLink was only available on the 3090/3090 Ti, I see now.
There is a reason Nvidia disabled the NVLink functionality on their new cards, and it’s not one that benefits us: if you want to kick a 4090’s *ss in GPU renders, you buy two 3090 Tis with an NVLink bridge and you get more speed plus a sweet 48GB of memory to use. Nvidia just realised nobody would buy Quadros anymore, so there’s no way to connect 4090s with NVLink.
The Quadros are nice since you can link them and they’re single-slot. But if you have to link four of them to get the speed, and you still end up with less memory than 2x 3090 for example, why would you?
I would buy two of the same second-hand 3090s (or Tis) and use NVLink. That’s cheaper than one 4090 and it will give you 48GB of VRAM. You just need a case/mainboard where they fit. The best solution is water cooling, but as long as there is some space between the cards I think it will be OK.
I don’t really agree with @Morgan, since the CPU only sends your scene to the GPU, and that’s a task that happens just once per render. That upload will be a bit quicker with a faster CPU and memory, but for the CUDA calculations themselves it doesn’t matter. You also don’t need both cards in really fast PCIe x16 slots, since it’s not like gaming, where textures/geometry move in and out of the GPU constantly.
That’s also the limitation of GPU rendering: you are really bound by the memory of the GPU, since there is no way to swap textures in and out mid-render; everything has to be on the card before the render starts.
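To get a feel for that limit, here’s a rough back-of-the-envelope sketch in Python. The texture list and sizes are made-up assumptions, just for illustration:

```python
# Rough estimate of whether a scene's textures fit in VRAM.
# All sizes below are hypothetical examples, not from a real scene.

def texture_bytes(width, height, channels=4, bytes_per_channel=1, mipmaps=True):
    """Uncompressed footprint of one texture; mipmaps add roughly 1/3 extra."""
    base = width * height * channels * bytes_per_channel
    return int(base * 4 / 3) if mipmaps else base

# Hypothetical scene: six 4K maps and two 8K maps.
textures = [(4096, 4096)] * 6 + [(8192, 8192)] * 2

total_gb = sum(texture_bytes(w, h) for w, h in textures) / 1024**3
vram_gb = 6  # e.g. a GTX 1660 or RTX 2060

print(f"Estimated texture footprint: {total_gb:.2f} GB of {vram_gb} GB VRAM")
```

Geometry, the framebuffer and the renderer itself need room too, so in practice you want comfortable headroom below the card’s limit.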
@wayne.heim can I ask how you fit those in a case? I’ve been puzzling over how to fit my old 2080 Ti alongside a 3090, but I’m out of ideas.
Hello everybody. Does NVLink work with any RTX card? Does the amount of memory available just determine the file size that can be generated (large renders of 50,000px, for example), or does it improve speed too? Will two 3080s improve rendering speed, or just allow the use of larger files?
2x 3080 will double your render speed, if you render with the GPU of course. NVLink doesn’t work on all cards and was killed off with the 40xx series. I looked at some lists of NVLink support online, but I don’t dare say whether the 3080 supports it; some articles state that Nvidia dropped NVLink support on the 3080 cards, but that doesn’t mean a lot. The 3090 and 3090 Ti do support it, so if rendering is your thing, 2x 3090 Ti will bring you 48GB of VRAM, which is nice, and speed-wise you will be faster than a single 4090 as well, judging by the number of CUDA cores (minus a bit, since the 4090 has higher clock speeds).
For speed it doesn’t make a difference whether you have NVLink or not; for memory it does. The nice thing with NVLink is that programs that support it (most render tools) combine the memory, so you can hold double the amount of textures before your GPU runs out of memory. And that’s basically the biggest downside of rendering on the GPU, I think: unlike with the CPU, all your textures have to fit into the GPU’s memory if you want to be able to render them.
So basically 2x 3090 (or more) is really nice if you need a lot of VRAM, especially since there is no RTX GPU with 48GB for the ‘consumer’ market yet. And since the 4090 no longer supports NVLink, I guess they want to push the rendering crowd to the Quadro series.
Just be sure the software you use supports it; KeyShot does, I think (it’s in the manual). For gaming it’s of no use, about the same as SLI, which was basically of not much use either.
I did the test with the Porsche scene. My scores were 10 with the GTX 1660 and 25 with the RTX 2060 (2.5x higher). The render time was 31 seconds with the 2060 and 51 seconds with the 1660 (1.65x faster), so the score gives an idea but doesn’t reflect real-world results.
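Just to spell out that arithmetic (these are the numbers from my test above):

```python
# Benchmark score ratio vs. measured render-time ratio (Porsche scene).
score_1660, score_2060 = 10, 25
time_1660, time_2060 = 51, 31  # render times in seconds

print(f"Score ratio:    {score_2060 / score_1660:.2f}x")  # 2.50x
print(f"Measured ratio: {time_1660 / time_2060:.2f}x")    # ~1.65x
```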
That’s the point, Oscar: more memory available means more pixels to work with, but I’m not sure it improves speed. I remember the case of SLI, where the speed gain in games was only about 30%. Could it be that when rendering with KS, doubling the number of CUDA cores and doubling the memory, while the cores themselves keep the same speed, still increases the rendering speed?
The 1660 has 1536 CUDA cores and the 2060 has 1920 CUDA cores, but if you have both cards in the PC you have 3456 CUDA cores to render with.
So if possible you can just put them both in your PC and use them both to render; no NVLink is needed just to render with multiple GPUs. I had 2x 1070, and later 1x 1070 plus 1x 2080 Ti, in my PC. The amount of available memory will simply be the memory of the card with the least memory. In your case both have 6GB, if I’m right, so that stays 6GB.
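If you’re curious what’s actually in your machine, a small Python sketch like this lists the cards and applies that ‘smallest card wins’ memory rule. It assumes the nvidia-ml-py package (imported as pynvml) is installed:

```python
# List installed NVIDIA GPUs and their VRAM via NVML.
# Assumes: pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
vram_gb = []
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    total = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
    vram_gb.append(total)
    print(f"GPU {i}: {name}, {total:.0f} GB VRAM")
pynvml.nvmlShutdown()

# Without NVLink pooling, the usable scene size is bounded by the
# smallest card, e.g. a 2080 Ti (11 GB) + 3090 (24 GB) -> 11 GB usable.
if vram_gb:
    print(f"Effective VRAM for multi-GPU rendering: {min(vram_gb):.0f} GB")
```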
To get back to your last reply: clock speeds of the GPUs aside, the more CUDA cores, the faster the rendering process will go.
I now have a 3090 in my PC and a 2080 Ti that I want IN my PC, but it doesn’t fit. The 3090 has around 10,000 CUDA cores, but the 4,000-something of the 2080 Ti would be welcome as well. Enabling both would reduce the memory I can use, though, since the 2080 Ti has 11GB and the 3090 has 24GB.
If you have the space in your case, I would put them both in, but they can be hard to cool if they sit really close to one another. Although 2x 1070 never gave me a problem packed really close together, the newer cards get really hot.
And if you want to do a good render test, take a somewhat complicated scene or let it run to quite a lot of samples. Transferring the textures and geometry to the GPU also costs time, and with really short render times that overhead has more impact than when you render for longer. That’s also the reason why I’m no fan of the KeyShot benchmark: I can overclock my card and get a high score in it, but if I actually use the card that way in KeyShot, the graphics drivers instantly crash.
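To illustrate with a toy model (all numbers here are made-up assumptions, not measurements): each render pays a fixed upload cost before sampling starts, so short renders understate a faster card’s real speedup.

```python
# Toy model: render time = fixed upload overhead + sampling time.
def render_time(samples, samples_per_sec, upload_sec=15.0):
    return upload_sec + samples / samples_per_sec

slow_rate, fast_rate = 100.0, 250.0  # fast card samples 2.5x quicker

for samples in (1_000, 100_000):
    t_slow = render_time(samples, slow_rate)
    t_fast = render_time(samples, fast_rate)
    print(f"{samples:>7} samples: speedup {t_slow / t_fast:.2f}x")
# ->   1000 samples: speedup 1.32x (upload dominates)
# -> 100000 samples: speedup 2.45x (approaches the true 2.5x)
```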
Thanks for the answer. I hadn’t thought of looking at the number of CUDA cores; you’re right. I’m thinking of changing my 1660 for a 3090, mainly because of the animations I want to create with KS.
You won’t regret it. I got a 3090 from a friend who ‘needed’ a 4090, and it made a huge difference compared with my 1070 + 2080 Ti combination. Here in the Netherlands you can’t get new 3090s or 3090 Tis anymore, but maybe where you live there’s also a computer forum where members sell their ‘old’ hardware. At least, I trust those more than eBay or a general second-hand marketplace.
And if there’s a chance you might want a second one somewhere in the future, be aware the cards differ quite a lot in size. My 3090, for example, is 2 slots and a lot smaller than my 2080 Ti; smaller cards are easier to fit in a computer case.
Seriously thinking about using the HAF 922 which is unused here…
CPU: Intel Xeon W-2135
GPU: NVIDIA Titan Xp, Quadro P2000