🧵 What GPU do you guys use?
Anonymous at Wed, 1 Nov 2023 07:31:09 UTC No. 962799
I've been running integrated graphics on my ryzen 5 5600g and I keep getting crashes during renders, I assume the 512mb of vram keeps running out or something
anyway I figure I need a GPU, Im looking at either an rx 7600, rx 6650 xt, or an a770 if I find a good deal, something like that
curious to hear what you guys are running
Anonymous at Wed, 1 Nov 2023 07:39:35 UTC No. 962800
You either buy a used 3090 or you buy a new 4090. Those are your two options. Self /thread.
Anonymous at Wed, 1 Nov 2023 07:45:33 UTC No. 962801
>>962800
im too poor for nvidia
my budget is ~200$
Anonymous at Wed, 1 Nov 2023 09:13:02 UTC No. 962806
>>962801
Another anon here, run away from AMD and I am an AMD fag, buy a 3060 with 12GB, nothing less.
Anonymous at Wed, 1 Nov 2023 09:15:58 UTC No. 962807
>>962806
yeah looking into it further, from a quick google/reddit search it seems like people unanimously recommend nvidia for the cuda cores
I didnt realise it made that big of a difference
Im looking at used 3060/3060 ti/3070 now, theyre actually more reasonably priced than I thought theyd be
Anonymous at Wed, 1 Nov 2023 09:20:46 UTC No. 962808
>>962806
follow up question: whats better, a 3060 with 12gb vram or a 3060ti/3070 with 8gb vram?
Anonymous at Wed, 1 Nov 2023 09:36:20 UTC No. 962813
>>962808
Get the 12GB 3060 and you can run your uncensored AI girlfriend locally on your machine. The popular 13 billion parameter LLMs are surprisingly good at roleplay and will fit entirely in 12GB of VRAM.
And uhh, yeah, it's a good card for 3D too I guess.
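For anyone wondering whether a 13B model actually fits, here's a back-of-the-envelope VRAM estimate (a rough sketch of the usual sizing rule, assuming 4-bit quantization; real usage varies with context length and backend overhead):

```python
def llm_vram_gb(params_billion: float, bits_per_weight: int,
                overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate: quantized weights plus a flat allowance
    for the KV cache and runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes / 1024**3 + overhead_gb

# 13B at 4-bit: ~7.6 GB, fits a 12 GB card with headroom
print(round(llm_vram_gb(13, 4), 1))
# 13B at fp16: ~25.7 GB, hence the used-3090 meme
print(round(llm_vram_gb(13, 16), 1))
```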
Anonymous at Wed, 1 Nov 2023 09:41:55 UTC No. 962815
>>962808
You need all the VRAM you can get otherwise your shit just breaks. 3060ti will be faster but it doesn't matter if you can't load the scene. Only buy NVIDIA.
Anonymous at Wed, 1 Nov 2023 09:43:01 UTC No. 962816
>>962813
>he can't fit the 20B MLewd model
ngmi
Anonymous at Wed, 1 Nov 2023 09:48:27 UTC No. 962817
>>962816
>MLewd 20B
vanillashit, I sleep
t. MLewdBoros 13B enjoyer
Anonymous at Wed, 1 Nov 2023 10:28:01 UTC No. 962825
>>962808
VRAM matters.
Anonymous at Wed, 1 Nov 2023 11:00:13 UTC No. 962827
>7900 XTX
>MBA reference card
First one was the defective vapor chamber, second one has been fine. It's nice and I don't have to worry about a damn thing
Anonymous at Wed, 1 Nov 2023 11:10:52 UTC No. 962828
If you only have 200 you arent going to make it in this
Anonymous at Wed, 1 Nov 2023 11:18:09 UTC No. 962829
>>962813
>AI
ngmi
Anonymous at Wed, 1 Nov 2023 17:20:11 UTC No. 962863
>>962828
die
Anonymous at Thu, 2 Nov 2023 01:48:14 UTC No. 962903
>>962827
>Navi 31 GPU
Wait until you get pump-out. This die hates living. How loud is it, by the way? Just wondering if the MBA design is decent.
🗑️ Anonymous at Thu, 2 Nov 2023 03:14:17 UTC No. 962906
>>962827
whyd you choose nvidia over AMD?
Anonymous at Thu, 2 Nov 2023 03:16:00 UTC No. 962907
>>962827
whyd you choose AMD over Nvidia?
Anonymous at Thu, 2 Nov 2023 05:55:04 UTC No. 962917
>>962799
Pretty much you do what >>962800 said.
>>962806
You COULD get away with AMD if you are mostly using Blender, but for anything else you are fucked and have to get Nvidia, cuz all that shit just runs on CUDA or CPU.
RTX 3060 is the bare minimum you should get, 12GB is a decent amount of VRAM, 8GB just wont do if you have a "medium" sized scene, when I had a 2070 4 years ago I could barely push renders for my finals. No issues now with a 3090, try to buy one if possible. Otherwise, get the 3060, use it until you can save some more and get a 3090, used again.
Without enough RAM, be it video or system, you are just fucked on doing 3d stuff, period.
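To put rough numbers on why 8GB "just wont do": uncompressed texture memory alone adds up fast (a ballpark sketch; real usage depends on compression, the renderer, and geometry/BVH overhead):

```python
def texture_vram_mb(width: int, height: int, channels: int = 4,
                    bytes_per_channel: int = 1, mipmaps: bool = True) -> float:
    """Uncompressed GPU texture footprint; a full mip chain adds ~1/3 extra."""
    base = width * height * channels * bytes_per_channel
    if mipmaps:
        base += base // 3
    return base / 1024**2

one_4k = texture_vram_mb(4096, 4096)   # ~85 MB per 4K RGBA map
print(round(one_4k))
# a "medium" scene with 100 such maps already overflows an 8 GB card
print(round(one_4k * 100 / 1024, 1), "GB")
```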
Anonymous at Thu, 2 Nov 2023 07:17:00 UTC No. 962920
>>962806
>>962813
>>962917
got a used EVGA 3060 12gb because Im poor, thanks bros
Anonymous at Thu, 2 Nov 2023 08:22:23 UTC No. 962922
>>962920
>Evga
Evga is the most prone to breaking. ONLY get msi
Anonymous at Thu, 2 Nov 2023 09:28:37 UTC No. 962927
>>962922
well its too late now I already bought it
also I always heard EVGA was one of the best gpu manufacturers and that MSI was one of the cheapest
I had an MSI AMD card in the past that died on me, and I watched some northwest repair vids where he shits on MSI, so I just assumed theyre not great, but maybe thats just for AMD
theres still 500 days left on the warranty
itll prob be fine
youre literally the first person Ive ever seen say EVGA is bad
Anonymous at Thu, 2 Nov 2023 09:36:26 UTC No. 962928
>>962922
>>962927
>northwest repair has 2 vids repairing the exact card I bought
what the FUCK
should I resell it and buy something else?
Anonymous at Fri, 3 Nov 2023 00:58:15 UTC No. 962981
>>962922
>Evga bad, buy shit brand that made 4090 bricks
Lmao
>>962927
Dont worry anon, when you make heavy 3D shit, anything will break down eventually. EVGA has the best RMA at least, so if it dies and the dude who sold it to you has the receipt, they will do the RMA service.
I've owned cards from pretty much all brands, never had issues except one time with a cheap h110 msi board that died, but it was a pos anyways. Nowadays I've kinda stuck with Gigabyte because of mobo features/price.
Anonymous at Fri, 3 Nov 2023 01:08:01 UTC No. 962982
>>962981
>EVGA has the best RMA at least
Dude, EVGA is so shit they completely exited the nvidia graphics card business. They are DONE
Anonymous at Fri, 3 Nov 2023 03:55:23 UTC No. 962989
Any reason to upgrade from my 1070ti 12gb or is it just a meme
Anonymous at Fri, 3 Nov 2023 04:29:12 UTC No. 962990
>>962989
The 1070ti is an 8gb-only card
Anonymous at Fri, 3 Nov 2023 05:56:15 UTC No. 963000
1660S
just weuourks
Anonymous at Fri, 3 Nov 2023 07:10:54 UTC No. 963007
>>962982
>EVGA bad because they left nvidia graphics card business
They left because Nvidia were fucking assholes to do business with, thats why they stopped working with them and didn't do newer GPUs, but they STILL do RMA service on their products.
Even other board partners were threatening to leave Nvidia because they got fucked badly, mostly during the whole crypto/coof years.
Anonymous at Fri, 3 Nov 2023 07:18:45 UTC No. 963009
>>962799
I have a 3080 10gb, I have the money to upgrade but it does everything I want so I see no reason to.
Unfortunately, as much as I hate ngreedia I have to agree with the other anons recommending the 3060 12gb, having had AMD cards in the past the software support is just not there.
Get a used one off ebay or hardwareswap, that should keep you in budget
Anonymous at Fri, 3 Nov 2023 09:15:49 UTC No. 963023
>>962903
It's fine I promise. I'd rather not do any water cooling, my workload doesn't justify it. Also loops are a bitch to maintain so I'd simply rather not. I can afford to run noisy fans.
The MBA is loud under maximum load near its temp limits, but the real problem is coil whine. Holy fuck, I can hear the coil whine over the fans going full speed on this. I don't recommend the MBA model for 2 reasons
1) the chances of a defective vapor chamber are about 1 in 10, and I can attest this is true
2) coil whine and fan noise are better on other models
I went for it because Yeston was my backup if this second MBA was defective. Almost the smallest design, and 2 8-pins was a must. Yeston was the best model for my size and pin needs. Gigabyte has one too, but fuck Gigabyte, it looks cheaper than my 750ti sc.
>>962907
I've been Nvidia free since 2012. Please understand this is both a cost and autistic reason. I like reference cards more and blower fans more. I just want a fucking rectangle with no gamer bullshit at a more affordable price point. I know my needs. Yes Nvidia can do my workloads in slightly less time but the ROI is better for AMD for my purposes. I don't use CUDA or RT in my workloads nor do they benefit from it to any significant degree.
Personally I want to get the W7900 because like I said I'm autistic and I like blower fans and rectangles and the pro line is basically what I want. Yes Nvidia offers it too but again price matters to me, and I don't benefit from the extras they offer so AMD is my best fit
For work I'm actually making a number of remote workstations and we're debating on the W7500 or the W7600 because single slot cards like that are nice. Yeah 8x is gay but for our use case it's more than enough. Need 4 of the fuckers though and that adds up. Thankfully gpu passthrough proxmox and me forcing their hand makes this a bit easier than normal. Also a threadripper board with 6 pcie slots all at 16x gen4 makes this a breeze. Hard part will be the storage.
Anonymous at Fri, 3 Nov 2023 09:29:53 UTC No. 963024
NVidia is obviously treating consumer GPUs like a legacy business and AMD is too incompetent to be competition. I would switch to CPU rendering and invest in a decent one, otherwise in a few years it will be buying tokens and paying $0.99 for every render.
Anonymous at Fri, 3 Nov 2023 12:23:40 UTC No. 963030
>>963023
>I don't use CUDA or RT in my workloads nor do they benefit from it to any significant degree.
oh so you're a beg then and using a liquid cooled AMD gpu and the icing on the cake is you're writing a wall of text as well. Great. Just great.
Anonymous at Fri, 3 Nov 2023 13:11:51 UTC No. 963031
>>963024
>NVidia is obviously treating consumer GPUs like a legacy business and AMD is too incompetent to be competition. I would switch to CPU rendering and invest in a decent one, otherwise in a few years it will be buying tokens and paying $0.99 for every render.
AI has not been proven to be actually profitable in the arts.
Anonymous at Fri, 3 Nov 2023 13:39:04 UTC No. 963032
>>963031
lol
lmao
Anonymous at Fri, 3 Nov 2023 13:49:46 UTC No. 963036
>>963032
Its true.
It hasnt turned a profit. The only use is in medicine (identifying afflictions) and the military (tax funded). You may say - but anon, all those movies and tv shows coming out, surely they must rely on ai and be profitable. This isnt true. Not only is streaming media not profitable for anyone, but fewer and fewer movies are being made each year now.
Anonymous at Fri, 3 Nov 2023 14:27:11 UTC No. 963040
>>963036
>movies and tv
what are you 70 years old?
nobody here is arguing its used in "tv", obviously its not
>The only use is in medicine (identifying afflictions) and the military (tax funded)
AI in medicine was something hyped like a decade ago and turned out to be a commercial failure, what are you even talking about old man
people use it all the time in their day to day lives
my friend uses gpt4 to draft scripts for coding
he and a lot of other people Ive seen also just use it like a search engine for general queries
I know two people who use chatgpt for law, specifically tax codes and criminal law
students notoriously are using it to write their essays, I was shocked to catch my sister using it for a college essay
just less than a month ago "AI" (really it should be called ML but whatever) was used to digitally "unwrap" and reveal the partial text of a Herculaneum scroll, a breakthrough in the classics
of course if we're going to talk ML, which all "AI" is, OCR has been used for decades now for a million different things, facial recognition in phones/surveillance, image classification in general is huge
in 3d fx it would be used as a minor tool in a workflow, either for texturing, making hdris, photoshop assets for design work, etc
in terms of profitability in art its mostly in independent work since obviously the tech is new
Ive seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, i.e. theyre profitable
the ai voice synthesis tech is popping off recently
theres a handful of indie artists on twitter making money off ai work
ai is great for creating in-between frames for animation
Anonymous at Fri, 3 Nov 2023 16:12:43 UTC No. 963056
AI is mostly great at taking dozens of gigabytes of disk space
Anonymous at Fri, 3 Nov 2023 20:21:49 UTC No. 963065
>>963030
Not everyone needs that shit, especially for increased cost. If I can save money I'd get an equivalent I will.
I work in game dev and knowing the programmers I'm dealing with I need options. I keep some arc cards around just to make sure we're thorough.
Anonymous at Fri, 3 Nov 2023 20:23:13 UTC No. 963066
>>963030
>>963065
Forgot to add that I clearly stated I don't use liquid cooled cards and prefer blower cards. Fuck liquid cooling it's more effort and maintenance than it's worth
Anonymous at Fri, 3 Nov 2023 21:29:10 UTC No. 963072
>>963040
so it hasnt been profitable in the arts.
Your friend wrote some bad, derivative, STOLEN code that breaks its original license
>Ive seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, ie theyre profitable
you are a joke
Anonymous at Fri, 3 Nov 2023 23:48:47 UTC No. 963086
>>963072
if you dont see the potential you are retarded
Anonymous at Fri, 3 Nov 2023 23:59:48 UTC No. 963088
>>963086
sorry bud, but now you are pivoting to POTENTIAL.
You want to do something, do it right - create a generative script that respects copyright and doesn't just rip from the entire internet (including all of github, including specifically licensed code, for example GPL). Make something that isn't susceptible to bias. Make something that can be done via an understandable, debuggable script, and not a 50,000 unit cluster outputting biased works or, in the case of ChatGPT, extremely neutered non-answers that just rip information from the web and don't give credit, even for code examples that require credit and attribution.
Anonymous at Sat, 4 Nov 2023 00:59:37 UTC No. 963091
>>963088
I gave you about 10 different real world use cases, half of them anecdotes from people I know irl, and you ignore them all and think Im pivoting
I gave you examples of where its currently profitable and you also ignored those
NFT grifts would be another one
not saying these are particularly admirable use cases but theyre certainly profitable
and like I said its just another tool to integrate into a preexisting workflow, not an end-all be-all
large language models dont just rip shit from the internet, even though they are often trained on internet data they give an original presentation every time
youre a little too old and cranky to understand, thats okay
Anonymous at Sat, 4 Nov 2023 01:13:07 UTC No. 963092
>>963091
>NFT
>youtube
>stolen gpt prompts
get out of here young man
Anonymous at Sat, 4 Nov 2023 01:22:42 UTC No. 963093
>>963088
>>963091
look, Im trolling a bit but Ill be fair and grant you that in the arts its not a big player yet, but your following posts were utterly retarded and betrayed that you dont know jack shit about how ML is used right now IRL and thats what my posts were mostly arguing against
Where we disagree on the first point is that you tacitly believe that ML is not going anywhere when thats clearly not the case
Paradoxically, however, you ALSO tacitly believe that AI, if it is to succeed, MUST be this magic-bullet do-it-all that completely replaces every cg software
Im just saying that a tool that allows you to create images from a prompt in any style you specify will be extremely useful
They obviously still have a certain look to them, but theyve gotten WAY better at realism in recent years, to the point that even Ive been fooled at first glance by some AI gen images
Also the in between frames thing is probably the best use case in the arts for the near future
for animation those frames are usually outsourced and take thousands of man hours, being able to do it with AI lowers the barrier of entry to animation substantially
Anonymous at Sat, 4 Nov 2023 01:37:55 UTC No. 963094
>>963093
>Utterly retarded
>Clinging to NFT, youtube, and stolen code from gpl repos from prompting
I dont even know what to say man
Anonymous at Sat, 4 Nov 2023 01:38:36 UTC No. 963095
>>963094
you have poor reading comprehension
Anonymous at Sat, 4 Nov 2023 23:05:41 UTC No. 963246
>>962808
>3060 12gb in case you want the extra vram to render stuff in 3d programs or ai
>3070 for the bus speed to play vidya
I would rather go for a 40xx 8gb card instead if your answer is vidya. 30xx cards only have dlss 2.0. 40xx cards have dlss 3.0 with ai frame generation that boosts your framerate in new games. Pick your poison
Anonymous at Sat, 4 Nov 2023 23:13:04 UTC No. 963247
>>963246
40xx cards have power connectors that are so busted they had to recall them and are CURRENTLY actively remaking them
Anonymous at Sun, 5 Nov 2023 01:28:23 UTC No. 963258
>>963247
Only heard of that issue mostly with 4090s and with some ti versions of the 4080/4070. In my opinion, a standard 4070 is the best gpu on the market right now. Decent power consumption, plays everything with memetracing, and has 4k capabilities. You are also saved from the coil whine headache.
Anonymous at Sun, 5 Nov 2023 01:29:47 UTC No. 963259
Anonymous at Sun, 5 Nov 2023 01:35:26 UTC No. 963260
>>963259
Yeah, I know. Im just making things clear. Btw I too have a 3060 12gb, and it runs blender nicely. Since I also play video games, I had the same dilemma as OP.
Anonymous at Sun, 5 Nov 2023 05:06:09 UTC No. 963276
>>963258
>You are also saved from the coil whine headache.
Ha, joke's on you Nvidia, I have tinnitus.
Anonymous at Sun, 5 Nov 2023 10:36:18 UTC No. 963298
>>963023
>Personally I want to get the W7900
>He's falling for the "workstation" GPU scam
>And wants to use an AMD "Prosumer" card
You just proved here that you are clueless. All /3/ anons know that for personal use, you just buy the usual gaming card, because it works the same as the pro ones without being scammed out of 2K for some "tech support" that you will never get/use. Leave that shit to multi-million enterprises that buy heaps of these for servers; that's the reason they make them, nothing else.
Anonymous at Sun, 5 Nov 2023 10:43:50 UTC No. 963300
>>963246
>Picks DLSS, a script meant for vidya engines, as the selling point for a /3/ software workflow that doesn't even use DLSS at all.
You don't know shit about development faggot, go where you belong.
>>>/v/
Anonymous at Sun, 5 Nov 2023 14:52:59 UTC No. 963310
>>963298
I want one because I like blower fans. Nothing more. You're looking too deep into this anon. I'm just very irresponsible with money
Anonymous at Sun, 5 Nov 2023 16:07:05 UTC No. 963314
>>962799
the consumer-grade nvidia card with the largest amount of vram you can get is the only valid answer.
With enterprise grade cards you pay out the ass for 24/7 specialist support which you will never make use of as a solo.
Anything else is gaymer poorfag cope
You also get to dunk on /v/irgin gpulets in your free time. Win/win.
Anonymous at Tue, 7 Nov 2023 13:14:57 UTC No. 963498
>>963355
You don't need a 4080. Get a 4060.
Anonymous at Wed, 8 Nov 2023 06:41:30 UTC No. 963588
>>963498
if he can afford it why stop him
poorfag mindset
Anonymous at Wed, 8 Nov 2023 13:46:25 UTC No. 963619
>>963588
he doesnt need it and the 4080 is an old card now. Wait for the 50 series and get a 4060 in the meanwhile
Anonymous at Tue, 21 Nov 2023 09:49:24 UTC No. 964901
>>962799
>ryzen integrated graphics
if vram alone is the problem you can adjust the max vram of the system in the bios, either set it at 8gb or leave it dynamic so that the system can define it on the fly, try this before selling your house to buy an nvidia scam card
Anonymous at Tue, 21 Nov 2023 11:32:46 UTC No. 964903
I would recommend that you get the 6650, same performance as the 7600 and cheaper by a lot (at least where i live). Since it is older, it has good support on linux if you wish to use it
Anonymous at Wed, 29 Nov 2023 19:43:21 UTC No. 965627
>>965626
Does your country have a computer chain or do you only have bestbuy to choose from? There's tons of stock at Canada Computers, though for some reason Bestbuy is completely sold out, despite the prices being higher.
Anonymous at Wed, 29 Nov 2023 19:52:48 UTC No. 965629
>>965627
I'm in the US. I'm aiming to get the Founders Edition and there are two places I know of that officially sell them, which are Best Buy and Nvidia's own store.
Anonymous at Wed, 29 Nov 2023 20:28:53 UTC No. 965633
>>963246
Frame generation is a fucking joke
Anonymous at Wed, 29 Nov 2023 20:32:35 UTC No. 965634
>>965629
>Founders Edition
Why? I mean I guess it's a bit cheaper, but it also runs a bit hotter under load. And if your card is under load for long periods of time, you want it to be as chill as possible.
Anonymous at Tue, 5 Dec 2023 03:34:41 UTC No. 966170
For all who are considering getting a 40 series RTX that isn't the 4090, just keep waiting: the Super series was leaked and will be released soon enough. The 4070Ti Super seems like it will come with 16gb of VRAM, and considering that one does come with the double encoder chip, it's the best one to get when it's released.
Anonymous at Tue, 5 Dec 2023 03:36:28 UTC No. 966171
>>966170
>encoding chip
so you're a streamer and a gamer. Get out.
Anonymous at Wed, 6 Dec 2023 06:15:57 UTC No. 966271
>>962813
>13b model
>good
lol, lmao even
Anonymous at Wed, 6 Dec 2023 07:10:37 UTC No. 966281
>>962922
fuckin faggot you are
Anonymous at Wed, 6 Dec 2023 07:13:34 UTC No. 966283
>>963056
It's true, most of the time you end up doing more work getting it to not fuck up than anything. It's a glorified filter for kids to use in school projects.
So far most ai use in practical products has just been chinese devs making phone games to steal money.
Anonymous at Wed, 6 Dec 2023 07:14:56 UTC No. 966284
>>963619
This is actually a good take. 4080 was never worth it and just got hobbyists and scalpers to snatch them on a high
Anonymous at Fri, 8 Dec 2023 07:12:10 UTC No. 966490
>>966171
The double encoder also works for rendering, fucking retard
Anonymous at Sat, 9 Dec 2023 06:33:24 UTC No. 966622
>>966170
What about cuda? And is it worth moving to it from a 3060?
Anonymous at Sat, 9 Dec 2023 12:07:58 UTC No. 966647
>>962989
If you CPU render you don't need to upgrade at all.
Even after the card breaks you could literally rebuy the same card used if you wanted, until the model becomes completely incompatible with things.
Anonymous at Sat, 9 Dec 2023 12:08:59 UTC No. 966648
>>963031
Coomers heavily disagree
Anonymous at Sat, 9 Dec 2023 13:35:57 UTC No. 966652
>>966648
cooming is not mainstream, idiot. It's FRINGE and CRINGE
Anonymous at Mon, 11 Dec 2023 17:21:55 UTC No. 966869
>>962799
i've been designing and rendering on my ryzen 7 5800h with onboard graphics and have zero issues. onboard gpus get 512mb dedicated ram and 8gb shared, so ram is not your issue here. pic is a test render of a part i'm working on right now
Anonymous at Mon, 11 Dec 2023 20:22:49 UTC No. 966873
I don't know, I'm building a rig right now and I've got everything but the GPU and the RAM. For the RAM, I know what I'll take, but for the GPU I'm hesitating between the 4080 or just accepting a poor lifestyle for a few months and taking the 4090. Or even waiting for the 4080 Super that's coming out soon. I have no idea.
Anonymous at Mon, 11 Dec 2023 20:54:01 UTC No. 966874
>>966873
4080/90 and a poor lifestyle? lol, i wont even go into their ridiculous cost, but wait till you see your power bill, then you will get a real life poverty reminder.
Anonymous at Tue, 12 Dec 2023 15:05:46 UTC No. 966959
>>966652
Goalpost: moved
Anonymous at Tue, 12 Dec 2023 15:06:47 UTC No. 966960
>>962799
I use a GTX 1080
Anonymous at Tue, 12 Dec 2023 21:16:10 UTC No. 966994
Legit question, has anyone ever seen a USB gpu accelerator? i mean a proper gpu accelerator and not the crappy dvi/vga/hdmi video output adapters. i dont even care about I/O, just the raw processing power. Do these things exist? i remember at some point i've seen some HD video decoder cards for laptops and a couple mini pcie gpus, but very limited in capabilities and power in general. What i'm asking about is something like a google coral AI module but focused on complementing gpu power over usb.
Anonymous at Tue, 12 Dec 2023 21:25:30 UTC No. 966996
>>966994
They're called eGPUs and they suck. Expensive, massive performance tax and your BIOS and OS wont like them.
Anonymous at Tue, 12 Dec 2023 21:32:58 UTC No. 967000
>>966996
i dont mean an eGPU, those are proper GPUs with full connectivity and everything, and they require thunderbolt in order to access the pci bus + you need to connect a monitor to the card in order to use it, and you're still limited by thunderbolt bandwidth. usb doesn't really support this, and that is why i am asking about an accelerator, something that maybe just takes some strain off the gpu. it may be a silly request but i was curious
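A rough bandwidth comparison shows why a USB compute dongle for 3D doesn't really exist (approximate peak link rates from the specs; real throughput is lower and shared with other traffic):

```python
# Approximate peak link bandwidth in GB/s (spec figures, not measured throughput)
links_gb_s = {
    "USB 3.2 Gen 1": 0.6,
    "USB 3.2 Gen 2": 1.25,
    "Thunderbolt 3/4": 5.0,
    "PCIe 4.0 x16": 31.5,
}

# A render accelerator needs scene data streamed to it constantly;
# over USB the link is ~25x slower than the slot a real GPU sits in.
for name, gb_s in links_gb_s.items():
    print(f"{name}: ~{gb_s} GB/s")
print(f"PCIe vs USB Gen 2: {links_gb_s['PCIe 4.0 x16'] / links_gb_s['USB 3.2 Gen 2']:.0f}x")
```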
Anonymous at Tue, 9 Jan 2024 19:40:57 UTC No. 969966
4080 Super soon hopefully.
Anonymous at Tue, 9 Jan 2024 19:42:41 UTC No. 969968
>>962800
Unironically this, anything else is barely better than CPU rendering and not worth the price unless you mainly use it for gaming
Anonymous at Wed, 10 Jan 2024 14:32:39 UTC No. 970047
I guess I'll go for a gigabyte 3060 12gb for my next gpu, to succeed my 1060 6gb, since I'm poor
Anonymous at Sat, 13 Jan 2024 05:06:00 UTC No. 970319
>>962799
if you are a blender user and like to use older versions, take note that cycles in the older versions up to 2.8 does not work with the newer nvidia cards - i found out after getting the 3060 12gb - £100 cheaper than the 6700xt that has 12gb. i wanted the higher vram at the best price and nvidia's rep for multimedia had me. if i knew in advance i would have got the amd card or even settled for the 8gb 6600xt.
Anonymous at Sat, 13 Jan 2024 10:38:36 UTC No. 970333
>>962807
Not so much the cuda cores as the RT cores. Nvidia's OptiX speeds shit up so much it ain't funny. AMD has nothing similar at all.
Anonymous at Sat, 13 Jan 2024 10:42:46 UTC No. 970334
>>963007
It's okay if you want to be a brand cuck, but the writing is on the wall. Especially with the power supplies they released last year only having a 3 year warranty instead of the 10 year warranty that's been standard for as long as they were an Nvidia partner. If they're somehow still in business by the end of the decade I'll be pleasantly surprised but still disappointed.
Anonymous at Sat, 13 Jan 2024 12:38:26 UTC No. 970344
>>970333
The thing about gpu rendering is that you very quickly run out of memory once you start rendering actual production hero stuff. 12gb is only enough for background and 24gb still isnt enough
Anonymous at Mon, 15 Jan 2024 10:05:33 UTC No. 970573
>>962799
>What GPU do you guys use?
one asus proart 4060ti 16gb currently because its the perfect choice for a hobbyist like me! should my demands become higher, i just buy a second one used for a few bucks!
Anonymous at Mon, 15 Jan 2024 20:28:02 UTC No. 970652
I currently use a GTX 1060, but it's past time for me to upgrade. 6GB VRAM may have been enough when I was just starting out with basic stuff but now that I've progressed onto more advanced projects, I'm almost always running out of VRAM now and it's insufferable. Once I upgrade to a beefier GPU with hardware ray tracing acceleration to take full advantage of OptiX, I'll probably take the 1060 and use it to build a cheaper HTPC or something.
Anonymous at Tue, 16 Jan 2024 05:22:45 UTC No. 970712
>>970344
>very quickly run out of memory
This shit is brutal in Blender, either in Cycles or Eevee.
Anonymous at Tue, 16 Jan 2024 05:24:16 UTC No. 970713
>>962799
rtx 3060 12gb, it fucking sucks, too slow and can barely support 5M polys
Anonymous at Wed, 17 Jan 2024 03:01:08 UTC No. 970838
>>970713
you dont need more than 5m poly.
Anonymous at Wed, 17 Jan 2024 05:23:12 UTC No. 970846
I use a laptop with 3050ti
For now, it's enough, but I'm very much a beginner still
Maybe if I actually get somewhere in this hobby I will upgrade once the hardware starts severely limiting me, but for now it's my skill that is limiting me, not the hardware
Although I do remember that it took me basically a whole night to render the donut animation using cycles at 2k 60fps with my gpu, which is why i'm sticking with eevee for now
Anonymous at Wed, 17 Jan 2024 05:27:30 UTC No. 970847
>>970846
although in hindsight it was probably actually a whole night for a 10 second 60fps regular ole full hd animation, 1 minute for a 2k frame in cycles sounds too good to be true with my craptop
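The hindsight arithmetic checks out, assuming roughly an 8-hour overnight render (a back-of-the-envelope sketch):

```python
def per_frame_seconds(clip_seconds: int, fps: int, total_hours: float) -> float:
    """Average seconds per frame when a clip renders in one overnight sitting."""
    frames = clip_seconds * fps
    return total_hours * 3600 / frames

# 10 s at 60 fps = 600 frames; a whole night (~8 h) is ~48 s per Cycles frame,
# which is believable for a laptop 3050 Ti at 1080p
print(per_frame_seconds(10, 60, 8))  # 48.0
```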
Anonymous at Wed, 17 Jan 2024 15:48:47 UTC No. 970882
Radeon HD 6450 paired with AMD FX 6300
Anonymous at Sat, 27 Jan 2024 02:34:15 UTC No. 971991
should i wait for 50 series or get 4070 ti super now? I kind of feel like they are going to announce new exclusive features for the 50 series and then ill have a bum card and wasted a ton of money on a 4070 ti super
Anonymous at Sat, 27 Jan 2024 03:04:17 UTC No. 971997
>>970713
>>970838
> 5 million polys
My characters ass has 5 million polys alone
Anonymous at Sat, 27 Jan 2024 03:10:18 UTC No. 971999
>>971997
prove it, coward!!
Anonymous at Sat, 27 Jan 2024 12:08:08 UTC No. 972016
Breh, I'm making a new rig and I'm hesitating between getting a 3090 with 24gb of VRAM or buying the 4080 Super with 16gb of VRAM that's released in a few days?
The thing is that I'm planning on doing some heavy procedural environment modeling with Houdini so I might need the VRAM but I don't know since I'm not really a tech-fag nor an experienced Houdini user (as in I never joined all the HDAs into a big project, I only did small projects separately).
What to do? The other software I use is Blender, Zbrush and Unreal Engine.
Anonymous at Sat, 27 Jan 2024 13:06:57 UTC No. 972021
>>972016
wait for 50 series to build anything
Anonymous at Sat, 27 Jan 2024 13:10:57 UTC No. 972022
>>972021
I don't have a computer right now cause I broke my laptop so it's pretty much an emergency.
Anonymous at Sat, 27 Jan 2024 14:49:21 UTC No. 972042
>>972022
if you are capable of breaking a laptop you're not capable of building a pc
Anonymous at Wed, 31 Jan 2024 15:30:22 UTC No. 972473
>>970838
>>971997
I found the issue, blender fucking sucks dude
Anonymous at Wed, 31 Jan 2024 15:33:13 UTC No. 972474
>>972016
vram is futureproof since AAA games made by pajeets aren't optimized
Anonymous at Wed, 31 Jan 2024 15:35:50 UTC No. 972475
Is the 7900 XTX good for Blender?
Anonymous at Sun, 18 Feb 2024 15:11:30 UTC No. 974536
>>972475
Seconding this
Is it worth for le
>24gb vram
even if its rendering performance is piss poor compared to nshittia and is comparable to a 4060 Ti at worst and a 3080 at the very best?
Anonymous at Sun, 18 Feb 2024 15:28:48 UTC No. 974539
kill me guys I have a 3080 but it crashes under load so I have to render on CPU
Anonymous at Sat, 23 Mar 2024 11:19:39 UTC No. 978501
>>962799
I use an old GTX 860m. It works great but the laptop shits itself if I try a softbody sim
Anonymous at Sun, 24 Mar 2024 05:01:59 UTC No. 978576
what is better, gigabyte 3060 12gb or a 4060 8gb?
Anonymous at Sun, 24 Mar 2024 07:00:35 UTC No. 978581
>>972016
How big is the price difference where you are? I think the 3090 would be better unless you really need the fancy-pants new features on the 4080 Super. The 4080 Super is technically the more powerful card, but VRAM is VRAM
Anonymous at Sat, 6 Apr 2024 20:29:08 UTC No. 979818
>>978576
better for what? what are you going to use the card for?
Anonymous at Tue, 23 Apr 2024 01:13:52 UTC No. 981355
It's going to be a sad day when the 4090 is no longer king and I lose my big dick energy. Then all I'll have going for me is my personality and it'll truly be over
Anonymous at Tue, 7 May 2024 13:37:33 UTC No. 982696
Memory chips on my GPU are failing, causing the monitor to have massive amounts of artifacts on the screen and some sections to not work at all. This gpu only lasted me a year and a half. What should I do now? I used to be running on dual gpu with a 60 series, and now I had to remove the faulty one, so I'm running on single gpu and my render times have doubled.
Anonymous at Tue, 7 May 2024 19:31:40 UTC No. 982719
>>963619
this, don't get the higher end cards for a gen just before it ends
Anonymous at Tue, 7 May 2024 20:14:07 UTC No. 982737
>>982719
The new cards will only be ~30% better at best.