🧵 Untitled Thread
Anonymous at Sat, 14 Dec 2024 17:46:13 UTC No. 1003402
Where is 3D going in the next few years, /3/? We all know that AI didn't pan out, especially given the amount of surveillance there is.
Anonymous at Mon, 16 Dec 2024 02:14:44 UTC No. 1003493
anyone? Where is /3/ going now?
the chair nerd at Tue, 17 Dec 2024 14:17:54 UTC No. 1003588
>>1003493
>>1003402
Bitch, there's your answer. /3/'s never been anywhere and it's going nowhere.
Anonymous at Fri, 3 Jan 2025 10:20:34 UTC No. 1004585
honestly pmuch everything SideFX is doing seems like the future
also i expect a move away from direct poly modelling and subd modelling toward a more CAD-like approach a la Plasticity
would be cool if we moved away from the UV texturing approach toward something like mesh colors
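to be clear what 'mesh colors' even means, here's a toy sketch in Python (my own illustration, not the actual Mesh Colors paper API): the color samples live on the face itself, a little triangular grid per face with edge/vertex samples shared between neighbors, instead of in a UV-mapped texture, so there's no unwrap step and no seams.

import numpy as np

class MeshColorFace:
    # one triangle storing its own color samples (toy version of mesh colors)
    def __init__(self, resolution):
        self.R = resolution
        # a triangular grid of (R+1)(R+2)/2 RGB samples over the face
        n = (resolution + 1) * (resolution + 2) // 2
        self.samples = np.zeros((n, 3))

    def _index(self, i, j):
        # flatten grid coordinates (i, j) with i + j <= R into the sample array
        return i * (self.R + 1) - i * (i - 1) // 2 + j

    def sample(self, u, v):
        # nearest-sample lookup at barycentric coords (u, v, 1-u-v);
        # a real implementation would interpolate the neighboring samples
        i = int(round(u * self.R))
        j = min(int(round(v * self.R)), self.R - i)
        return self.samples[self._index(i, j)]

face = MeshColorFace(resolution=4)
face.samples[:] = np.random.rand(len(face.samples), 3)   # 'paint' the face
print(face.sample(0.3, 0.5))                              # shading lookup, no UVs involved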
Anonymous at Fri, 3 Jan 2025 14:02:34 UTC No. 1004588
>>1004585
People thought we would do AI neural modelling and texturing and eventually full-length AI movies. Turns out that right now everybody is sick of the clearly apparent AI style, and nobody wants to watch a 2-hour movie (AI or not) anyway. They just want TikTok-length content, which is not profitable or marketable. Additionally, TikTok is literally outlawed in America now.
Anonymous at Fri, 3 Jan 2025 14:41:48 UTC No. 1004590
>>1003402
>We all know that AI didn't pan out,
Bro... you have no idea what's in the pipe. The fact that early ANNs aren't doing anything super useful today doesn't make that genie go away. We know it's possible to generate this stuff now; it's being perfected and it'll arrive as advertised.
Anonymous at Fri, 3 Jan 2025 14:48:31 UTC No. 1004591
>>1004590
It's over, dude. They have already scraped everything in written and video form. It's not getting better. People don't want to watch long format. They only want short format. They only want influencers that match their world views. At the same time, people don't want hallucinations, which can be dangerous physically or financially if you take bad advice.
But just 2 more weeks until we'll all be bored to tears by 2-hour-plus long-format videos, right?
Anonymous at Fri, 3 Jan 2025 15:36:53 UTC No. 1004593
>>1003402
I don't know where 3D is going, but I know where I'm going and it's upwards
Anonymous at Fri, 3 Jan 2025 16:37:34 UTC No. 1004595
A.I.'s current goal is to create content that isn't immediately identifiable as A.I.
Anonymous at Fri, 3 Jan 2025 17:45:22 UTC No. 1004597
>>1004591
>It's not getting better.
You're unfortunately living in fantasy land. Just last month, o3, trained on synthetic training data, didn't regress; instead it made headlines by scoring 88% on ARC-AGI, a test deliberately made to trip up AIs. The discussion about whether this marks the arrival of true Artificial General Intelligence is ongoing in AI circles.
If you think this won't translate to image synthesis as well, you're dreaming. This field is moving forward at a relentless pace.
It'll never run out of training data, as it is now observing the same real world as us through cameras.
From the looks of where things are heading, there is strong reason to believe true ASI will be here before the end of the decade.
The whole 'it has already scraped everything and is going to inbreed' thing is total cope from people who don't actually keep tabs on current developments.
Anonymous at Sat, 4 Jan 2025 01:42:46 UTC No. 1004636
>>1004597
>ASI
Absolute cringe for even mentioning this. Go hallucinate something more, like you do with 99% of AI things that aren't just a clear-cut copy and paste.
Anonymous at Sat, 4 Jan 2025 02:54:09 UTC No. 1004641
>>1004636
Think about it, anon. We know how fast these things grow superintelligent to the point where they're running circles around humans in areas where the
reward function is known; no human will ever beat an AI in Chess or Go, etc.
Can't you see that as soon as the reward function is known for an area, there is an explosion of capability that keeps expanding? (Toy sketch of that self-play dynamic at the end of this post.)
Image synthesis went from generating goofy-looking hallucinations to high levels of coherence in less than a year.
Now people can post absolute slop as a throwaway effort.
Something like this has 100 views: https://www.youtube.com/watch?v=Kv2
Make a video that rivals that using traditional CGI? I'd need several months to even make a legitimate attempt, whereas this was someone's throwaway effort.
Imagine ~2 years from now, when that output will look 10 times better than this and it can probably spit out rigged 3D meshes with persistent coherence.
From our point of view that would be artist ASI realized, because it'd run circles around human artists the same way chess AIs run circles around human players.
LLMs are already writing PhD-level theses that pass examination when mixed in with human ones.
Public-facing models are said to currently have an IQ of around 120, and talking to ChatGPT on topics I'm highly knowledgeable about, I see no reason to question that.
It is a lot smarter than almost everyone I know.
So what metric are you looking at to be so sure this field is regressing or standing still, anon? I feel you say this is 'cringe' because you dislike the implications for us, but going ostrich and pretending not to see what is unfolding before your eyes helps no one.
This happens regardless of how much you keep pretending it doesn't. Don't 'ur cringe' the messenger.
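Here's the toy sketch I mentioned, so the 'known reward function + self-play' point isn't just hand-waving: tabular Q-learning teaching itself Nim (10 stones, take 1-3, taking the last stone wins) with zero human game records. Throwaway Python of mine, nothing to do with how AlphaGo or o3 are actually built:

import random
from collections import defaultdict

N, ACTIONS = 10, (1, 2, 3)
Q = defaultdict(float)                  # Q[(stones_left, action)]
alpha, eps, episodes = 0.5, 0.2, 20000

def best_action(stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(episodes):
    stones, history = N, []             # history holds (state, action); players alternate
    while stones > 0:
        legal = [a for a in ACTIONS if a <= stones]
        a = random.choice(legal) if random.random() < eps else best_action(stones)
        history.append((stones, a))
        stones -= a
    reward = 1.0                        # reward function is known: last mover wins
    for state, action in reversed(history):
        Q[(state, action)] += alpha * (reward - Q[(state, action)])
        reward = -reward                # opposite sign for the other player's moves

# with enough self-play this usually recovers the known optimal strategy
# (leave the opponent a multiple of 4 stones):
print({s: best_action(s) for s in range(1, N + 1)})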
Anonymous at Sat, 4 Jan 2025 03:30:18 UTC No. 1004643
>>1004641
>LLMs are already writing PhD-level theses that pass examination when mixed in with human ones.
>Public-facing models are said to currently have an IQ of around 120, and talking to ChatGPT on topics I'm highly knowledgeable about, I see no reason to question that.
>It is a lot smarter than almost everyone I know.
Copy and pasting doesn't make you smart, especially when you 1) hallucinate like crazy, just making shit up: parameters that never existed in the program despite you insisting they're there, advice that clearly will never work, leading anyone with only a passing knowledge of the topic to hurt themselves physically or financially if they follow what the AI is so confidently saying, and 2) don't know anything about the intricacies or background of what you are talking about.
IQ tests are a metric that applies to humans only
Anonymous at Sat, 4 Jan 2025 04:23:58 UTC No. 1004648
>>1004643
>Copy and pasting doesn't make you smart
They're not copying/pasting anything; that is just a gross misunderstanding of how neural networks operate.
Do you think ANNs could run circles around us in games like Go by copy/pasting moves, in a game that has more board configurations than there are atoms in the known universe?
The way these things do what they do is analogous to the way information is stored in our own neural tissue.
The architecture is simplified, but the idea originates from and mirrors how neurons are interconnected and fire together in living brains.
When an ANN carries out a task, it is a lot closer to being 'smart' in the way you and I are smart than the way any traditional algorithm is 'smart'.
When you ask an ANN to draw an 'X', it remembers what an 'X' looks like much the same way you and I do, and generates a unique, never-before-seen visual representation of whatever is present in the mental construct it conjures up from its learned associations with that thing.
The biggest difference is that, unlike us, it can manipulate millions of pixels in a flash to project the construct it holds in its 'mind', the same way we would be able to if we could connect a screen to display the image we envision when asked to visualize something.
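Since words clearly aren't landing, here's a tiny toy demo of the 'it remembers what an X looks like and draws a new one' point. Just PCA over noisy X images standing in for a learned representation; obviously nothing like a real diffusion model, purely my own illustration:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
base = np.zeros((8, 8))
for i in range(8):
    base[i, i] = base[i, 7 - i] = 1.0            # an 8x8 'X'

# 500 noisy 'memories' of what an X looks like
train = np.array([(base + rng.normal(0, 0.2, base.shape)).ravel() for _ in range(500)])

pca = PCA(n_components=8).fit(train)             # the learned representation
latent = rng.normal(0, 1, 8) * np.sqrt(pca.explained_variance_)
new_x = pca.inverse_transform(latent.reshape(1, -1))[0]   # a generated X

print((new_x.reshape(8, 8) > 0.5).astype(int))   # still clearly an X
print("distance to nearest training image:",
      np.linalg.norm(train - new_x, axis=1).min())        # > 0, i.e. not a copy of anything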
>IQ tests are a metric that applies to humans only
IQ tests make zero claims about how human you are, anon; all they say is what score you get on the test.
Much the same way the high-score screen in Super Mario tells you the score for the level whether it's a man, a machine, or anything in between playing that NES.
Anonymous at Sat, 4 Jan 2025 06:02:41 UTC No. 1004654
>>1004648
get the fuck out of here dude. Like I said, go hallucinate somewhere else
Anonymous at Sat, 4 Jan 2025 06:32:50 UTC No. 1004655
>>1004654
Here you are, failing to find any angle to attack my argument from, opting to spit empty insults in its place.
Do you honestly believe I'm the one who needs to show himself out of the discussion at this point?
Anonymous at Sat, 4 Jan 2025 12:57:28 UTC No. 1004664
>>1004655
....do you have a _hallucination_ problem, yes or no? Do you realize that the things you can solve, i.e. Go, are almost 100% about coming up with completely random moves, which fall well within the realm of _hallucination_, and that other things like artwork only appear to be solved because they have a large factor of your _hallucinations_ built into them? On problems where you can't hallucinate, you just copy and paste, hallucinate anyway, and end up in obvious failure, because your NN has no understanding of what it's actually doing.
Anonymous at Sat, 4 Jan 2025 17:36:07 UTC No. 1004681
>>1004664
>NN has no understanding of what it's actually doing
Absurd claim. If it had no understanding, it would not be able to answer questions coherently, yet it is able to answer even very elaborate questions coherently.
Cheap-to-run models like GPT-4o are excellent brainstorming tools because of how coherently and rapidly they extend and fill in gaps in your own thinking.
Do they occasionally hallucinate something? Yes they do; but so do you and your co-worker when you occasionally hatch a shit plan based on a misconception.
The closed models that cost too much to run (thousands of dollars per prompt) do not have this issue, except far out in the margins.
They currently clear PhD-level tests and successfully do higher mathematics at a level few humans can touch.
What the labs currently do is run an expensive 'teacher model' that generates synthetic data; data that is far above average human level in a given field.
This data is then fed as training data into a 'student model' that is cheap enough to run that it can be public-facing at today's level of available compute.
As compute goes up, as the quality of the information the systems are trained on goes up, and as the neural architecture of the models themselves grows more advanced, these systems scale.
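To make the teacher/student loop concrete, here's a generic toy distillation sketch with sklearn stand-ins. The dataset and models are placeholders of mine, obviously not anyone's actual pipeline:

import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

# 1. an expensive 'teacher' is trained on the original (scarce) real data
X_real, y_real = make_moons(n_samples=200, noise=0.2, random_state=0)
teacher = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_real, y_real)

# 2. the teacher labels a much larger pool of synthetic inputs
X_synth = np.random.uniform(-2, 3, size=(20000, 2))
y_synth = teacher.predict(X_synth)

# 3. a cheap 'student' is trained only on the teacher's synthetic labels
student = make_pipeline(PolynomialFeatures(3), LogisticRegression(max_iter=2000))
student.fit(X_synth, y_synth)

X_test, y_test = make_moons(n_samples=2000, noise=0.2, random_state=1)
print("teacher:", teacher.score(X_test, y_test), "student:", student.score(X_test, y_test))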
Anonymous at Sat, 4 Jan 2025 17:37:36 UTC No. 1004682
>>1004681
>cont
Already, smart and highly educated people who use ever-better AI assistants do more quality work faster; breakthroughs in all areas come at us at a faster rate, and in turn this keeps improving the quality of the available training data.
So even if you want to be maximally pessimistic and believe AI architecture won't meaningfully advance beyond where we are today, this feedback loop of AI improving humans and humans improving the available data for current-generation AI is still there.
But neural architecture isn't static either; improvements happen rapidly there too, so we currently have a strong self-reinforcing feedback loop going that ensures these systems will grow more advanced at a steady pace, seemingly from now until ~singularity.
Once that happens and the AI can create neural architecture more advanced than what it currently has, the curve goes whoosh and any prediction of what happens next flies out the door.
This is why people looking at where we are, already about to hit AGI, are stating that ASI looks to be within reach come the 30's.
I have very ambivalent feelings about AI myself, but this is just the nature of the reality we're now facing.
Anonymous at Sat, 4 Jan 2025 17:48:19 UTC No. 1004683
>>1004682
>he thinks there is no hallucination issue
Stop typing the word 'PhD', too. Having a PhD is fucking meaningless.
Anonymous at Sat, 4 Jan 2025 18:34:52 UTC No. 1004688
>thinks there is no hallucination issue
No, the hallucination issue is real, but it's very overplayed by AI critics. The hallucination rate is on a steady decline as these systems are given space to reflect and error-correct, throwing additional processing at the problem until the information checks out.
The fact that we went from a GPT that hallucinated all the time to one that rarely does, in the span of ~2 years, doesn't give you pause?
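Quick toy sketch of that generate-then-verify idea: a noisy 'proposer' plus a cheap checker, resampling until the answer verifies. Stand-in functions of mine, no real model behind any of it, but it shows why throwing extra compute at verification pushes the error rate down:

import random

def noisy_proposer(a, b, error_rate=0.3):
    # stand-in for a model answering 'a + b' that sometimes hallucinates
    answer = a + b
    if random.random() < error_rate:
        answer += random.choice([-3, -2, -1, 1, 2, 3])
    return answer

def checked_answer(a, b, budget=5):
    # spend extra compute verifying proposals before accepting one
    for _ in range(budget):
        candidate = noisy_proposer(a, b)
        if candidate - b == a:           # cheap independent check of the claim
            return candidate
    return None                          # abstain rather than return a wrong answer

trials = 10000
raw_wrong = sum(noisy_proposer(7, 5) != 12 for _ in range(trials))
checked = [checked_answer(7, 5) for _ in range(trials)]
checked_wrong = sum(c is not None and c != 12 for c in checked)
abstained = sum(c is None for c in checked)
print(f"unchecked wrong: {raw_wrong/trials:.1%}, "
      f"checked wrong: {checked_wrong/trials:.1%}, abstained: {abstained/trials:.2%}")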
>Stop typing the word 'PhD', too. Having a PhD is fucking meaningless.
These are just metrics to gauge how intelligent these systems are, since most people obviously can't pass these exams.
Being able to coherently answer PhD-level questions isn't meaningless at all; it's a clear-cut demonstration of how these systems can now reason coherently at a level that would cost $30-240K to train a suitable human to reach.
These are tests so difficult that most people would be filtered before they even got to sit down and take them.
Another way to put it: in the 2 years since GPT, we already have technology that is smarter than ~90% of humanity.
You have to believe there is something very, very special about the remaining 10% of us to think the train stops here.
The AI will eventually do to you, whatever it is you do, what it did to the Go and Chess players.
Anonymous at Sat, 4 Jan 2025 18:42:23 UTC No. 1004690
>>1004688
Go and Chess are games that favor a computer, much like a calculator can churn out pi to X decimal places near instantly. Stop bringing up meaningless, pointless games. What are you going to bring up next? When they taught a computer to play StarCraft, but enabled cheats on the computer's side, allowed it to take unlimited actions per second, and had fog of war permanently disabled? Get lost.
Anonymous at Sat, 4 Jan 2025 19:26:44 UTC No. 1004695
>>1004690
Go and Chess are just two examples of what we know happens when systems successfully train on quality synthetic training data.
The way machines grow so strong at these games is by playing against themselves in a pressure chamber, generating networks that play at a superhuman level. That is an existing example of what an ASI looks like in a very narrow field.
The same thing will now happen to complex fields that historically didn't favor computers or were utterly inaccessible to them.
Our current-generation systems, primitive compared to ASI-level intelligence, are hitting the threshold where they are sufficiently advanced to bootstrap themselves the same way machines learned Chess and Go: by playing against themselves instead of against humans.
But instead of playing games, they are now having PhD-level conversations with themselves, reaching new conclusions at a rate that isn't limited by two PhD humans running their 20-watt brains, throttled by the speed at which we can talk, type, and reflect, but instead throttled by how much compute and power we throw at the problem. So hook this PhD-level AI up to a data center drawing power from a nuclear reactor and in minutes you are producing years and years of quality discourse on a subject, much the same way a computer can play millions of Go games in a flash.
This is what is meant by synthetic training.
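Back-of-envelope numbers for the 'years of discourse in minutes' claim. Every figure here is my own illustrative assumption, not a measurement of any real system:

human_wpm = 150                       # assumed human speaking rate, words per minute
working_hours_per_year = 8 * 250      # one person's working year
human_words_per_year = human_wpm * 60 * working_hours_per_year

cluster_tokens_per_s = 100_000        # assumed aggregate throughput of a large cluster
words_per_token = 0.75                # common rule-of-thumb conversion
cluster_words_per_min = cluster_tokens_per_s * words_per_token * 60

minutes = human_words_per_year / cluster_words_per_min
print(f"one human-year of talking is ~{human_words_per_year:,} words; "
      f"the cluster emits that in ~{minutes:.0f} minutes")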
Anonymous at Sat, 4 Jan 2025 19:32:41 UTC No. 1004696
>>1004695
Go and Chess are solvable old games, basically nested matrices, and by their nature they favor the computer, just like division, multiplication, addition, subtraction, and all kinds of other operations favor a calculator. Are you done?
Anonymous at Sat, 4 Jan 2025 19:43:47 UTC No. 1004697
>>1004696
You hyper-focus on the example of Go and Chess and ignore the latter part, which you have no answer to.
The existence of GPT o3 already disproves your argument that computers can't engage with non-solvable, open-ended tasks.
What I'm talking about above is not hypothetical; it is the state of things that already exist.
Anonymous at Sat, 4 Jan 2025 19:47:02 UTC No. 1004698
>>1004697
I'm not hyper-focusing. Everything else you post is just not even worth replying to. But we'll see you in 2 _more_ years, bud, since you are so very, very smart.
Anonymous at Sat, 4 Jan 2025 19:59:00 UTC No. 1004699
>>1004698
Me being smart or not has nothing to do with this; I'm simply someone following these developments more closely than most.
I therefore have a better perspective on what's around the corner for us, so when I encounter people like you coping and downplaying what's happening this massively, I feel compelled to offer the counterpoint.
Not for you, not for me, but for everyone else who reads this and benefits from being informed about where things are moving.
There are a lot of people like you who can't read the room on this and pretend it's raining; that is going to end up smacking us all in the face one day.
Anonymous at Sat, 4 Jan 2025 20:01:00 UTC No. 1004701
>>1004699
you were saying the exact same thing 2 years ago. I was saying you were a moron then, and I'm still saying it now. You are living in a fantasy land, like a larper
Anonymous at Sat, 4 Jan 2025 20:23:09 UTC No. 1004703
>>1004701
>you were saying the exact same thing 2 years ago.
Right... Now how could I have possibly done that? The things I'm talking about above are developments that only went public as recently as one or two months ago.
And what part of anything I've been telling you could possibly be construed as 'LARP'?
What role am I pretending to play here, exactly? Are you certain you understand what labeling someone a larper means?
Anonymous at Sat, 4 Jan 2025 20:29:21 UTC No. 1004704
>>1004703
>Are you certain you understand what labeling someone a larper means?
>I therefore have a better perspective on what's around the corner for us,
>ASI
>2 more weeks
>just wait
Larper means live-action roleplayer, which in your case is synonymous with 'futurist', aka '2 more weeks bro'.
Anonymous at Sat, 4 Jan 2025 21:23:52 UTC No. 1004708
>>1004704
2 years ago people were talking about how AGI was potentially around the corner, given the then-recent developments.
Now here we are, 2 years in, at a stage where the question of whether we're hitting AGI has entered the fuzzy boundary where having that conversation is legitimate. If someone was here talking AGI with you back then, they stand redeemed.
I've said zilch about the next 2 months, you hyperbolic person, you. I've simply stated that people in the know are now discussing ASI being a reality come the 30's, and that this is taken very seriously.
OpenAI's former chief scientist Ilya Sutskever, who has more insight into where things are going than almost anyone, hedged his bet on it.
He branched off to form his own company, SSI (Safe Superintelligence), at the height of his success with OpenAI.
The fact that the people at the deepest level driving these developments take it that seriously should clue you in to just what degree something is afoot.
>futurist
In our everyday lives we've gone from little more than an impressive chatbot 2 years ago to being able to have a verbal discussion with a standard Windows computer in real time, and you still pretend like nothing has happened.
The future is coming at us so fast that the world is turning science fiction on us faster than people can register that it even happened.
I've seen people go wide-eyed when you start talking to MS Copilot and the PC talks back, answering some question.
Yet somehow I'm a 'larper futurist' for talking about current developments, on a board filled with supposedly tech-savvy people, all of whom already operate professional-level 3D software and ought to have a way better clue than most about what's up with tech.
Anonymous at Tue, 7 Jan 2025 00:50:43 UTC No. 1004876
>>1004850
>how do I use unreal engine 5 as someone with no knowledge and no experience
>why does everything I make in blender look like shit as someone with no knowledge and no experience?
>I have watched two gazillion hours of tutorials, why am I still bad?
>/3/ discussion
Anonymous at Tue, 7 Jan 2025 11:56:59 UTC No. 1004907
I think AI is a great supporting hand for artists but ultimately shouldn't be relied on to completely replace you. I use AI image generation to make small things such as 2D images of bolts or small decorations which I can't get clear images of from the internet, and then I project them onto my low-poly non-PBR model and it looks good in the end.
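If anyone wants to try the same trick, the projection step is basically just planar mapping. Rough numpy sketch of what Blender's 'Project from View' boils down to; the function name and axes are mine, adjust to your own setup:

import numpy as np

def planar_project_uvs(vertices, right, up):
    # project 3D vertices onto the plane spanned by `right` and `up`,
    # then normalize the result into the 0..1 UV range
    right = right / np.linalg.norm(right)
    up = up / np.linalg.norm(up)
    uv = np.stack([vertices @ right, vertices @ up], axis=1)
    uv -= uv.min(axis=0)
    uv /= uv.max(axis=0)              # assumes the mesh isn't flat along an axis
    return uv

# e.g. project a small quad seen from the front (looking down -Y)
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 1], [0, 0, 1]], dtype=float)
print(planar_project_uvs(verts, right=np.array([1.0, 0, 0]), up=np.array([0, 0, 1.0])))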