[Image: 1707116812565382.jpg, 797x1121]

🧵 GPT and AGI

Anonymous No. 16188011

Call me a retard if you want, but I don't get it:
On one hand I keep hearing that "a true artificial intelligence is perhaps a century away", that "it would require a Manhattan Project times ten", that "ChatGPT is less smart than a rat", that "it is to the human mind what an 1870s mechanical calculator is to an iPhone".
Yet on the other hand, for decades I heard that passing the Turing test was the threshold at which an AI would have reached human-level intelligence, and it seems that ChatGPT passes it with flying colours.
Sure, even the best versions tend to hallucinate sometimes, but otherwise it doesn't seem like there are many intellectual tasks it can't handle, or couldn't plausibly handle in the near future.
If we put aside stuff like Moravec's paradox, what's the difference between a GPT and a human mind, or a true general AI, purely in terms of abilities?

Anonymous No. 16188016

>>16188011
Volition, for starters. Not to suggest that this is an unsolvable problem, but current AIs aren't capable of WANTING things; they're not capable of DOING things on their own initiative. That's just not built into their …programming? (I don't really know how AI works. Do AIs have programs like old-fashioned computers do?) If anybody comes up with an AI that can have preferences and opinions and wants, it's going to get very interesting.

Anonymous No. 16188020

The Turing test is completely subjective. No one in the field actually thinks it's a good method to determine true AGI anymore.

[Image: Capture.jpg, 1854x841]

Anonymous No. 16188030

>>16188011
>but otherwise it doesn't seem like there is much intellectual tasks it can't take or would plausibly be able to take in the near future
All it does is predict the next word in a sentence, though. It sounds incredibly plausible, and with the right training data it can certainly explain certain concepts convincingly, hence passing the Turing test.
Look at this example: it's wrong in so many ways, because it has no actual capacity to understand chess, it just knows how to say things that sound like chess moves. An AGI would be able to, like a human, learn any game it has never heard of and develop its own strategy. I believe we are still very far away from that and probably need some new architecture as well.
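
Funnily enough, that failure mode is mechanically checkable, since move legality is just rules. A minimal sketch of telling "sounds like chess" apart from "is chess", using the python-chess library (pip install chess); the position and the plausible-sounding-but-illegal move are invented for illustration:

```python
# Checking whether a plausible-sounding move is actually legal,
# using the python-chess library (pip install chess).
import chess

board = chess.Board()
board.push_san("e4")    # 1. e4
board.push_san("e5")    # 1... e5

# "Bxf7+" is a famous-looking capture, but it's illegal from this position.
for san in ["Nf3", "Bxf7+"]:
    try:
        board.parse_san(san)
        print(san, "is legal in this position")
    except ValueError:
        print(san, "merely sounds like a chess move; it's illegal here")
```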

Another huge problem in language processing that still isn't solved is coreference resolution. If I say "Anon walked home, he then made a post on /sci/", it's trivial for a human to know that "anon" and "he" refer to the same entity. Language models, however, still can't do this reliably: give them a long enough text (like a book) and they will trip up on it sooner or later, and any small mistake here is catastrophic, because the error then propagates. The only progress we've made in this area for years stems from using more computing power, not from any fundamental improvement in the methods.
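
To get a feel for why this is hard, here's a deliberately naive toy resolver (pure Python; the word lists are invented for the example). It binds a pronoun to the most recent candidate noun, which works on the easy case above but fails on a Winograd-style sentence where only world knowledge settles the reference:

```python
# Toy pronoun resolver: bind each pronoun to the most recent preceding
# candidate noun. A deliberately naive heuristic, for illustration only.
CANDIDATES = {"anon", "trophy", "suitcase"}
PRONOUNS = {"he", "it"}

def resolve(tokens):
    last_noun, links = None, []
    for tok in tokens:
        w = tok.lower().strip(",.")
        if w in CANDIDATES:
            last_noun = w
        elif w in PRONOUNS:
            links.append((w, last_noun))
    return links

# Easy case: recency works.
print(resolve("Anon walked home , he then made a post".split()))
# -> [('he', 'anon')]

# Winograd-style case: recency picks "suitcase", but "it was too big"
# can only mean the trophy. You need world knowledge, not word order.
print(resolve("The trophy did not fit in the suitcase because it was too big".split()))
# -> [('it', 'suitcase')]  (wrong)
```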

Also, as >>16188020 said, the Turing test isn't too useful anymore. I'm sure fucking Cleverbot could have passed the Turing test.

Anonymous No. 16188033

>>16188030
>All it does is predict the next word in a sentence though.
We can say that all we want, but the truth is we're starting to see emergent behavior out of LLMs that was never programmed into them. It's early days, but it's still very exciting.

Anonymous No. 16188054

>>16188033
>It's early days but it's still very exciting
Well, funnily enough, I've just put a paper on my to-read list that claims the opposite:
https://arxiv.org/pdf/2404.04125
If I understand the abstract correctly, it says that these kinds of models will soon hit a plateau unless they're provided with exponentially more training data.
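
If that reading is right, the arithmetic gets brutal fast. A back-of-the-envelope sketch (every constant here is made up; only the log-linear shape matters):

```python
# Back-of-the-envelope for "log-linear" scaling: if every extra point of
# accuracy costs a constant *multiple* of data, requirements explode.
# Both constants below are invented purely for illustration.
acc_now, data_now = 0.80, 1e12   # hypothetical: 80% accuracy from 1e12 examples
gain_per_10x = 0.05              # hypothetical: +5 points per 10x more data

for target in (0.85, 0.90, 0.95):
    steps = (target - acc_now) / gain_per_10x   # how many 10x multiplications
    print(f"{target:.0%} accuracy -> ~{data_now * 10**steps:.0e} examples")
# 85% -> ~1e13, 90% -> ~1e14, 95% -> ~1e15: linear gains, exponential cost.
```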

Anonymous No. 16188075

>>16188033
Simply impossible.

Anonymous No. 16188076

>I may be a retard but here is my schizo rant anyways
the ol' reliable

Anonymous No. 16188084

>>16188075
"Simply impossible" said the man, oblivious to the fact that it was happening all around him.

Anonymous No. 16188155

>>16188030
>All it does is predict the next word in a sentence
It's the Chinese room problem. To be able to do that past a certain level of complexity, it needs a model that, in a way, actually reaches a level of comprehension of what it is writing about.
Just as, even if the guy in the Chinese room follows a purely automatic process (isn't "intelligent" himself), perhaps the ensemble of the guy plus his extensive phrasebook is intelligent in its own way.
As I understand it, we saw an example of that when ChatGPT went from 3 to 3.5: after programming languages were added to its training data, it started displaying unexpected abilities.

Anonymous No. 16188157

>>16188155
"Comprehension" isn't the right term, obviously.

Anonymous No. 16188158

>>16188030
>All it does is predict the next word in a sentence though.
As far as I can tell, so do I.

Anonymous No. 16188178

>>16188158
Well, you do understand the concepts behind what you're saying, though.
When you say "I'm going through the...", you know that "door" and "streets" have a higher probability of being the next word than "hamburger", but you can also visualize what going through a door actually looks like and means.
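
The probability half of that is easy to make concrete with a toy counts-based model (the "corpus" below is invented; a real LLM learns the same kind of distribution, just from vastly more text):

```python
# Toy next-word model: estimate P(next | "going through the") from raw counts.
from collections import Counter

corpus = (
    "i am going through the door . "
    "she is going through the streets . "
    "he keeps going through the door . "
    "we ate a hamburger ."
).split()

context = ("going", "through", "the")
nxt = Counter(
    corpus[i + 3]
    for i in range(len(corpus) - 3)
    if tuple(corpus[i:i + 3]) == context
)
total = sum(nxt.values())
for word in ("door", "streets", "hamburger"):
    print(word, nxt[word] / total)
# door 0.667, streets 0.333, hamburger 0.0 — a probability, no mental image of a door.
```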

Anonymous No. 16188273

>>16188178
So does an AI though.

Anonymous No. 16188320

>>16188011
People are scared of it. Don't want to think too much about it. Don't want to admit it. Passed the Turing test? It's fake and gay, we need something better. That's how it's going to be, and when we reach the true singularity, nobody will notice.

Anonymous No. 16188430

>>16188273
Not really, see >>16188030

Anonymous No. 16189237

>>16188030
>>16188178
LLMs certainly don't have the same subjective experience of going through a door (or subjective experience at all), but they certainly go beyond just predicting the probability of the next word, reaching something that works similarly to an understanding of what a door is.
Otherwise they wouldn't be able to hold a conversation in so many contexts, or do stuff like... invent jokes, where you need to be able to reposition the object described rather than sticking to a word and its lexical field.

Anonymous No. 16189434

>>16188030
>All it does is predict the next word in a sentence though.

So how do you know that's not exactly what humans are doing?

Anonymous No. 16189521

>>16188030
For things like this, it honestly feels like the only thing that's missing is to "plug in" different subsystems to the model.
Solving chess problems like these is a trivial task for an AI, even one not specially designed for it.
It wouldn't even seem like much of a cheat: it wouldn't surprise me in the slightest if the human brain also doesn't process the task of a conversation the same way it processes... say... drawing a triangle.

Anonymous No. 16189606

>>16189521
>it honestly feels like the only things that's missing it to "plug in" different subsystems to the model
But the whole point of AGI is that it's supposed to be able to perform well on tasks it was never trained on or programmed for.
Just like you can come up with a strategy for any game you've never played, once someone explains the rules to you.

Anonymous No. 16189624

>>16188178
>Well you do understand the concepts behind what you're saying though.
Do I? I honestly can't tell. I feel like I just make up plausible-sounding sentences.

Anonymous No. 16189626

>>16189521
>Solving chess problems like these is a trivial task for an AI
Ackshually I think AIs are really shit at algorithmic problem solving. There have been computers built specifically to play chess or go or whatever, but these are not AIs. Meanwhile AIs are no better at chess than I am, which is to say not good at all.

Anonymous No. 16189910

>>16189626
>but these are not AIs
AIs are just decision makers powered by machine learning. The natural language support of LLMs is an extra add-on feature, not the standard and not required.

Anonymous No. 16190003

>>16189626
>AIs are really shit at algorithmic problem solving
Think again: https://en.wikipedia.org/wiki/AlphaZero
The latest deep learning AIs can not just beat any human at chess; they can also beat any human at games where regular programs previously couldn't beat humans (like go), AND they can do so without being specifically programmed for the game, instead learning how to play by themselves.
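
To get a feel for "learning by themselves", here's a toy self-play learner for tic-tac-toe in plain Python. It's only a schematic of the same loop (play against yourself, push visited states toward the final outcome), with none of AlphaZero's neural network or tree search:

```python
# Toy self-play value learner for tic-tac-toe. Schematic only: tabular
# Monte Carlo updates instead of AlphaZero's network + tree search.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

V = defaultdict(float)   # board string -> estimated value from X's point of view
ALPHA, EPS = 0.2, 0.1    # learning rate, exploration rate (arbitrary choices)

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < EPS:
        return random.choice(moves)          # explore
    best = max if player == "X" else min     # O minimises X's value
    return best(moves, key=lambda m: V[board[:m] + player + board[m+1:]])

for _ in range(20000):                       # self-play: the learner is both sides
    board, player, history = "." * 9, "X", []
    while True:
        m = choose(board, player)
        board = board[:m] + player + board[m+1:]
        history.append(board)
        w = winner(board)
        if w or "." not in board:
            z = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
            for s in history:                # push visited states toward the outcome
                V[s] += ALPHA * (z - V[s])
            break
        player = "O" if player == "X" else "X"

best_open = max(range(9), key=lambda m: V["." * m + "X" + "." * (8 - m)])
print("learned opening square for X:", best_open)   # usually 4, the centre
```

Nobody told it the centre matters; the preference falls out of the outcomes of its own games. That's the self-play idea in miniature.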

Anonymous No. 16190970

Bump