[Image not available: 1716013591940744.jpg, 250x201]

🧵 I've got a bunch of questions about AI

Anonymous No. 16181742

How exactly does an LLM work?
>it just predicts words
If I ask what color the sky is and the AI answers "blue", is the answer generated purely from human experience and observation?

When confronted with the question above, ChatGPT said it's programmed to find specific patterns according to its training data. It doesn't agree with that framing, but judging from the answer, it seems like this is the case.
It can't observe, it can't judge, it can't experience things first-hand, it can't identify objects or actions and turn them into abstract ideas to work with. It can mix things together to a certain degree, but not like a human would.

Now I'd like to ask /sci/ what it thinks about this, because if this is the case, I feel disappointed. Sure, it's impressive how such mechanisms make me feel like I'm talking to a person, but these systems only rely on a fuckton of human knowledge and can't experience or generate new data by themselves. It's like watching a guy cheat at an exam by reading stuff he wrote on his arm.
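
To make the "it just predicts words" part concrete, here's a minimal sketch using the small open GPT-2 model through the Hugging Face transformers library (assuming transformers and torch are installed; GPT-2 is far smaller than ChatGPT, so treat it as an illustration, not the real thing). All it does is print the highest-probability next tokens after "The sky is":

[code]
# Minimal next-token prediction demo with GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The sky is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, vocab_size)

# The logits at the final position score every possible next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")
[/code]

That's the whole trick, applied over and over: score every token, pick from the likely ones, append, repeat.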

Anonymous No. 16181754

>>16181742
That's the same way humans know the sky is blue. Your mom pointed at the sky and said it's blue, and you went ugu gaga haha until you also knew it was blue.

Anonymous No. 16181778

>>16181754
It's not the same thing. Animals like humans have billions of years of training to recognize objects that may be crucial to their survival, and that means having some sort of internal simulation of space: we can collect visual data and conceptualize it to solve a problem.

On the other hand, as ChatGPT said itself, an LLM is trained on a fuckton of text but doesn't actually know whether the moon is round or white; it finds a pattern of the moon being "white" and "round", encapsulating these words together, ready to deliver as an answer.
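
To show what "finding a pattern" means here, a toy sketch over a made-up four-sentence corpus (nothing like the real training pipeline, just the core idea): count which adjectives show up near "moon", and the counter "knows" the moon is round and white without ever having seen it:

[code]
# Toy co-occurrence counter: associates "moon" with "round" and "white"
# purely from which words appear in the same sentences.
from collections import Counter

corpus = [
    "the moon is round and white",
    "a white moon rose over the hills",
    "the round moon lit the sky",
    "the sky is blue",
]

adjectives = {"round", "white", "blue"}
near_moon = Counter()
for sentence in corpus:
    words = sentence.split()
    if "moon" in words:
        near_moon.update(w for w in words if w in adjectives)

print(near_moon.most_common())  # [('round', 2), ('white', 2)]
[/code]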

Anonymous No. 16181807

>>16181778
You can give it a novel problem, such as painting the moon blue or the sky purple, and it can correctly answer that too, so you're blown out on that bit as well.

Anonymous No. 16181843

>>16181807
Which is basically the same thing, but with images instead of text.

[Image not available: m9p1Uwi.jpg, 391x574]

Anonymous No. 16181879

>>16181843
>>16181807

I, a human with a brain, can be told 'Draw a purple moon', and the thoughts that lead to that are: 'I know what the moon is, I know what color it is, I know what the color purple is, so I can make the moon I draw purple'.

An LLM-based chatbot like GPT, or any of the image generators, is more like:
>scan through 10,873 images of moons to extrapolate a new moon image using shared assets
>are any moon images purple
>if yes; apply as strongest generative source in output image
>if no; adjust hue of final image to be purple

The 'if no' part is where AI starts to trip up and hallucinate, because if it doesn't have anything directly in the training data to draw from, it can't extrapolate and create something from scratch (the toy sketch below pins down the logic I mean). If the AI is told to draw a moon and there isn't a single moon image in the data it was trained on, it cannot draw a moon.
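
To be clear, that greentext describes a retrieval-and-tweak pipeline, which is not how diffusion-based generators actually work; but just to pin down the claim, here it is as a runnable toy (the asset list and colors are made up):

[code]
# Toy version of the imagined pipeline above (NOT how real image
# generators work): look for a purple moon asset; if none exists,
# shift the hue of a plain moon asset toward purple.
import colorsys

# Fake "training data": (tags, average RGB color) per asset.
assets = [
    ({"moon", "white"}, (0.90, 0.90, 0.85)),
    ({"moon", "full"},  (0.95, 0.95, 0.90)),
    ({"sky", "blue"},   (0.30, 0.50, 0.90)),
]

def generate(subject, color_tag, target_hue):
    matches = [a for a in assets if subject in a[0]]
    if not matches:  # no moon in the training data: cannot draw a moon
        raise ValueError(f"no {subject} in training data")
    exact = [a for a in matches if color_tag in a[0]]
    if exact:                        # "if yes": use the matching asset
        return exact[0][1]
    r, g, b = matches[0][1]          # "if no": adjust the hue instead
    _, lightness, saturation = colorsys.rgb_to_hls(r, g, b)
    return colorsys.hls_to_rgb(target_hue, lightness, saturation)

print(generate("moon", "purple", target_hue=0.75))  # 0.75 ~ purple
[/code]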

Extending that notion to how human brains work is a severe misunderstanding of how knowledge storage and retrieval in a brain even works.
Like this image, right? Generated in Bing ages ago. Made a dozen or so of these. Not a single one had Master Chief in the prompt. I was actually trying to get the Arbiter riding a horse, but because the AI connects any data regarding Halo to the most popular Halo iconography (Master Chief), all you get when you try to make anything Halo-related with a general-purpose generator is Master Chief.

[Image not available: OIG_75.jpg, 1024x1024]

Anonymous No. 16181880

>>16181879
Wow, that's the absolute wrong image, so here's the correct image.
Did a whole misclick.

Anonymous No. 16181891

>>16181879
AI is just fine at making up entirely new things that aren't in their training material. The fact that you can't use one is also a skill issue on your part. I count that as you being blown out twice.

[Image not available: OIG-4.jpg, 1024x1024]

Anonymous No. 16181918

>>16181891
>entirely new things that aren't in their training material.
Damn, that's literally not how they work. If they could do that, they wouldn't need training data, you dipshit. Here's John Cena as a clown to remind you of what you are.

bodhi No. 16181930

>>16181742
Great job summing up what I have been explaining to the midwits here for years. An electronic abacus can only simulate intelligence; it can never actually possess intelligence because it has zero first-hand knowledge. It is just outputting calculations that YOU assign meaning to.

Anonymous No. 16182023

It doesn't have first-hand experience exactly as we do, yes, but it doesn't really matter. During training it "learns" most of these abstract ideas as weights. So given a new question, it will do inference from what it has learnt. Many of these inferences will be right, some wrong.
>It's like watching a guy cheat at an exam by reading stuff he wrote on his arm.
The same thing can be said about humans. You're retrieving things from memories and experiences stored in your brain. An LLM does a similar thing. Or do you really want physical neurons to be present in an LLM to count it as intelligent?
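
As a sketch of "abstract ideas stored as weights" (a deliberately tiny toy, nothing like a real transformer): train a softmax regression to predict the next word from a bag-of-words context; afterwards everything it "knows" lives in the weight matrix W, and it can answer a context it never saw verbatim:

[code]
# Toy "knowledge as weights": after training, the corpus is gone and
# only the weight matrix W remains, yet it still answers correctly.
import numpy as np

vocab = ["the", "sky", "grass", "is", "blue", "green"]
idx = {w: i for i, w in enumerate(vocab)}

def bow(words):  # bag-of-words context vector
    v = np.zeros(len(vocab))
    for w in words:
        v[idx[w]] = 1.0
    return v

pairs = [(["the", "sky", "is"], "blue"),    # (context, next word)
         (["the", "grass", "is"], "green")]
X = np.array([bow(c) for c, _ in pairs])
y = np.array([idx[t] for _, t in pairs])

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (len(vocab), len(vocab)))
for _ in range(500):                          # plain gradient descent
    logits = X @ W
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0            # gradient of cross-entropy
    W -= 0.5 * (X.T @ p)

query = bow(["sky", "is"])                    # never seen without "the"
print(vocab[int(np.argmax(query @ W))])       # -> "blue"
[/code]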

Anonymous No. 16182429

>>16182023
It does matter; otherwise the sky would be purple and not blue for the LLM.
It can't evolve, and therefore can't solve problems alone unless a smart being gives it an update telling it how to do so.

It feels like watching a piece of the human brain working while the rest is missing.
It can't collect new data, can't process and conceptualize it, can't simulate circumstances and judge the effectiveness of a given action.

You can't just tell me "the sky is blue" and have me report it back, because then you're just repeating the same thing at me. What the fuck is the sky, and what is blue? How do the sky and blue merge together?

Anonymous No. 16182493

>>16182429
>It can't evolve
You're comparing living beings to a piece of code. It can evolve, just not in our way: for example, it can scrape the web and get new data to train/update itself. This process could be automated, or a human has to do it, like we teach babies.
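
A hypothetical sketch of what that automation could look like (the URL list and corpus path are made up for illustration; real pipelines filter this heavily):

[code]
# Hypothetical "keep learning" loop: fetch pages, strip the HTML,
# append the text to a corpus that a later fine-tuning run consumes.
import re
import urllib.request

urls = ["https://example.com"]  # stand-in for a real crawl list

with open("corpus.txt", "a", encoding="utf-8") as corpus:
    for url in urls:
        raw = urllib.request.urlopen(url, timeout=10).read()
        html = raw.decode("utf-8", errors="ignore")
        text = re.sub(r"<[^>]+>", " ", html)   # crude tag stripping
        text = re.sub(r"\s+", " ", text).strip()
        corpus.write(text + "\n")
# A separate job would periodically fine-tune the model on corpus.txt.
[/code]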

Anonymous No. 16182651

>>16182493
Maybe you can't grasp the problem.
First, the model is not connected to the internet, and the internet data it gets fed is controlled and filtered for every update. You could automate it like you said, if you want your dataset contaminated by gay furry fanfiction and shitposting fucking up the model.
Second, all of that data was created by humans in the first place: already produced and processed by human beings.

"The sky is blue" is not generated because the sky actually is or isn't blue; the model doesn't care. The sky is blue because there are a lot of texts where these words are chained together. That's why the sky is blue. All of the evaluation establishing that the sky actually is blue was already done by actual smart beings who can experience the world around them; in comparison, an LLM is just a parrot.

Anonymous No. 16182760

>>16182651
>First, the model is not connected to the internet,
ChatGPT can now access real-time data. Look it up.
>if you want your dataset contaminated by gay furry fanfiction and shitposting fucking up the model.
Same thing with humans: humans also pick up bad behaviour from learning bad stuff.

[Image not available: screen01.png, 1080x887]

Anonymous No. 16182927

>>16182760
>"yes he can actually do that"
>"humans are also the same"
I love how you avoid going into detail about what kind of real-time data the model can access and what the difference is between human bad behavior and LLM bad behavior.

[Image not available: pol_incel.jpg, 800x450]

Anonymous No. 16182936

>>16181742
>another anti-science poltard who doesn't understand how AI actually works

Modern AI might not be quite as smart as our smartest human just yet, but it's definitely a lot smarter than losers like (You).

[Image not available: stare.jpg, 125x118]

Anonymous No. 16182944

>>16181807
> novel problem
> painting the moon blue

Anonymous No. 16182951

>>16182936
Nice bait, but sadly for you I look like this and say this.

Anonymous No. 16182973

>>16181742
Man, just read the papers. The math and statistics aren't that hard.

[Image not available: 1716145199253.jpg, 1778x997]

Anonymous No. 16183136

>>16182927
OK retardo, what kind of real-time data the model can access is set by the ChatGPT guys who run the model. You can write up your own LLM that does this without restrictions.
>Human bad behaviour and LLM bad behaviour
They're very similar. Did you forget Tay? Tay learnt from real-time tweets and started behaving like that. If you had a human child listening to those tweets, the child would respond similarly. Or do you want an LLM to rise from your PC and start murdering people in the streets?

Anonymous No. 16183200

>>16182429
>It can't evolve, and therefore can't solve problems alone unless a smart being gives it an update telling it how to do so.
Not necessarily. The thing you're missing is the word embeddings, the semantic connections, and all the hidden-layer spaces. It has a bunch of words that have somehow been learned to be connected in a high-dimensional space. How? I don't know; I'm not smart enough to know. But given how these words are laid out by similarities and differences, and how the network moves through that space given what it has learned, it's not too far-fetched that an AI could come up with some novel solutions just based on which words sit close together or far apart and how it moves around that space given the input and transformations. I mean, they're tokens, not words, but same idea.

It's hard to give an example, but maybe "worm" and "nuclear reactor" end up positioned in a weird way together somehow. Perhaps with the right input, the model might output a bunch of words describing a novel way that worms and nuclear reactors are related.
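
To make "close together/far apart" concrete, a tiny sketch with hand-made 4-dimensional vectors (real embeddings have hundreds of learned dimensions; these numbers are invented for illustration):

[code]
# Toy embedding space: cosine similarity as "how related are two words".
import numpy as np

emb = {
    "worm":    np.array([0.9, 0.1, 0.0, 0.2]),
    "soil":    np.array([0.8, 0.2, 0.1, 0.1]),
    "reactor": np.array([0.0, 0.9, 0.8, 0.1]),
    "uranium": np.array([0.1, 0.8, 0.9, 0.0]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(emb["worm"], emb["soil"]))        # high: neighbors in the space
print(cos(emb["worm"], emb["reactor"]))     # low: far apart
print(cos(emb["reactor"], emb["uranium"]))  # high
[/code]

In a trained model those positions fall out of the data, and the way the network moves through that space is what could let it chain "worm" to "reactor" in some unexpected but coherent way.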