[image: 1710349758079228.png]

🧵 destroys AI

Anonymous No. 16072536

no hard feelings

Anonymous No. 16072556

>>16072536
Yes, it's a lookup table, a very high-dimensional one that can exhibit out-of-sample generalization, which is all we need from it anyway.
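rough sketch of the difference, nothing rigorous, numpy assumed: a literal table has nothing to say off its keys, while even a one-parameter fitted model will answer at points it never saw.

import numpy as np

# training data: y = 2x observed at a handful of points
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs

# a literal lookup table is only defined at the sampled keys
table = dict(zip(xs.tolist(), ys.tolist()))
print(table.get(2.5))                              # None: no entry, no answer

# a least-squares fit has one parameter but answers off the keys too
w = np.linalg.lstsq(xs[:, None], ys, rcond=None)[0][0]
print(w * 2.5)                                     # ~5.0 at a point never seen in training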

Anonymous No. 16072589

>>16072536
well yeah, duh. And it's trained with a regression. That's all it is. That's why anyone who does anything serious with AI knows not to think of that thing as ever reasoning. It's a model operating on the data it's been trained on. Nothing more. It can come real close to passing for a human, but invariably it becomes apparent that it's actually more retarded than a retard. And that's fine. It has its use cases. It also has its limitations. Humans are basically the same lol.

Anonymous No. 16072894

>>16072589
Pretty much. People use ChatGPT failing at certain riddles as proof it's not intelligent, yet how many humans would pass? The goalposts keep getting moved on what machines can do, so anyone paying attention can see this train ain't stopping anytime soon.

Anonymous No. 16072911

>>16072536
Your brain is also a lookup table.

[image: chadabs.jpg]

Anonymous No. 16072936

>>16072589
> reasoning
LMAO, just give a precise definition of that word and AI will start reasoning in no time.

Anonymous No. 16072939

>>16072536
Yeah obviously. Most people are too dumb to exceed that level as well, in fairness to my OpenAI product (which I am very satisfied with).

It is kind of hilarious that an LLM or visual model can be trained on every text ever written and every picture ever saved by humanity and still not understand what a letter is, or what perspective or anatomy are. Kind of depressing too, like working with a special ed kid who's been in school for 10,000 years.

Anonymous No. 16072948

>>16072536
>can someone ask a neural network if my reasoning is valid and correct?
Kek

Anonymous No. 16072982

>>16072536
>any function is a lookup table
duh
>given that a lookup table does not perform any reasoning
and the source is that he made it the fuck up

what is this 39 IQ thread

Anonymous No. 16073165

bump

Anonymous No. 16073171

Computers work because of science.
Human brain works because of magic.
Therefore, computer programs can never be as intelligent as humans.

Anonymous No. 16073173

Why are NP-complete problems hard?
They're just lookup tables

Anonymous No. 16073310

>>16072589
Humans are not the same, because we're created by God in His image and are therefore conscious and capable of subjective experience. You project your own shallowness onto others; don't do that.

Anonymous No. 16073321

>>16072536
"AI" is just a misnomer to scam investors, all of this is semantic disagreement.
these things are useful in their place, like any tool.

Anonymous No. 16073345

>>16072982
No, it's not obvious, Mr. Sarcasm Faggot, because people argue human brains work like LLMs, that there is no meaningful difference, etc. Yet humans are conscious and AI is not and never will be.

Anonymous No. 16073352

>>16072894
>>16072911
lol lmao even

Anonymous No. 16073354

>>16072982
>and the source is that he made it the fuck up
how does a lookup table reason anon? we're all waiting for your answer
> it just does ok!
lol lmao. cry some more

Anonymous No. 16073373

>>16072536
Call it Machine Learning, please...

Anonymous No. 16073488

>>16073310
>because we're created by God in His image
God is a lookup table.

Anonymous No. 16073493

>>16073310
Christcucks literally believe this shit, LMAO.

Anonymous No. 16073497

>>16073171
Yeah, that misrepresentation was funny and upvote-worthy three years ago, before we built AI and it proved the existence of the soul empirically.

Anonymous No. 16073568

>>16072536
>completely handwaves away randomness
>this kills nondeterminism, free will, quantum woo and human reasoning
Nothing personal kid.

[image: F05HxqsWwAAHRCI.jpg]

Anonymous No. 16073580

if only you knew how bad things are..

Anonymous No. 16073592

>>16073580
>compresses and re-encodes inputs in UTF-8 THREE times, only to calculate some "distance"
wtf?

Anonymous No. 16073606

>>16072536
>every neural network is equivalent to a lookup table
This dude wrote a whole essay on the inner workings of AI and has never heard of an adjacency matrix.

Anonymous No. 16073639

>>16073606
What's an adjacency matrix and how is it AI?

[image: shurgy.jpg]

Anonymous No. 16073646

>>16073354
>how does a lookup table reason anon?
how does your brain reason
what is reasoning?

Anonymous No. 16073647

>>16073354
>>16072536
nobody is going to play ball with you unless you define what you mean by "to reason"

Anonymous No. 16073683

>>16072536
And who says humans can do "actual thinking"?
What if we are just biological stimulus-response machines, all our thinking and behaviour determined by the chemicals and electrical potentials zipping around in our neural network of synapses?

[image: smug.png]

Anonymous No. 16073723

Transformers are Turing Complete.
https://www.jmlr.org/papers/volume22/20-302/20-302.pdf

A Turing machine can perform any computation.

Including human consciousness.

Since humans can reason, it follows that a Transformer network can reason too - in principle.

But can human consciousness be computed? Of course! At minimum, a computer could simply simulate the atoms of a human brain and compute the chemical reactions of the neurons firing.

So in summary, the person in OP's screenshot is a stupid retard.

Anonymous No. 16073744

>>16073488
>God is up in the sky
>Sits on a throne
>If he has a throne, it stands to reason that he has a table too
Makes sense to me

Anonymous No. 16073762

>>16072536
>neural network can be represented by a function
No shit.
Everything can be.

Anonymous No. 16073777

>>16073580
Training a neural network is an optimization problem.
Compression and optimization are equivalent.
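to spell out the first line for the thread (toy sketch, numpy only, not anyone's actual training code): "training" here just means running gradient descent on a loss until the parameters stop improving.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)        # data from a noisy linear rule

w = np.zeros(3)                                    # parameters to optimize
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)          # gradient of the mean squared error
    w -= lr * grad                                 # take a descent step

print(w)                                           # close to true_w: training = minimizing a loss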

Anonymous No. 16074064

>>16073321
What do you think the "A" stands for?

Anonymous No. 16074066

>>16073723
>At minimum, a computer could simply simulate the atoms of a human brain and compute the chemical reactions of the neurons firing.
It's not evident that a simulation of that detail is possible, or that it would generate a form of consciousness, a virtual awareness that could experience qualia.

[image: file.png]

Anonymous No. 16074446

>>16072536
>equivalent to a lookup table
You could say that about any finite discrete function.
Not every abstraction is useful. Anon from picrel abstracted away the most meaningful parts of the thing he was attempting to describe. He literally fell into the classic "human is a featherless biped" fallacy.

Anonymous No. 16074514

>>16072536
>>16072911
Sure, but also causality is just a lookup table for interactions between elementary particles.

Anonymous No. 16076515

>>16073683
we aren't any different in principle, biological machines just (currently) receive orders of magnitude more data through a litany of physical senses. language is hilariously less information dense than sight, smell, touch, sound, etc.

we also have a foundational instinct layer that evolved with the collective sensory data of hundreds of millions of years of past living experience. when you realize how big that data set actually is in raw bytes, it's astronomical. AI surpassing humans is an inevitability because it's only a matter of time until we can scale the data set big enough to rival our own

Anonymous No. 16077295

>>16074446
Every neural network is finite and discrete. He's not abstracting anything

Anonymous No. 16077297

>>16072911
Consciousness is not a computation

Anonymous No. 16077300

>>16072536
His conclusions are immediately obvious and irrelevant. There are actual retards in the room with us now who believe a sufficiently complicated stack of punch cards = intelligence.

Anonymous No. 16077798

>>16077300
yes, this is correct. good job for making the same point as OP

Anonymous No. 16077828

This conversation stems from the flawed idea that we understand how human minds work. It is most likely this flaw which causes us to draw comparisons between the mind and 'AI'.

Anonymous No. 16078012

>ITT: Midwits drowning in bathwater

Anonymous No. 16078294

>>16078012
>refuses to refute the argument
>leaves the thread

Anonymous No. 16078308

post on /g/

Anonymous No. 16078312

>>16072982
>any function is a lookup table
not correct - any finite LUT is a sample of a hypothetical function. the entire LUT can be represented by that function, not because the LUT is equivalent to the function, but because the function can generate any sampled portion of the LUT

thinking of it another way: you can't recover the function from the LUT alone, but the function can still generate the LUT even if you never took a single sample from it. if they were equivalent you'd be able to recover exactly one function from the LUT alone (you can't do that with finite LUTs because there are infinitely many solutions that just happen to intersect each other at the values in the LUT)
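concrete toy version of that point (numpy assumed): two different functions that agree on every entry of a finite table and disagree everywhere else.

import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])                # the LUT's keys
f = lambda x: x**2                                 # one candidate generating function
# add anything that vanishes on the keys to get another candidate
bump = lambda x: np.prod([x - k for k in xs], axis=0)
g = lambda x: f(x) + 7.0 * bump(x)

print(f(xs), g(xs))                                # identical on the table
print(f(1.5), g(1.5))                              # 2.25 vs 6.1875 off the table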

there's also the problem of systems for which LUTs can exist but generating functions can't (e.g. random number sequences, especially finite ones) - there are infinitely many candidate intersecting functions that will happily "predict" a value outside of the LUT that doesn't exist, because there was no actual function generating the LUT in the first place

the Universal Approximation Theorem is about approximating the LUT, not emulating the function itself - that's why it's not the "Universal Emulation Theorem"

>>16072556
now if only people would realize 'outside' here means the 'new information' that regression has always provided to a dataset - it's not 'understanding' the function (see: infinitely many regression solutions)

>>16073639
a LUT for node connections
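minimal sketch of what that means, assuming numpy: A[i, j] answers "is there an edge from i to j?", which is exactly a table lookup, and matrix products of it count paths.

import numpy as np

# 3-node directed graph: 0 -> 1, 1 -> 2
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

print(A[0, 1])                                     # 1: "edge from 0 to 1?" is a table lookup
print((A @ A)[0, 2])                               # 1: length-2 paths fall out of matrix products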

>>16072939
once artificial abstraction/mental models are cracked, the only barrier to AGI will be continuous learning. you need more than regression to emulate logic that can interpret the processes underlying the regression, and to get modal relations between mental models that aren't just "adjacency probability in the training output" - LLMs and FFNNs in general won't cut it, but i still think it's entirely possible to do.

well, unless self-direction is WAY harder than i think it will be

Anonymous No. 16078325

>>16078312
gradient descent has something known as catastrophic blowup. you should look into it. continuous learning is not possible with current tools

Anonymous No. 16078509

>>16078325
You mean catastrophic forgetting

Anonymous No. 16078513

https://arxiv.org/abs/2106.05181

Anonymous No. 16078602

>>16078509
there is that too, but gradients can also blow up, and there is the zero (vanishing) gradient problem which makes learning impossible. basically continuous learning is a non-starter with the current tools and techniques
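crude illustration of both failure modes (toy numbers, not real backprop code): the gradient through a deep chain is a product of per-layer factors, so it either dies or explodes depending on the weight scale.

def chained_gradient(scale, depth=50):
    # gradient through a 50-layer linear chain is just the product
    # of the per-layer weight factors
    g = 1.0
    for _ in range(depth):
        g *= scale
    return g

print(chained_gradient(0.5))                       # ~8.9e-16: vanishing, no learning signal
print(chained_gradient(1.5))                       # ~6.4e+08: exploding, training diverges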

Anonymous No. 16078618

>>16078602
Zero gradient problem was largely solved by skip connections, popularized by ResNet
This is 2015 research
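for anyone following along, the whole trick is one addition - a rough sketch of a residual block (numpy, not the actual ResNet code):

import numpy as np

def layer(x, W):
    return np.maximum(0.0, W @ x)                  # ReLU layer, can go completely dead

def residual_block(x, W):
    return x + layer(x, W)                         # skip connection: the identity path
                                                   # survives even if the layer outputs zeros

x = np.array([1.0, -2.0, 3.0])
W = np.zeros((3, 3))                               # worst case: a dead layer
print(residual_block(x, W))                        # still returns x, so gradients still flow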

Anonymous No. 16078624

>>16078618
it's not solved. you can still get into zero gradient zones even with skip connections. skip connections mitigate the problem but there is no guarantee you won't zero out the weights and make the skip connections useless

Anonymous No. 16078649

>>16078624
>zero out the weights
That's what normalization layers are for
Again, largely solved
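something like layer norm, i mean - rough sketch, not any particular framework's implementation:

import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # standardize the activations, then apply a learnable scale and shift
    mu, var = x.mean(), x.var()
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

print(layer_norm(np.array([1.0, 2.0, 3.0])))
print(layer_norm(np.array([100.0, 200.0, 300.0])))
# both come out as roughly [-1.22, 0, 1.22]: the scale of the incoming
# weights stops mattering between layers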

Anonymous No. 16078650

>>16078649
good luck with your continuous learning plan then. i'm sure it will work out

Anonymous No. 16078654

>>16078650
You're the only one who's gung-ho on continuous learning
I only said "zero gradient problem" and "zero out the weights" are largely solved problems

Anonymous No. 16078781

>>16072536
Every function is equivalent to a lookup table. Only a midwit would think that's profound.
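and the "equivalence" really is that trivial - enumerate any function over a finite domain and you have the table:

def f(x):
    return (3 * x + 1) % 7                         # any function over a finite domain

lut = {x: f(x) for x in range(7)}                  # tabulate it

assert all(lut[x] == f(x) for x in range(7))       # the table and the function agree everywhere
print(lut)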

Anonymous No. 16079818

>>16078781
so why is OpenAI worth $100B?

Anonymous No. 16079836

>>16079818
What a retarded non-sequitur. How does the value of OpenAI have anything to do with the mathematical statement that any deterministic function can be represented as a lookup table?

Anonymous No. 16080492

>>16079836
even non-deterministic functions can be represented as lookup tables, but the question still stands: why is a lookup table worth $100B?

Anonymous No. 16080497

>>16080492
>why is a lookup table worth $100B?
mostly a bet on what else they can achieve.

Anonymous No. 16081283

>>16073723
>Including human consciousness.
Except consciousness is NOT computation.

Anonymous No. 16081290

>>16073777
Unfortunately, you are wrong.

Compression and optimization are not equivalent, and you'd know that if you knew the basics of information theory.

Lossy compression of a continuous source does use a mean-squared-error distortion criterion, but if the source is already discrete/digital it's a different problem entirely, one framed by rate-distortion theory in terms of mutual information. There's an optimization step in there, but compression isn't optimization itself, and that optimization is only one small part of the compression process.
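for reference, the standard textbook statement for the discrete lossy case is the rate-distortion function, a constrained minimization of mutual information (nothing specific to this thread):

R(D) = \min_{p(\hat{x} \mid x)\,:\, \mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})

so yes, there's an optimization inside it, but the compression problem is the whole rate-distortion setup, not the minimization step on its own.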

Anonymous No. 16081906

>>16079818
>so why is OpenAI worth $100B?
That's air money, it doesn't really exist.

Anonymous No. 16081961

>>16080492
Because that lookup table is really useful, and it took a lot of expensive engineers and a lot of compute to create it, and investors think OpenAI is going to continue creating the world's best lookup tables and make a lot of money selling access to them.

Anonymous No. 16081992

>>16081283
unless it is.

Anonymous No. 16081994

>>16072536
Every neural network is built off discrete representations of continuous functions.

LUTs are literally just discrete representations of continuous functions.

I fucking hate computer scientists so God damn much. Neural networks are no different from ANY other classification or regression method: we are searching for optimal CONTINUOUS or DISCRETE functions (depending on the inputs) whose outputs come closest to solving our many-equations-few-unknowns problem.

There's nothing special about this, but they're not just look-up tables either.

Anyway, neural networks are memes outside of classification problems, and even then only for certain flavors of classification like image, language, etc.

The hot shit for regression has been ensemble models for a good minute now. But all the problems are literally, at the end of the day:

Y = f(X; p) + e

We just switch around the architecture of f and search for optimal p, and then characterize e.
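toy version of that last line (numpy assumed, "architecture" here just meaning which family f is drawn from):

import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=200)
Y = np.sin(3 * X) + 0.05 * rng.normal(size=200)    # unknown f plus noise e

# architecture 1: f(X; p) = p1*X + p0                      (a line)
# architecture 2: f(X; p) = p3*X^3 + p2*X^2 + p1*X + p0    (a cubic)
for degree in (1, 3):
    p = np.polyfit(X, Y, degree)                   # search for the optimal p
    e = Y - np.polyval(p, X)                       # characterize the residual e
    print(degree, np.mean(e**2))                   # the cubic leaves much less behind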