Anonymous at Wed, 13 Mar 2024 18:57:38 UTC No. 16072556
>>16072536
Yes it's a lookup table, a very high-dimensional one, and it can exhibit out-of-sample generalization, which is all we need from it anyway.
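A minimal sketch of the difference (python/numpy, toy values) - a bare table has nothing to say off its keys, while a function fitted to the same entries does:
```python
import numpy as np

# a literal lookup table: defined only at its keys
lut = {0.0: 0.0, 1.0: 1.0, 2.0: 4.0, 3.0: 9.0}
# lut[2.5] would raise KeyError: a bare table has no out-of-sample answer

# fit a function to the same points (least-squares quadratic)
x = np.array(sorted(lut))
y = np.array([lut[k] for k in x])
coeffs = np.polyfit(x, y, deg=2)

# out-of-sample query: the fitted function generalizes where the table can't
print(np.polyval(coeffs, 2.5))  # ~6.25
```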
Anonymous at Wed, 13 Mar 2024 19:11:26 UTC No. 16072589
>>16072536
well yeah duh. And it's trained with a regression. That's all it is. That's why anyone who does anything serious with AI knows not to think of that thing as ever reasoning. It's a model operating from the data it's been trained on. Nothing more. It can come real close to passing for a human, but invariably it'll become apparent that it's actually more retarded than a retard. And that's fine. It has its use cases. It also has its limitations. Humans are basically the same lol.
Anonymous at Wed, 13 Mar 2024 21:23:42 UTC No. 16072894
>>16072589
Pretty much. People use ChatGPT failing at certain riddles as proof it's not intelligent, yet how many humans would pass? The goalposts keep getting moved on what machines can do, so anyone paying attention can see this train ain't stopping anytime soon.
Anonymous at Wed, 13 Mar 2024 21:34:15 UTC No. 16072911
>>16072536
Your brain is also a lookup table.
Anonymous at Wed, 13 Mar 2024 21:51:23 UTC No. 16072936
>>16072589
> reasoning
LMAO, just give a precise definition of that word and AI will start reasoning in no time.
Anonymous at Wed, 13 Mar 2024 21:55:43 UTC No. 16072939
>>16072536
Yeah obviously. Most people are too dumb to exceed that level as well, in fairness to my OpenAI product (which I am very satisfied with).
It is kind of hilarious that a LLM or visual model can be trained on every text ever written and every picture ever saved by humanity and still not understand what a letter is or what perspective or anatomy are. Kind of depressing too, like working with a special ed kid that's been in school for 10,000 years.
Anonymous at Wed, 13 Mar 2024 22:01:51 UTC No. 16072948
>>16072536
>can someone ask a neural network if my reasoning is valid and correct?
Kek
Anonymous at Wed, 13 Mar 2024 22:30:49 UTC No. 16072982
>>16072536
>any function is a lookup table
duh
>given that a lookup table does not perform any reasoning
and the source is that he made it the fuck up
what is this 39 IQ thread
Anonymous at Thu, 14 Mar 2024 00:10:28 UTC No. 16073165
bump
Anonymous at Thu, 14 Mar 2024 00:21:50 UTC No. 16073171
Computers work because of science.
Human brain works because of magic.
Therefore, computer programs can never be as intelligent as humans.
Anonymous at Thu, 14 Mar 2024 00:24:15 UTC No. 16073173
Why are NP complete problems hard?
They're just lookup tables
Anonymous at Thu, 14 Mar 2024 02:16:08 UTC No. 16073310
>>16072589
Humans are not the same, because we're created by God in His image and are therefore conscious and capable of subjective experience. You project your own shallowness onto others, don't do that.
Anonymous at Thu, 14 Mar 2024 02:25:21 UTC No. 16073321
>>16072536
"AI" is just a misnomer to scam investors, all of this is semantic disagreement.
these things are useful in their place, like any tool.
Anonymous at Thu, 14 Mar 2024 02:42:32 UTC No. 16073345
>>16072982
No it's not obvious mr. Sarcasm Faggot because people argue human brains work like LLMs, that there is no meaningful difference, etc. Yet humans are conscious and AI is not and never will be.
Anonymous at Thu, 14 Mar 2024 02:49:22 UTC No. 16073352
>>16072894
>>16072911
lol lmao even
Anonymous at Thu, 14 Mar 2024 02:51:01 UTC No. 16073354
>>16072982
>and the source is that he made it the fuck up
how does a lookup table reason anon? we're all waiting for your answer
> it just does ok!
lol lmao. cry some more
Anonymous at Thu, 14 Mar 2024 03:08:02 UTC No. 16073373
>>16072536
Call it Machine Learning, please...
Anonymous at Thu, 14 Mar 2024 04:48:11 UTC No. 16073488
>>16073310
>because we're created by God in His image
God is a lookup table.
Anonymous at Thu, 14 Mar 2024 04:52:19 UTC No. 16073493
>>16073310
Christcucks literally believe this shit, LMAO.
Anonymous at Thu, 14 Mar 2024 04:54:32 UTC No. 16073497
>>16073171
Yeah, that misrepresentation was funny and upvote-worthy three years ago, before we built AI and it empirically proved the existence of the soul.
Anonymous at Thu, 14 Mar 2024 05:40:42 UTC No. 16073568
>>16072536
>completely handwaves away randomness
>this kills nondeterminism, free will, quantum woo and human reasoning
Nothing personal kid.
Anonymous at Thu, 14 Mar 2024 06:09:28 UTC No. 16073592
>>16073580
>compresses and re-encodes inputs in UTF-8 THREE times, only to calculate some "distance"
wtf?
Anonymous at Thu, 14 Mar 2024 06:29:48 UTC No. 16073606
>>16072536
>every neural network is equivalent to a lookup table
This dude wrote a whole essay on the inner workings of AI and has never heard of an adjacency matrix.
Anonymous at Thu, 14 Mar 2024 07:10:44 UTC No. 16073639
>>16073606
What's an adjacency matrix and how is it AI?
Anonymous at Thu, 14 Mar 2024 07:35:58 UTC No. 16073646
>>16073354
>how does a lookup table reason anon?
how does your brain reason
what is reasoning?
Anonymous at Thu, 14 Mar 2024 07:37:21 UTC No. 16073647
>>16073354
>>16072536
nobody is going to play ball with you unless you define what you mean by "to reason"
Anonymous at Thu, 14 Mar 2024 08:24:25 UTC No. 16073683
>>16072536
And who says humans can do "actual thinking"?
What if we are just biological stimulus-response machines, all our thinking and behaviour determined by the chemicals and electrical potentials zipping around in our neural network of synapses?
Anonymous at Thu, 14 Mar 2024 09:10:47 UTC No. 16073723
Transformers are Turing Complete.
https://www.jmlr.org/papers/volume2
A Turing machine can perform any computation.
Including human consciousness.
Since humans can reason, it follows that a Transformer network can reason too - in principle.
But can human consciousness be computed? Of course! At minimum, a computer could simply simulate the atoms of a human brain and compute the chemical reactions of the neurons firing.
So in summary, the person in OP's screenshot is a stupid retard.
Anonymous at Thu, 14 Mar 2024 09:19:44 UTC No. 16073744
>>16073488
>God is up in the sky
>Sits on a throne
>If he has a throne, it stands to reason that he has a table too
Makes sense to me
Anonymous at Thu, 14 Mar 2024 09:24:57 UTC No. 16073762
>>16072536
>neural network can be represented by a function
No shit.
Everything can be.
Anonymous at Thu, 14 Mar 2024 09:27:57 UTC No. 16073777
>>16073580
Training a neural network is an optimization problem.
Compression and optimization are equivalent.
Anonymous at Thu, 14 Mar 2024 11:12:05 UTC No. 16074064
>>16073321
What do you think the "A" stands for?
Anonymous at Thu, 14 Mar 2024 11:13:58 UTC No. 16074066
>>16073723
>At minimum, a computer could simply simulate the atoms of a human brain and compute the chemical reactions of the neurons firing.
It's not evident that a simulation of that detail is possible, or that it would generate a form of consciousness, a virtual awareness that could experience qualia.
Anonymous at Thu, 14 Mar 2024 12:34:14 UTC No. 16074446
>>16072536
>equivalent to a lookup table
You could say that about any finite discrete function.
Not every abstraction is useful. Anon from picrel abstracted away the most meaningful parts of the thing he was attempting to describe. He literally fell into the classic "human is a featherless biped" fallacy.
Anonymous at Thu, 14 Mar 2024 12:42:25 UTC No. 16074514
>>16072536
>>16072911
Sure, but also causality is just a lookup table for interactions between elementary particles.
Anonymous at Thu, 14 Mar 2024 14:38:38 UTC No. 16076515
>>16073683
we aren't any different in principle, biological machines just (currently) receive orders of magnitude more data through a litany of physical senses. language is hilariously less information dense than sight, smell, touch, sound, etc.
we also have a foundational instinct layer that evolved with the collective sensory data of hundreds of millions of years of past living experience. when you realize how big that data set actually is in raw bytes, it's astronomical. AI surpassing humans is an inevitability, because it's only a matter of time until we can scale the data set big enough to rival our own
Anonymous at Thu, 14 Mar 2024 17:41:04 UTC No. 16077295
>>16074446
Every neural network is finite and discrete. He's not abstracting anything
Anonymous at Thu, 14 Mar 2024 17:43:56 UTC No. 16077297
>>16072911
Consciousness is not a computation
Anonymous at Thu, 14 Mar 2024 17:45:07 UTC No. 16077300
>>16072536
His conclusions are immediately obvious and irrelevant. There are actual retards in the room with us now who believe a sufficiently complicated stack of punch cards = intelligence.
Anonymous at Thu, 14 Mar 2024 21:18:01 UTC No. 16077798
>>16077300
yes, this is correct. good job for making the same point as OP
Anonymous at Thu, 14 Mar 2024 21:29:08 UTC No. 16077828
This conversation stems from the flawed idea that we understand how human minds work. It is most likely this flaw which causes us to draw comparisons between the mind and 'AI'.
Anonymous at Thu, 14 Mar 2024 22:57:27 UTC No. 16078012
>ITT: Midwits drowning in bathwater
Anonymous at Fri, 15 Mar 2024 00:39:43 UTC No. 16078294
>>16078012
>refuses to refute the argument
>leaves the thread
Anonymous at Fri, 15 Mar 2024 00:49:35 UTC No. 16078308
post on /g/
Anonymous at Fri, 15 Mar 2024 00:51:31 UTC No. 16078312
>>16072982
>any function is a lookup table
not correct - any finite LUT is a sample of a hypothetical function. the entire LUT can be represented by that function not because the LUT is equivalent to the function but because the function can generate any sample portion of the LUT
thinking of it another way: you can't fully prove the relation between the LUT and the function from the LUT alone, but the function remains able to generate the LUT even if you don't have any sample of the LUT. if they were equivalent you'd be able to generate exactly one function from the LUT alone (you can't do this with finite LUTs because there are infinitely many solutions that just happen to intersect each other at the values in the LUT)
there's also the problem of systems for which LUTs can exist but no generating function does (e.g. random number sequences, especially finite ones) - there are infinitely many potential intersecting functions that will happily predict a sample outside of the LUT that doesn't exist, because there was no actual function generating the LUT
the Universal Approximation Theorem is about approximating the LUT, not emulating the function itself - that's why it's not the "Universal Emulation Theorem"
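a concrete toy of the "infinitely many intersecting solutions" point (python/numpy, sample points made up):
```python
import numpy as np

# finite LUT sampled from some unknown generator at integer points
xs = np.arange(5)               # 0, 1, 2, 3, 4
lut = xs ** 2                   # 0, 1, 4, 9, 16

f = lambda x: x ** 2                          # one candidate generator
g = lambda x: x ** 2 + 7 * np.sin(np.pi * x)  # another: agrees on every key

print(np.allclose(f(xs), lut), np.allclose(g(xs), lut))  # True True
# same table, different function off the keys - and x**2 + c*sin(pi*x)
# works for ANY c, so the LUT alone never pins down one function
print(f(2.5), g(2.5))  # 6.25 vs 13.25
```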
>>16072556
now if only people would realize 'outside' here means the 'new information' that regression has always provided to a dataset - it's not 'understanding' the function (see: infinitely many regression solutions)
>>16073639
a LUT for node connections
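e.g. (numpy, toy graph - entry (i, j) "looks up" the weight of the connection from node i to node j):
```python
import numpy as np

# weighted adjacency matrix for a 3-node graph; 0.0 means no edge
A = np.array([
    [0.0, 0.5, 0.0],
    [0.0, 0.0, 2.0],
    [1.5, 0.0, 0.0],
])

print(A[0, 1])  # look up the connection 0 -> 1: 0.5
# a dense NN layer is the same idea: W[i, j] connects input i to output j
```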
>>16072939
once artificial abstraction/mental models are cracked, the only barrier to AGI will be continuous learning. you need more than regression to emulate logic that can interpret the processes underlying the regression, and to get modal relations between mental models that aren't just 'adjacency probability in training output' - LLMs and FFNNs in general won't cut it (though i still think it's entirely possible to do).
well, unless self-direction is WAY harder than i think it will be
Anonymous at Fri, 15 Mar 2024 01:02:37 UTC No. 16078325
>>16078312
gradient descent has something known as catastrophic blowup. you should look into it. continuous learning is not possible with current tools
Anonymous at Fri, 15 Mar 2024 03:16:56 UTC No. 16078509
>>16078325
You mean catastrophic forgetting
Anonymous at Fri, 15 Mar 2024 03:19:43 UTC No. 16078513
https://arxiv.org/abs/2106.05181
Anonymous at Fri, 15 Mar 2024 04:40:56 UTC No. 16078602
>>16078509
there is that too but gradients can blow up and there is something known as the zero gradient problem which makes learning impossible. basically continuous learning is a non-starter with the current tools and techniques
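the vanishing flavor of it in one numpy toy (a deep chain of sigmoids with no skips - values illustrative):
```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# backprop through a deep chain of sigmoids multiplies their derivatives;
# sigmoid'(x) = s(x) * (1 - s(x)) <= 0.25, so the product shrinks fast
x, grad = 0.0, 1.0
for _ in range(50):
    s = sigmoid(x)
    grad *= s * (1.0 - s)   # chain rule: one factor per layer
    x = s

print(grad)  # ~1e-32: essentially no gradient signal reaches early layers
```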
Anonymous at Fri, 15 Mar 2024 04:53:32 UTC No. 16078618
>>16078602
Zero gradient problem was largely solved by skip connections, popularized by ResNet
This is 2015 research
Anonymous at Fri, 15 Mar 2024 04:58:06 UTC No. 16078624
>>16078618
it's not solved. you can still get into zero gradient zones even with skip connections. skip connections mitigate the problem but there is no guarantee you won't zero out the weights and make the skip connections useless
Anonymous at Fri, 15 Mar 2024 05:16:34 UTC No. 16078649
>>16078624
>zero out the weights
That's what normalization layers are for
Again, largely solved
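a minimal sketch of the two fixes together (PyTorch-style, pre-norm residual block; dimensions made up):
```python
import torch
import torch.nn as nn

class PreNormResidualBlock(nn.Module):
    """x + f(norm(x)): the identity path keeps gradients flowing even if
    f's weights go to zero; LayerNorm keeps activations sanely scaled."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # skip connection: the identity term contributes a gradient of 1
        return x + self.ff(self.norm(x))

block = PreNormResidualBlock()
y = block(torch.randn(8, 64))  # shape preserved: (8, 64)
```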
Anonymous at Fri, 15 Mar 2024 05:17:27 UTC No. 16078650
>>16078649
good luck with your continuous learning plan then. i'm sure it will work out
Anonymous at Fri, 15 Mar 2024 05:19:58 UTC No. 16078654
>>16078650
You're the only one who's gung-ho on continuous learning
I only said "zero gradient problem" and "zero out the weights" are largely solved problems
Anonymous at Fri, 15 Mar 2024 07:32:34 UTC No. 16078781
>>16072536
Every function is equivalent to a lookup table. Only a midwit would think that's profound.
Anonymous at Fri, 15 Mar 2024 20:17:09 UTC No. 16079818
>>16078781
so why is OpenAI worth $100B?
Anonymous at Fri, 15 Mar 2024 20:25:08 UTC No. 16079836
>>16079818
What a retarded non-sequitur. How does the value of OpenAI have anything to do with the mathematical statement that any deterministic function can be represented as a lookup table?
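The statement is trivial to demonstrate on any finite domain (python, toy function):
```python
# tabulate a deterministic function over its finite domain once...
f = lambda x: (x * x + 3) % 256
lut = {x: f(x) for x in range(256)}

# ...and the table now answers exactly like the function does
assert all(lut[x] == f(x) for x in range(256))
```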
Anonymous at Sat, 16 Mar 2024 03:19:31 UTC No. 16080492
>>16079836
even non-deterministic functions can be represented as lookup tables but the question still stands. why is a lookup table worth $100B?
Anonymous at Sat, 16 Mar 2024 03:23:27 UTC No. 16080497
>>16080492
>why is a lookup table worth $100B?
mostly a bet on what else they can achieve.
Anonymous at Sat, 16 Mar 2024 14:54:58 UTC No. 16081283
>>16073723
>Including human consciousness.
Except consciousness is NOT computation.
Anonymous at Sat, 16 Mar 2024 15:00:32 UTC No. 16081290
>>16073777
Unfortunately, you are wrong.
Compression and optimization are not equivalent, and you'd know that if you knew the basics of information theory.
Lossy compression does use a mean-square-error distortion criterion when the source is continuous, but if the source is already discrete/digital it's a different process entirely, built around minimizing mutual information subject to a distortion constraint (rate-distortion theory). It has an optimization step in there, but it isn't optimization itself, and that optimization is only one small part of the compression process.
Anonymous at Sat, 16 Mar 2024 20:56:37 UTC No. 16081906
>>16079818
>so why is OpenAI worth $100B?
That's air money, it doesn't really exist.
Anonymous at Sat, 16 Mar 2024 21:29:31 UTC No. 16081961
>>16080492
Because that lookup table is really useful, and it took a lot of expensive engineers and a lot of compute to create it, and investors think OpenAI is going to continue creating the world's best lookup tables and make a lot of money selling access to them.
Anonymous at Sat, 16 Mar 2024 21:43:36 UTC No. 16081992
>>16081283
unless it is.
Anonymous at Sat, 16 Mar 2024 21:43:48 UTC No. 16081994
>>16072536
Every neural network is built off discrete representations of continuous functions.
LUTs are literally just discrete representations of continuous functions.
I fucking hate computer scientists so God damn much. Neural networks are no different from ANY other classification or regression method: we are searching for optimal CONTINUOUS or DISCRETE functions (depending on inputs) that output results closest to our many-equations, few-unknowns problem.
There's nothing special about this, but they're not just look-up tables either.
Anyway, neural networks are memes outside classification problems, and only certain flavors of classification problems like image, language, etc.
The hot shit for regression has been ensemble models for a good minute now. But all the problems are literally, at the end of the day:
Y = f(X; p) + e
We just switch around the architecture of f and search for optimal p, and then characterize e.
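The whole recipe in one sketch (sklearn, synthetic data - here f is a gradient-boosted ensemble, p is its trees, e is the residual):
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
e = rng.normal(0, 0.1, size=500)                 # the noise term
Y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + e     # Y = f(X; p) + e

# pick an architecture for f and search for optimal p...
model = GradientBoostingRegressor(n_estimators=200).fit(X, Y)

# ...then characterize e via the residuals
resid = Y - model.predict(X)
print(resid.std())  # in-sample spread; held-out residuals estimate the true noise ~0.1
```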