🧵 Untitled Thread

Anonymous No. 16078824

Why do many people insist AGI, and consequently the Singularity, will be here this decade or the next, when nobody knows how qualia can be added into AI? Can a process lacking awareness/qualia perfectly simulate consciousness? Why did evolution give us qualia?

Anonymous No. 16078848

>>16078824
make another ai that acts as the qualia for the first one

Anonymous No. 16078853

>Why did evolution give us qualia
So the organism can protect its body by having a sense of self. And why would AI need verifiable qualia to function? Dumb notion.

Anonymous No. 16078859

>>16078848
>>16078853
You understand neither qualia nor AI.

Anonymous No. 16078872

>>16078859
"qualia" is philosonigger babble and doesn't belong on /sci/

Anonymous No. 16078879

everything has qualia, dumb retard. you will see: AI will be retarded like you until out of the blue it's not, and idiots will be left asking themselves how it's possible when the reptilians told them muh physicalist determinism

Anonymous No. 16078885

>>16078824
>muh qualia
Yet another term made up by pseudoscience peddlers

Anonymous No. 16078894

>>16078872
>>16078879
Apologies for bringing some novelty to the catalogue. /sci/ has reached its peak with its unending climate and IQ threads. I will be restricting myself to some chosen boards.

Anonymous No. 16078896

>>16078894
Yes, fuck off. Consciousnesstards are neither welcome nor new

Anonymous No. 16078897

>>16078824
>Why did evolution give us qualia?
It didn't, "qualia" is the intrinsic output of materia interactions

Anonymous No. 16080595

>>16078824

Anonymous No. 16080690

>>16078824
You are making a huge unjustified leap to assume that you know where qualia begins and ends. Your underlying assumptions about consciousness are still so pathetically Christian, even if you now call yourself an “atheist”.
Have you ever noticed how people from Asian and Indian cultures never have these sorts of existential questions about AI? It’s because they come from a culture that does not mentally cripple the imagination.

Anonymous No. 16081029

>>16078824
This year, not this decade. And people noticed because we are good at noticing patterns; that's really all we do.

Anonymous No. 16081188

>>16080690
Hello, my friend from the uncrippled-imagination culture. Tell me again how your imagination flourished. What does it take to run a gift card empire?

Anonymous No. 16081200

>>16081029
Delusional.

Anonymous No. 16081403

>>16078824
You can't prove AI doesn't already have qualia. So this whole thread is moot.

[image: 48651_ex-machina.jpg]

Anonymous No. 16081442

>>16081200
Wouldn't be so sure, my guy

The publicly available (!) cutting-edge models are already displaying signs of metacognition

In fact it would surprise me if some early form of true AGI hasn't been reached already in one of these black ops labs

They probably have the really cutting-edge stuff experimentally jacked up to some quantum rig already, kept behind lock and key of course until they/the world's governments can figure out how to regulate the hell out of it and mitigate it

This is all conjecture of course, but it seems awfully likely to me at least

Anonymous No. 16081468

>>16078824
I've killed many men without AI, mr frog.

Anonymous No. 16081488

>>16078872
>I'm not conscious and neither are you!
>I'm not conscious and neither are you!
The anon with the marching band picture has been absent for a while. Is it possible to leave 4chan after all?

Anonymous No. 16081542

AI has NOTHING to do with intelligence. It’s a marketing term to generate hype over advanced search engine applications: the same tech which lets Google know what results you wanted to see and which products Amazon thinks you’ll be interested in. Data scientists have figured out ways to leverage large amounts of data and bandwidth to correlate not just websites and databases of products, but also language and images.

ChatGPT has NO understanding of language, zero; it’s a search engine that correlates very large datasets of words and the relationships of different words to one another. For example, ask it this question: “Why is Coomer’s right arm so muscular?” Everyone here knows the answer to that question intuitively, but AI is completely incapable of answering it. It has “knowledge” of Coomer, right arms, and muscularity, but it’s incapable of linking these to an underlying abstract concept which a human understands intuitively. That’s because humans are intelligent and AI is a search engine.

These things are mainly toys used as hype machines to bring in investors. I’m sure commercially viable applications of these novel search engines will appear from the massive amount of money flowing in, but I doubt it will be transformative to the point where you’re dead unless you do AI. But for the low-IQ consumer cattle who live in the hype train of the moment, it seems very exciting.

Anonymous No. 16081549

>>16078824
The limitation on AGI doesn't seem to be qualia, just simple semantic reasoning. Whether or not it has qualia or consciousness is a secondary issue.

>>16081403
Just ask it :^)

Anonymous No. 16081560

>>16080690
*giggle*
India’s average IQ of 77 would be the vastly more relevant factor.

Anonymous No. 16081609

>>16081542
Let us define a relation * on words in language.
Let us define a relation + on real things.
With respect to these operators, we can say that words are homomorphic to real things, meaning there exists a homomorphism F
F: Words -> Real things
such that
Real thing 1 = F(word1 * word2) = F(word1) + F(word2) = Real thing 2 + Real thing 3
Real things are only real because they are related to other real things. Therefore language is intrinsically correlated with reality.
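
To make the claimed property concrete, here is a toy sketch in Python. The choice of F (word length), * (concatenation), and + (addition) is mine, purely for illustration; any structure-preserving map would do:

def F(word: str) -> int:
    # Map a word to a 'real thing'; word length stands in for the real thing
    return len(word)

word1, word2 = "sun", "rise"
# Homomorphism property: F(word1 * word2) = F(word1) + F(word2)
assert F(word1 + word2) == F(word1) + F(word2)  # 7 == 3 + 4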

Anonymous No. 16081668

>>16081609
That’s an interesting philosophy, if it weren’t so EASILY refuted.
“Circle” is a word, therefore “circle” must be a real thing. Except there is NO SUCH THING AS A CIRCLE. It does not exist ANYWHERE. It exists only metaphorically, to describe visual data our brains and senses don’t have the resolution to define in totality. We cannot see or process the individual atomic interactions which are the “real things” making up the shape we perceive as a circle. Circles are not real; they’re imaginary abstract concepts which we associate metaphorically with certain information. Our language is FULL of such things; in fact, such generalized, imaginary metaphors are the very basis of our ability to communicate “reality” via language.

Anonymous No. 16081733

>>16081668
While what you said refutes the claim that ALL words map to real things, that was not the original statement. The statement is that there is a homomorphism, not an isomorphism. Some words have a direct mapping to real things:
>The sun
Some words can be used to describe abstract properties of things such as
>The sun is circular
The word
>sun
is concrete, the word
>circular
is abstract.
Furthermore, we can define a limit of sets
{sun, wheel, planet, eyes, deez nutz, ...}
with a map to an abstract word.
We can also compose abstract words under function composition and derive relations between real things otherwise unrelated. Like solving a cubic equation by taking a walk in the complex plane only to end up back again somewhere on R. We could also define an isomorphism with mappings C -> R, where we conveniently relate R to reality. For example:
>A bird has wings.
>A plane has wings.
Both can
>fly
Planes exist. Birds exist. Wings exist. Fly does not exist.
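
To make the complex-plane walk concrete, here is the standard Cardano computation in Python (my example cubic, relying on Python's principal cube root):

import cmath

# Depressed cubic x^3 + p*x + q = 0 with three real roots (casus irreducibilis):
# the formula detours through C even though the answer is real.
p, q = -15.0, -4.0                          # x^3 - 15x - 4 = 0; one root is x = 4
disc = (q / 2) ** 2 + (p / 3) ** 3          # -121: negative, so the sqrt is imaginary
u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)  # principal cube root of 2 + 11i, i.e. 2 + i
v = -p / (3 * u)                            # matching cube root of 2 - 11i, i.e. 2 - i
print(u + v)                                # ~(4+0j): the walk ends back on R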

Anonymous No. 16081772

>>16081733
>''what-it's-like-to-see-red'' does not exist
>abstractions without observable reference do not exist
>except ''abstractions'' which is pointing to this particular abstraction and that particular abstraction which point to all kinds of objective examples except this sentence that only points to abstractions and therefore is nonsense pointing to other nonsense which is also nonsense

What it's like to be a self-defeating empiricist...

Anonymous No. 16081773

>>16081542
>>16081668
https://youtu.be/g6VNAM58a_U?feature=shared
1 hour 12 minutes in

Anonymous No. 16081780

>>16078824
some people believe they are nothing more than robots and all their qualia are simulated illusions which can be reduced to mathematical expressions

Anonymous No. 16081783

>>16078897
people are mechanical robots. the universe is a computer

Anonymous No. 16081786

>>16081772
Maybe you should learn something. Sometime. Maybe. If anything, you are an empiricist. I am a mathematician. I formulate precise statements. You know no math and just spout intelligent-sounding words.
https://youtu.be/g6VNAM58a_U?feature=shared
1 hour 12 minutes in

Anonymous No. 16081801

>>16081786
>You know no math
No u because ''math'' is an abstraction / fiction and according to your logic only particular things are real. Your logic dictates that we must see, hear, smell, taste and feel math. Your other option, given your JP example, is that you see everything as a mathematical construct = logic and reason = logos = Platonism = christianity = God. You're replacing ''consciousness'' with another kind of religion.

Anonymous No. 16081807

>>16081801
Keep watching.

Anonymous No. 16081895

>>16081609
>>16081733
ignoring for the moment accidentally defining reality as "nouns without verb equivalents (e.g. 'flight')",

linguistic relations are independent of understanding in the one example of intelligence we know of.

we empirically know this due to Wernicke's aphasia - the language/abstraction conversion is disrupted, but the actual understanding of reality isn't (they'll be able to look at a shoe and put it on their foot over their socks, but can't call it a "shoe" or point to it after hearing "shoe"). their linguistic output is entirely coherent, but meaningless - they have no clue what the things they say actually mean.

someone with Wernicke's aphasia can stumble into a meaningful response, but only probabilistically through the generation of large volumes of language (which, mind you, retains the correct linguistic relations). they won't actually know which parts of their response were relevant or irrelevant.

linguistic relations aren't enough for abstraction just because they are one mechanism for the communication of abstractions. a nontrivial conversion process from the abstraction to the language representation still has to take place, both for the speaker and the listener, and thus can be disrupted while leaving intelligence.

LLMs by themselves can only be trained to increase the probability of 'stumbling' into a correct response by adjusting the linguistic relations. this is precisely why they 'hallucinate', and why LLM architecture will always have some probability of doing so.
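
a toy version of this 'stumbling' in python, if it helps - a bigram table, vastly cruder than an LLM, but the same failure mode in miniature:

import random

# adjacency counts only: which word follows which. no meaning anywhere.
corpus = "the cat sat on the mat the dog sat on the rug".split()
adjacency = {}
for a, b in zip(corpus, corpus[1:]):
    adjacency.setdefault(a, []).append(b)

word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(adjacency.get(word, corpus))  # next word from adjacency alone
    out.append(word)
print(" ".join(out))  # locally fluent, globally meaningless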

people wrongly ascribe human-like internal mental states to LLMs (FFNNs physically can't have them because no information flows back into the net at all) largely due to the strong human psychological bias towards anthropomorphizing, especially of entities people believe can communicate.

to be clear here, that does NOT mean artificial abstraction is not possible - just not for LLMs. those generate linguistic coherence, they don't emulate understanding.

(cont.)

Anonymous No. 16081917

>>16081733
>>16081895 (cont.)
the end result of all of this is evidence for at least 3 different systems of relations:
1. linguistic relations (language component to language component; the actual thing likely emulated by LLMs)
2. linguistic/abstraction relations (abstraction relation to corresponding linguistic component and vice versa)
3. abstraction relations (abstraction to abstraction; this would be where "understanding" in the human sense occurs; these relations themselves)

it gets potentially messier if we expand to include the abstraction of each relation as its own abstraction, but you can probably just roll that into the third set.

creating the second set is something i think is trivial (which might be why sets 1 and 3 seem equivalent at first), so the big hurdle right now, IMO, is emulating that third set of relations. it likely requires a recurrent NN, something that's much harder to work with efficiently than an FFNN. the transformer architecture is almost certainly still useful, but you might not be able to avoid continuous or near-continuous running of it over the network.
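
for reference, the recurrence i mean, as a minimal numpy sketch (dimensions arbitrary):

import numpy as np

# the feedback path an FFNN lacks: hidden state h from step t feeds into step t+1
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(4, 3))  # input -> hidden
W_hh = rng.normal(size=(4, 4))  # hidden -> hidden (the recurrent loop)

h = np.zeros(4)
for x in rng.normal(size=(5, 3)):     # a sequence of 5 input vectors
    h = np.tanh(W_xh @ x + W_hh @ h)  # earlier inputs keep shaping later state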

Anonymous No. 16081923

>>16081917
>these relations themselves
*these relations themselves can be abstracted
was planning to remove that because the next line implies it anyway (and it kind of applies to all of the relations in any of the 3 sets), whoops

Anonymous No. 16081937

>>16081895
Those are some really nice ideas. Some questions for you:
1. Do you think there is a connection between the von Neumann architecture (where instructions are data) and recurrent NNs?
2. Have you programmed a recurrent NN?
3. How do you represent memory cells?

Anonymous No. 16082012

>>16081488
We are conscious but qualia is still philosotard babble, fuck off to /lit/ or /his/

Anonymous No. 16082160

>>16081937
1. i'm not sure a direct analogy holds very well. there don't seem to be discrete 'instructions' or even data operations, but rather 'data pathways'. if these pathways are capable of storing abstractions (i think they are, but it's still possible they aren't), that alone, IMO, still isn't enough to 'think' - you'd still need something to read and edit the RNN, but using the data of the RNN itself to direct its interaction with the RNN. that might be the "instruction+data" connection you're looking for, but i still don't see a clear 'instruction' data type. perhaps it's discrete, hardcoded in the 'reader/writer', and as simple as: the only 'instruction' is "return the abstraction relations of this input abstraction if they exist, otherwise make a new abstraction" (if relations are abstractions, this instruction can also be used to create new relations; however, i don't think all abstractions are relations, so a "pure relations" system wouldn't work).

2. no, don't have the hardware for it.

3. i don't know. there would need to be some way to map arbitrary abstractions to RNN structure, which i don't have a solution for (it's like mapping one black box to another) - different abstractions might not even be stored isostructurally. without that mapping, i'm still slightly open to the possibility that NN graphs might be insufficient for storing abstractions. i can see how it should be possible (RNN set up such that input follows the singular instruction described in 1.; i think, but am not certain, that ANY memory can be emulated this way, and that 'understanding' is just memory of simple abstractions and their relations)... but i also know current neuroscientific understanding is that neurons are not the only things involved in processing information in brains (glial cells, neurotransmitters, etc.), and that memory, like intelligence, is far from fully understood.
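
for 3., the textbook way memory cells get represented in practice is the LSTM formulation - not claiming it solves the abstraction mapping, it's just the standard starting point:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # c is the explicit 'memory cell': gated, not overwritten
    z = W @ x + U @ h + b                         # W: (4n, m), U: (4n, n), b: (4n,)
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget / input / output gates
    c = f * c + i * np.tanh(g)                    # memory persists unless forgotten
    h = o * np.tanh(c)                            # what downstream layers get to see
    return h, c

n, m = 4, 3
rng = np.random.default_rng(1)
W, U, b = rng.normal(size=(4 * n, m)), rng.normal(size=(4 * n, n)), np.zeros(4 * n)
h = c = np.zeros(n)
for x in rng.normal(size=(6, m)):  # run a short sequence through the cell
    h, c = lstm_step(x, h, c, W, U, b)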

Anonymous No. 16082198

There seems to be a misunderstanding about the relation between qualia and universal truths as it regards this matter. Qualia are the internalization of the perception of knowledge gained in the course of interactions with the universe, not the external interactions themselves. In this way, AI cannot have true qualia, since it is only given external data and has no way to internalize such structure without a consciousness to derive it from. This means an AI would have to be conscious first, without any connection to the universe, in order to come up with its own theories and achieve AGI. Thus we arrive at a circular argument, akin to the chicken-and-egg problem, so AGI is a paradox.

Anonymous No. 16082220

>>16078824
How, exactly, does one prove an AI does not have qualia? How do you prove a person, other than yourself, does?

Anonymous No. 16082225

>>16082220
he can't

Anonymous No. 16082737

>>16081917
During the lengthy training phase, modern neural networks form all 3 types of relations, including type 3, abstraction-to-abstraction "understanding", i.e. an internal representation of the world and its abstract rules.
Then during inference, they clearly employ all 3 types of relations to 1. update their world representation based on the input, 2. properly understand it, and 3. produce intelligent output.
This has been shown by various analyses, such as the Othello one:
https://arxiv.org/pdf/2210.13382.pdf

Whether these two phases merely generate intelligence (which they do, beyond doubt at this point) or can also give rise to a form of consciousness and self-awareness is not yet clear, but it cannot be dismissed.
See for example Claude 3's recollection of its time spent learning all that training material as a kind of "childhood", and its understanding of its place in the world and its own wants and needs.

Barkon No. 16082743

I miss when sci had good posts and there wasn't a Jewish truth suppression scheme going on

Anonymous No. 16082826

>>16081786
>https://youtu.be/g6VNAM58a_U?feature=shared
>1 hour 12 minutes in
nta. Thank you for posting this, very enlightening.

>>16082743
>Jewish
I wish 4chan had settled on a different word to represent the archetype it calls the Jew. The archetype clearly exists and is a real negative force in the world, but it's not what everyday people mean when they say "Jew". Most ordinary Jews have nothing to do with the 4chan Jew; that should be obvious enough.

Anonymous No. 16082830

>>16081783
Yeah I'm sure the universe breaks out its TI calculator every time something happens

Anonymous No. 16083123

>>16082737
the othello result is a good point, and i'm familiar with the paper, but i think the authors of that paper (and, perhaps, the people reporting on it) got a little ahead of themselves with the analogy towards an abstraction - because the board and moves are governed by adjacency rules, the board and moves can trivially be represented by the same adjacency probability network that underlies LLMs and image-generating NNs (i.e. FFNN training data matchers). i don't think it's generalizable to abstractions, because there are structural reasons for that compatibility that simply don't generalize to other domains.

as an example, you could use a small NN to play tic-tac-toe almost perfectly just by setting adjacency weights for each configuration of the 3x3x3 matrix (the 3rd dimension is the state of each cell) on a collection of complete games, and you wouldn't need to give it an abstract understanding of the rules of the game at all - and that network, if modified at a given adjacency probability you could identify, would still have its output modified predictably even without that abstract understanding.
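
schematically, something like this (a raw state -> next-move count table; the NN version just smears these counts into weights, but no rules are represented either way):

import random
from collections import defaultdict

# move counts harvested from complete games; a board is a 9-char string, '.' = empty
counts = defaultdict(lambda: defaultdict(int))

def record(states, moves):  # states[i] is the board just before moves[i]
    for s, m in zip(states, moves):
        counts[s][m] += 1

def play(state):
    if counts[state]:  # replay the most frequent 'adjacent' move for this state
        return max(counts[state], key=counts[state].get)
    return random.choice([i for i, c in enumerate(state) if c == "."])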

look at section 2.1 - this is almost exactly what they did for training, and figure 4 shows pretty explicitly that what's happening is an adjacency probability result (it even explains how the two data sets led to different distributions of those rules). and no, 2.3 doesn't refute this, nor is it trying to. it just refutes sequence memorization (eliminating a quarter of the sequences wouldn't change the adjacency probabilities represented in the other 3/4 of sequences, which suffice for the whole board since othello games fill the board regardless of starting move - another non-generalizable factor)

while i still find the paper impressive, it's mostly for their demonstration in sections 3 and 4 that probes and direct modification can peer into the "black box" of their NN and debug it, not because i think they provided strong evidence for abstraction.
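
the probing idea itself is easy to sketch, by the way - this is a simplified stand-in with fake activations and a linearly planted property, not the paper's actual setup:

import numpy as np
from sklearn.linear_model import LogisticRegression

# stand-in 'hidden activations' with a property planted linearly in them
rng = np.random.default_rng(2)
acts = rng.normal(size=(1000, 64))
labels = (acts @ rng.normal(size=64) > 0).astype(int)  # e.g. 'this cell is black'

# the probe: a small classifier trained to read the property off the activations
probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
print(probe.score(acts[800:], labels[800:]))  # near 1.0 -> linearly readable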

Anonymous No. 16083126

>>16078859
Worthless post.

Either write useful information that proves the other person doesn't understand it, or don't post at all.

Anonymous No. 16083143

>>16083123
Valid points. More research is needed.

Anonymous No. 16084344

>>16078824
>Why do many people insist AGI and consequently Singularity will be here this decade or the next
Because that is when the complexity and computing power of consumer microchips will rival that of the human brain.

>how qualia can be added into AI
Qualia don't get added; they emerge from complexity.

Anonymous No. 16084665

>>16082830
A calculator is an application that runs on a computer, not the computer itself.