🧵 Can AI models be taught to reason forward?
Anonymous at Fri, 14 Mar 2025 01:32:46 UTC No. 16618335
An AI model's dataset will always consist of existing information scraped from the web, so it's natural to conclude that AI is limited to what is already known.
That said, when we write proofs, are we not also beginning with existing knowledge alone? We manage to stretch that knowledge into a novel truth using logical rules, so shouldn't an AI model be equally capable of applying first-order logic?
Given the incompleteness theorems, we know that there may be truths that are algorithmically inaccessible to it, but still, why couldn't an AI generate a previously unknown truth? I'm asking sincerely.
Anonymous at Fri, 14 Mar 2025 01:34:22 UTC No. 16618337
>>16618335
I guess an LLM would be unable to properly express something never before stated? I'm trying to make sense of this.
t. OP
Anonymous at Sun, 16 Mar 2025 03:18:21 UTC No. 16620298
>>16618335
A human's dataset will always consist of existing information "scraped" from the five senses. Can you imagine or discover a sixth sense? How about a seventh or eighth? The human mind is naturally incapable of interfacing with magnetic fields, for example, so we have to learn about them through instrument measurements. Btw, as soon as AI can interface with the physical world directly the way we and other animals can, and can replicate physically, humans will be in bigger trouble than ever... but I digress.
>novel truth
Truth is discovered. It exists outside perception as a platonic form.
>AI ... first order logic.
It already can, by having an "understanding" of grammar and variables. There are three fundamental laws of logic; everything else is built upon that. It's just increasingly complex combinations after that. Keep in mind I'm a trialist (platonist), so others might argue that these rules of logic are themselves approximations of some form of objective truth. I don't think it really matters, because we can't test the laws of logic without applying the laws of logic...
You can go ask ChatGPT, right now, to perform algebra with grammar. You can give it pretty much any logical form you can think of (hypothetical syllogism, modus ponens, constructive dilemma, etc.) and it can perform them accurately. Being an LLM, it does this with language, which means it can in theory handle a practically infinite set of logical forms (I don't remember whether human language was ever proved able to represent an infinite number of ideas). I'm not sure if GPT or Grok or whatever else could do this practically, but in theory AI absolutely can.
t. /hist/orian (and armchair philosopher/logician), so people on /sci/ or /hist/ may have concrete answers for you. This is just my take.
Anonymous at Sun, 16 Mar 2025 03:23:23 UTC No. 16620300
>>16618335
>>16620298 (me)
To clarify:
You can make up a few words that have literally never existed, define them using a combination of English words (expressing an idea), then get GPT/Grok to operate on those newly defined constants using logical forms. In theory you could have the AI perform algebra where you feed in a series of uniquely defined ideas, then have it approximate an unknown as a made-up word in your hypothetical language.
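The mechanical rule application being described can be sketched in a few lines. This is a toy evaluator, not any real model; the invented words ("florp", "grindle", "zarnak") are illustrative placeholders for ideas the system has never seen before:

```python
def modus_ponens(implications, facts):
    """Derive new facts by modus ponens: from (p -> q) and p, conclude q.

    `implications` is a list of (premise, conclusion) pairs;
    `facts` is a set of statements taken as true.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in implications:
            if p in derived and q not in derived:
                derived.add(q)
                changed = True
    return derived

# Made-up words still work, because the rule operates on form, not meaning.
rules = [("florp", "grindle"), ("grindle", "zarnak")]
print(sorted(modus_ponens(rules, {"florp"})))  # ['florp', 'grindle', 'zarnak']
```

The point of the sketch: nothing in the derivation depends on the symbols having ever appeared in any dataset, which is the weak sense in which logical inference over novel terms is trivially mechanizable.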
Anonymous at Sun, 16 Mar 2025 03:26:39 UTC No. 16620305
You’re a retard.
That being said, there is nothing preventing us from eventually reconstructing a perfect human brain that can reason on its own. This is very different from an LLM.
Anonymous at Sun, 16 Mar 2025 03:30:26 UTC No. 16620307
>>16618335
AI can only mechanically rearrange its existing dataset (memory) into new forms, what Coleridge called 'fancy'. The secondary imagination requires genius (in the sense of inspiration) to create new data from outside of memory and experience/sense data, i.e. not merely rearranging existing data, but creating wholly new data from beyond the sensory world accessible to seeing, hearing, smell, touch, and taste: the secondary imagination sees a wholly different world with the mind's eye that is not at all present in its memory (dataset) or sense data (what it can search on the internet). Until AI can do this, it will always be limited to fancy, which, admittedly, much of everyday human endeavour is constrained to anyway.
Anonymous at Sun, 16 Mar 2025 03:34:43 UTC No. 16620311
>>16620307
How is an AI's dataset different from a human's? Both have fixed datasets (at a given time). A human has made inferences. It seems like "fancy" is the same as "primary" and "secondary", but the latter two are described using words of agency, which may not even exist.
Anonymous at Sun, 16 Mar 2025 03:37:21 UTC No. 16620313
>>16620307
>>16620311 me
Forgot to add:
Human "inspiration" might as well be random. It's as if a muse speaks an idea into one's mind. I have no idea where my ideas come from, because it feels like there's some randomness injected. Could injecting a hint of randomness into AI turn fancy into primary or secondary imagination?
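For what it's worth, a hint of randomness is already injected into LLM output via temperature sampling during decoding. A minimal sketch (the logits and values are illustrative, not from any real model):

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits softened by temperature.

    Low temperature -> near-deterministic (argmax-like);
    high temperature -> more random, flatter distribution.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]             # softmax
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):                 # inverse-CDF sampling
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# At very low temperature the highest-logit token (index 1) is chosen.
print(sample([1.0, 5.0, 2.0], temperature=0.01))  # 1
```

So the randomness is already there; the open question in this thread is whether noise on top of a learned distribution amounts to anything like inspiration, or just jittered fancy.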
Anonymous at Sun, 16 Mar 2025 03:41:33 UTC No. 16620315
>>16620298
Wrong and Ulro-brained. Knowledge comes from the mind, not from the senses or sense data, which the mind in any case produces from otherwise meaningless nerve impulses and glyphs. The question of AI is a question of what kind of mind AI has. Right now LLMs can only rearrange their datasets and look up new data from the internet. The human mind can produce new knowledge without any dataset.
As a supposed Platonist you should know that Platonic epistemology requires that the human mind already has knowledge of all things, in actuality and not merely in potentiality, and has simply forgotten this. The process of Platonic education is to unforget (anamnesis), by the genius of intuition, not dianoia, what we already know.
Anonymous at Sun, 16 Mar 2025 03:45:07 UTC No. 16620319
>>16620311
>Both have fixed datasets
No. The human mind has no fixed dataset. The mind is a productive power that constantly produces new data, with and without the senses.
The simple thought experiment that will confirm this to you is: where did the first human knowledge come from? How is any new data at all invented and brought into the scope of human knowledge if data can only be a rearrangement of memory? How does the dataset begin in your model?
The answer being that the mind productively creates new data, with and without sense data. Something AI can only do by rearrangement of existing memory (datasets).
Anonymous at Sun, 16 Mar 2025 04:06:23 UTC No. 16620353
>>16620313
It's not randomness. Randomness is smashing your hands into a keyboard: nothing coherent is produced by randomness. An example from biology is protein folding: proteins can fold in an extremely large number of ways, and only very few specific ways are of biological value; the overwhelming number result in the failure of the protein and ultimately the death of the organism. If proteins folded randomly, the result would be mass slaughter and extinction: randomness is noise generation, not signal generation. Inspiration, however, is the cohesive and intelligible creation of new worlds that exist beyond memory and sense data; they are not random or generated by randomness. You are right to say they come from 'outside' the sensory world, but inspiration is not the product of dice being rolled in this world.
Anonymous at Sun, 16 Mar 2025 04:13:34 UTC No. 16620365
>>16620319
>The mind is a productive power that constantly produces new data, with and without the senses.
Give an example. Can blind people conceive of color without ever seeing color?
Anonymous at Sun, 16 Mar 2025 04:14:49 UTC No. 16620368
>>16620365
Yes. Aren’t you stupid?
Anonymous at Sun, 16 Mar 2025 04:20:54 UTC No. 16620381
>>16620368
No, and you're dumber for making claims with no evidence.
Anonymous at Sun, 16 Mar 2025 04:40:28 UTC No. 16620400
>>16620381
Blind people can see colour, dummy. Countless trials have proved this.
Anonymous at Sun, 16 Mar 2025 05:00:56 UTC No. 16620421
>>16620365
Every new thing that has ever been produced. Go back in time X years and ask how the new things that existed in X+10 years came about.
The chimpanzee lives in the same world as you and I, and his senses produce more or less the same sense data as ours. Yet he has a different mind, and his mind doesn't create the same world from that same sense data as our minds can, nor can he imagine a new and different world not present in his sense data as our minds can. Everything we experience is a product of our mind, and that productive power of the mind is not limited to sense data or memory as its building blocks. The test for AI is: what kind of mind does it have? Right now AI is limited to rearranging memories and sense data, which is an important part of thinking, but very far from the whole of it.
Anonymous at Sun, 16 Mar 2025 07:04:03 UTC No. 16620524
>>16618335
> can a bloated if-else statement be taught to
No, no it can't.