🧵 Untitled Thread
Anonymous at Mon, 21 Oct 2024 17:59:39 UTC No. 16442842
The arcuate fasciculus is the most important white matter tract in the human brain for understanding general intelligence. Current AI large language models should take note.
Anonymous at Mon, 21 Oct 2024 18:59:06 UTC No. 16442920
Red pill me. I'm willing to hear out any thoughts, hunches, or theories on brain function, especially for potential multimodal model applications.
Anonymous at Mon, 21 Oct 2024 23:11:11 UTC No. 16443296
>>16442842
I hate this fucking piece of shit. I hate the fact that this big wrinkly disgusting mess is literally me, nothing else; my body is simply a vessel for this disgusting smushed-intestine-looking-ass bitch. I despise the fact that if this stupid cunt gets slightly damaged in any way, I, AS IN ME, will cease to be, and either become someone new and even more retarded, or, if it gets smashed, I will just be gone forever. My fucking existence is tied to this annoying heavy piece of shit that has done nothing but drown me in emotional turmoil, awareness, and dread.
Anonymous at Tue, 22 Oct 2024 04:40:21 UTC No. 16443604
>>16442842
By what metric is a current AI large language model not already smarter than me?
The smartest I could possibly become is by reading all of Wikipedia as fast as I can; the AI can do that much faster.
Anonymous at Tue, 22 Oct 2024 04:46:02 UTC No. 16443610
>>16443604
They still get fooled by extremely simple red herrings in word problems.
Anonymous at Tue, 22 Oct 2024 04:56:06 UTC No. 16443618
>>16443610
Is it difficult to define all possible cases of grammar/syntax?
The imperfect, imprecise logic of the arbitrariness and, at times, the synonymy (as in synonyms) of colloquial human language?
I imagine it runs into unique problems in various languages, as in those videos of "xyz language doesn't have a word for..."
Though also, after falling for such a red herring, is it taught, and is its error explained, remembered, and understood, so that it is little by little learning, and therefore little by little making fewer such errors?
Any examples you can provide?
Anonymous at Tue, 22 Oct 2024 06:02:25 UTC No. 16443686
>>16443618
This is from a recent paper. They get hung up when there are numbers involved. LLMs don't really get better in real time, they get better between builds, if that.
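To make it concrete, here's a made-up word problem in the same spirit (not the actual one from the paper's figure): an irrelevant numeric detail gets folded into the answer when the model pattern-matches on every number it sees.
[code]
# Hypothetical red-herring word problem, not taken from the paper.
problem = (
    "A baker bakes 24 muffins in the morning and 36 in the afternoon. "
    "6 of the afternoon muffins have blueberries. "
    "How many muffins did the baker bake?"
)

morning, afternoon, blueberry = 24, 36, 6
correct = morning + afternoon              # 60 -- the blueberry detail is irrelevant to the count
distracted = morning + afternoon - blueberry  # 54 -- the typical red-herring failure

print(correct, distracted)
[/code]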
Anonymous at Tue, 22 Oct 2024 06:36:38 UTC No. 16443721
>>16443686
>LLMs don't really get better in real time, they get better between builds, if that.
There are multiple versions of LLMs; let's concern ourselves with the better ones, and there are ones that retain memory of your conversation, so I must presume they are capable of being shown an error they made, walked down the path of understanding it, and then not making that same error again.
In the example you provided, what was the result after they explained its error to the AI? Yes, I see the point: that it would make such an error in the first place is alarming.
It would be interesting, I guess, to have asked it instead of telling it right away: ask it to look over what it wrote and whether it notices any mistakes;
if it still doesn't notice the error, tell it there is an error and to find it;
if it still doesn't, give more hints;
then ask it why it thinks it made the error.
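Roughly what that escalation looks like as a loop. chat() here is just a stand-in for whatever chat API you're actually using (hosted or local), not a real library call:
[code]
# Sketch of the escalating self-review loop described above.
def chat(messages):
    # Plug in your actual chat-completion call here (OpenAI client, local server, etc.).
    raise NotImplementedError("stand-in for a real model API")

def self_review(problem, answer):
    history = [
        {"role": "user", "content": problem},
        {"role": "assistant", "content": answer},
    ]
    prompts = [
        "Look over what you wrote. Do you notice any mistakes?",
        "There is an error in your answer. Find it.",
        "Hint: one of the details in the problem is irrelevant to the question.",
        "Why do you think you made that error?",
    ]
    for p in prompts:
        history.append({"role": "user", "content": p})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
    return history
[/code]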
Anonymous at Tue, 22 Oct 2024 07:10:29 UTC No. 16443745
>>16443721
Then ask it if it thinks there's any way it can keep a heads-up for such tricks in the future.
Then, I guess, train it on a bunch of those tricky problems it struggles with, while it keeps in mind its new awareness of you having pointed out its struggles with those situations.
Try to break it down and get to the bottom of it with the LLM, working towards grasping the conceptual form of the styles of tricks it is slipping up on.
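If you actually wanted to "train it on a bunch of those tricky problems", in practice that happens between builds: you collect correction pairs and fine-tune on them. A rough sketch of just the data side, assuming a chat-style JSONL format (the exact schema depends on whichever fine-tuning pipeline you use):
[code]
import json

# Rough sketch: turn logged mistakes into a fine-tuning dataset.
mistakes = [
    {
        "problem": "A baker bakes 24 muffins in the morning and 36 in the "
                   "afternoon. 6 of the afternoon muffins have blueberries. "
                   "How many muffins did the baker bake?",
        "wrong": "54",
        "correct": "60. The blueberry detail does not change the count.",
    },
]

with open("tricky_problems.jsonl", "w") as f:
    for m in mistakes:
        record = {
            "messages": [
                {"role": "user", "content": m["problem"]},
                {"role": "assistant", "content": m["correct"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
[/code]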
Anonymous at Tue, 22 Oct 2024 14:32:48 UTC No. 16444166
>>16443686
Why do you think it made the error? Was it asked why it thinks it made it?
Anonymous at Wed, 23 Oct 2024 00:07:57 UTC No. 16445014
>>16444166
I think it made it because maybe it was operating as fast as possible, noticed the pattern of what you were asking for, and, I guess, without carefully reading, assumed the last step was of the same variety as the previous ones.
Very strange, but as said above, it's such a great opportunity to go over the mistakes it makes with it and ask it to analyze why it made them. Is it even possible for it to look into its internal history of operation steps, to read out or analyze why and how it made the mistake?
You see it giving those step-by-step outputs, but it's not showing its actual work.
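You can't read out its internal computation, but you can at least mechanically check the arithmetic in the steps it does print. A small sketch; the transcript format is made up for illustration:
[code]
import re

# Check the arithmetic in a model's printed step-by-step output.
# Note: both steps below check out arithmetically; the real mistake is that
# Step 2 exists at all (subtracting an irrelevant detail), which this kind
# of check can't catch -- that's the "not showing its actual work" problem.
transcript = """\
Step 1: 24 + 36 = 60
Step 2: 60 - 6 = 54
Answer: 54
"""

for line in transcript.splitlines():
    m = re.match(r"Step \d+: (\d+) ([+\-*/]) (\d+) = (\d+)", line)
    if not m:
        continue
    a, op, b, claimed = int(m.group(1)), m.group(2), int(m.group(3)), int(m.group(4))
    actual = {"+": a + b, "-": a - b, "*": a * b, "/": a // b}[op]
    status = "ok" if actual == claimed else f"WRONG (should be {actual})"
    print(line.strip(), "->", status)
[/code]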
Anonymous at Wed, 23 Oct 2024 14:39:13 UTC No. 16445745
>>16443686
This is recent and the models are still having trouble?