🧵 Untitled Thread
Anonymous at Wed, 19 Feb 2025 11:47:52 UTC No. 16591341
will we see LLMs start proving theorems people failed to prove?
Anonymous at Wed, 19 Feb 2025 11:56:24 UTC No. 16591348
>>16591341
Given the replication crisis, it seems reasonable to assume that some LLM-generated trash will end up accepted as valid. The odds of it actually coming up with something that significant, valid, and novel are close to zero.
Anonymous at Wed, 19 Feb 2025 12:03:35 UTC No. 16591354
>>16591348
>replication crisis
replication crisis in mathematics?
Anonymous at Wed, 19 Feb 2025 13:21:36 UTC No. 16591393
No, because AI can only regurgitate things that people have already come up with. But you'll certainly see attempts at "proofs" that the layman will believe are valid, and that will turn out great for marketing.
>>16591348
Good point.
Anonymous at Wed, 19 Feb 2025 14:35:15 UTC No. 16591443
>>16591341
LLMs as of today, no.
The tech is evolving rapidly though.
Anonymous at Wed, 19 Feb 2025 14:37:14 UTC No. 16591444
>>16591443
ye no joke, for the first time I started asking my local LLM shit instead of looking it up online. works for a lot of cases, it's actually weird. but that clearly depends on the use case.
Anonymous at Wed, 19 Feb 2025 15:31:20 UTC No. 16591475
>>16591341
Yes. Not right now, the tech isn't there yet. If the tech continues at the current rate of progress, which it probably will (billions and billions and billions pouring into compute, data creation, algorithmic efficiency, etc.), then we'll have an AI that can prove open conjectures within five years.
Anonymous at Wed, 19 Feb 2025 15:44:42 UTC No. 16591486
>>16591341
Never. First, that's outside the purview of LLMs. And at a deeper level, even if some higher form of AGI came up with a novel technique to prove something like RH, you'd have no way of knowing whether the proof was legit or just more AI-hallucinated nonsense, meaning it would prove nothing to anyone either way. The only thing AGI might be able to do is produce a more streamlined version of a proof that is already widely accepted.
Anonymous at Wed, 19 Feb 2025 15:47:55 UTC No. 16591488
>>16591486
everything is a hallucination. what matters is whether it's useful or not lol. einstein hallucinated his thing, but you're calling it "muh gedankenexperiment".
humans use known facts to bruteforce unknown ones. it's all irrelevant if it's useless; the way you reached it is irrelevant if it's useful. simple as.
Anonymous at Wed, 19 Feb 2025 15:57:52 UTC No. 16591506
>>16591488
Fully granting that perspective, a hallucination about "neighborhoods of infinity" or some analogous, idiosyncratic thing that can't be tested by prediction is useful to no one except the person who had fun hallucinating it.
Anonymous at Thu, 20 Feb 2025 14:26:40 UTC No. 16592747
>>16591443
No it's not. It's just as retarded as it was in 2022.
Anonymous at Thu, 20 Feb 2025 14:33:14 UTC No. 16592758
>>16591348
>The odds of it actually coming up with something that significant, valid, and novel are close to zero because...because...uhhh...it just can't! it's not a real intelligence!! It's not sentient!! IT DOESN'T HAVE SOVL!!!
Anonymous at Thu, 20 Feb 2025 14:36:32 UTC No. 16592765
>>16592758
It doesn't need a soul. It does need comprehension. Just because you're easily fooled by increasingly clever party tricks doesn't magically make LLMs into mathematicians.
Anonymous at Thu, 20 Feb 2025 15:13:08 UTC No. 16592798
>>16591354
Turns out that 2 has been trademarked the entire time. Standards and Compliance Department is in chaos too after the firings.
Anonymous at Thu, 20 Feb 2025 18:05:38 UTC No. 16593233
>>16591348
look up Lean
Anonymous at Thu, 20 Feb 2025 18:49:39 UTC No. 16593432
>>16592765
It does comprehend/understand now in a real, meaningful sense, though mostly a diminished, philosophical one. Obviously it doesn't understand in the same sense that you and I understand. I don't know if "does AI understand?" is even a productive question; it might never truly understand, just as a calculator doesn't understand why two and two make four, but, like a calculator, it might not have to.
Anonymous at Thu, 20 Feb 2025 19:23:56 UTC No. 16593554
LLMs should be banned and LLMs hosted on datacenters in other countries should be dronestriked
Anonymous at Thu, 20 Feb 2025 19:28:16 UTC No. 16593570
>>16593432
LLMs are at most a saved state that's constantly resumed. their internals run at a different rate, they don't integrate new info/experiences, they just get constantly resumed from the saved state. it's...something, but nothing close to human experience, by any means. not saying it can't be...something. it's just not human in the internal-experience sense.
humans have no choice but to experience, constantly, whenever we're not asleep, with a fuckton of data coming in through various senses, non-stop, which contributes to our sense of experience.
an LLM has no spine, no limbs, no audio/olfactory/visual senses feeding it non-stop information, even boring information, like watching paint dry on a wall.
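toy illustration of the "saved state that's constantly resumed" point, in Python; frozen_model is a placeholder I made up, purely illustrative:

```python
# Toy sketch of "a saved state that's constantly resumed".
# frozen_model stands in for an LLM whose weights never change at inference time;
# every reply is recomputed from the full transcript, nothing is "lived through".
def frozen_model(transcript: str) -> str:
    # placeholder for a real LLM call; the weights behind it stay fixed
    return f"(reply conditioned on {len(transcript)} characters of context)"

transcript = ""
for user_msg in ["hi", "what did I say earlier?", "and before that?"]:
    transcript += f"User: {user_msg}\n"
    reply = frozen_model(transcript)   # resumed from scratch on the saved text each turn
    transcript += f"Model: {reply}\n"
    print(reply)

# the only "memory" is the transcript string we keep re-feeding;
# the model itself integrates nothing between calls
```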
Anonymous at Thu, 20 Feb 2025 19:48:41 UTC No. 16593621
>>16591348
The replication crisis isn't due to computers. It's due to DEI hires and DEI/nonsensical research being accepted to serve a DEI agenda.
Anonymous at Thu, 20 Feb 2025 20:20:06 UTC No. 16593696
>>16591341
LLMs could conceivably accelerate computer proof assistants like Lean; finding a valid proof is not a traditional gradient-descent problem, but it seems like the kind of search LLMs could be optimized to guide. The most obvious application to start with is converting natural-language statements into well-formalized ones.
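to make that last bit concrete, here's roughly what the formalization target looks like. hand-written sketch in Lean 4 with Mathlib (the example statement is mine, not LLM output):

```lean
import Mathlib

-- natural language: "the sum of two even integers is even"
-- one possible formal statement a model would be asked to emit, proved by hand here
example {a b : ℤ} (ha : Even a) (hb : Even b) : Even (a + b) := by
  obtain ⟨x, hx⟩ := ha   -- hx : a = x + x
  obtain ⟨y, hy⟩ := hb   -- hy : b = y + y
  exact ⟨x + y, by rw [hx, hy]; ring⟩
```

once it's in this form, Lean's kernel decides whether the proof is accepted, no matter who or what wrote it, which is the answer to the "how would you know it isn't hallucinated" objection upthread.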
Anonymous at Thu, 20 Feb 2025 23:48:47 UTC No. 16594136
>>16593554
honestly yeah. the world right now is divided into two groups of people: those who can count the OOMs and those who have no idea what that means.
>>16593570
think you're onto something here; human general intelligence emerges after a "training period" of several years where the "model" is exposed to terabytes and terabytes of sensory data. we might need to do the same thing with a computer if we want AGI. I think there are ways around this though -- you literally just need to build a bigger computer with more, better text and tell it to think longer before responding
Anonymous at Fri, 21 Feb 2025 03:30:16 UTC No. 16594386
>>16593696
>natural language statements to well formalized statements
and then what?
Anonymous at Fri, 21 Feb 2025 04:03:50 UTC No. 16594407
>>16591348
cope
Anonymous at Fri, 21 Feb 2025 04:14:35 UTC No. 16594415
>>16593696
>converting natural language statements to well formalized statements
good luck
Anonymous at Fri, 21 Feb 2025 04:23:25 UTC No. 16594419
>>16591341
Doubt it. At least current technology isn't close to being there (remember that data sparsity and layered reasoning both fuck up LLM performance). If LLMs get better, I could see LLMs with sophisticated function calling plus humans collectively being quicker to work on novel ideas than humans alone. But for the foreseeable future, I see LLMs at best augmenting human intelligence, and at worst causing new generations of mathematicians to fail to think for themselves, ultimately halting mathematical progress.
Anonymous at Fri, 21 Feb 2025 04:25:13 UTC No. 16594421
>>16591341
first I wanna see an LLM that can explain why I should care
Anonymous at Fri, 21 Feb 2025 04:26:14 UTC No. 16594423
>>16591348
That looks oddly tasty in a plastic 90's kinda way
Anonymous at Fri, 21 Feb 2025 16:17:24 UTC No. 16595000
>>16591341
LLMs, no
Training a neural network to solve open problems probably will happen
And maybe it will interface with an LLM to communicate the solution to a human
But LLMs are fundamentally just predicting the next word
If you prompt an LLM to solve obscure or novel combinatorics problems right now, it'll produce a solution that sounds correct, but once you dig a little you realize it's completely incorrect
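"just predicting the next word" is easy to make concrete. minimal greedy-decoding sketch in Python, assuming the Hugging Face transformers package and GPT-2 purely as a stand-in; obviously not how anyone would attack an actual combinatorics problem:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("Every even integer greater than 2 is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                      # generate 20 tokens, one at a time
        logits = model(ids).logits           # scores over the vocabulary for each position
        next_id = logits[0, -1].argmax()     # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))
```

whatever comes out is just the statistically likely continuation of the prompt, not a checked mathematical claim.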
Anonymous at Sat, 22 Feb 2025 05:26:02 UTC No. 16595953
>>16591341
This video was incredibly creditworthy
NIGGA ITS JUST A FUCKING COMPUTER
HOLY FUCK
WHY IS it BEING PORTRAYED AS IF SKYNET IS GOING TO NUKE HUMANITY IF HE LOSES?
Anonymous at Sat, 22 Feb 2025 05:45:26 UTC No. 16595963
>>16595953
>creditworthy
wtf
Cringeworthy*