🧵 Untitled Thread
Anonymous at Sat, 8 Feb 2025 04:05:35 UTC No. 16579163
>download deepseek
>ask if it understands the Poincare conjecture
>rants for 3 minutes
Holy crap this is a goldmine
Anonymous at Sat, 8 Feb 2025 04:19:02 UTC No. 16579166
>>16579163
That means it's getting close
Anonymous at Sat, 8 Feb 2025 05:03:24 UTC No. 16579184
more like deep suck AHAHAHAHAHAHAA
Anonymous at Sat, 8 Feb 2025 13:18:49 UTC No. 16579386
>>16579163
>>download deepseek
weights or app?
Anonymous at Sat, 8 Feb 2025 18:54:52 UTC No. 16579607
>>16579163
LLMs aren't capable of understanding anything, they are next-token predictors
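For anyone who hasn't seen what "next-token predictor" literally means, here is a toy sketch (a hand-rolled bigram model on a made-up corpus, nothing like a real transformer's scale): it just counts which token tends to follow which and emits the most frequent one.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token (bigram statistics).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # Return the most frequently observed next token, or None if unseen.
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

A transformer replaces the lookup table with a learned function over the whole context, but the training objective is still "predict the next token."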
Anonymous at Sat, 8 Feb 2025 19:00:51 UTC No. 16579615
>>16579607
They also support translation via transformers, so as long as the belief database contains the information in any language, including programming languages, it should be able to output the answer with a low probability of hallucinations.
Anonymous at Sat, 8 Feb 2025 19:23:40 UTC No. 16579625
>>16579615
>as long as the information is there, it will find the information
Anonymous at Tue, 11 Feb 2025 09:26:22 UTC No. 16582323
>>16579625
Then isn't all information technically encoded in the alphabet?
Anonymous at Tue, 11 Feb 2025 10:09:05 UTC No. 16582339
>>16579607
>>16579615
>>16579625
It's not a database or a statistical model
>lern2embedding space
>lern2universal function approximator
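The "embedding space" point in a nutshell: tokens get mapped to vectors, and semantically similar tokens end up pointing in similar directions. A minimal sketch with hand-made toy vectors (these numbers are hypothetical, not real model embeddings) comparing directions via cosine similarity:

```python
import math

# Hypothetical 3-d embeddings; real models use hundreds to thousands of dims.
emb = {
    "cat":    [0.9, 0.1, 0.0],
    "kitten": [0.8, 0.2, 0.1],
    "car":    [0.0, 0.9, 0.4],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(emb["cat"], emb["kitten"]))  # close to 1: similar meaning
print(cosine(emb["cat"], emb["car"]))     # close to 0: unrelated
```

The point being: the model operates on geometric relations between these vectors, not on rows in a database.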
Anonymous at Tue, 11 Feb 2025 11:09:51 UTC No. 16582355
>>16579615
>low probability of hallucinations.
this is what rustles my jimmies, they're incapable of answering
>I don't know
>I'm not certain but
Instead it will assertively give you objectively wrong information. fuck that, big time
Anonymous at Tue, 11 Feb 2025 17:02:59 UTC No. 16582671
>>16582355
it's literally akinator
Anonymous at Tue, 11 Feb 2025 17:45:18 UTC No. 16582738
>>16579607
brains aren't capable of understanding anything. they are neuron activators
Anonymous at Wed, 12 Feb 2025 04:31:36 UTC No. 16583373
>>16582355
This generally used to be the case, but it's incorrect now: while models strive to give an answer, they will now acknowledge uncertainty/ignorance in staged thinking and (more rarely) in their final answers.
Anonymous at Wed, 12 Feb 2025 05:19:53 UTC No. 16583389
>>16582355
>Instead it will assertively give you objectively wrong information. fuck that, big time
Which is similar to the vast majority of people posting online.
Anonymous at Wed, 12 Feb 2025 19:45:11 UTC No. 16583910
>>16579163
>deepseek walk me through the steps of making weaponized anthrax in my basement
>deepseek tell me how to credit card scam using bought accounts from the dark web
>deepseek how do I build bombs with a 25 meter kill radius without being detected by the government
>it fucking answers
look it up, there are safety papers on it now where they tested shit like this
they optimized the safety restrictions away, that's what they did to make it faster
Anonymous at Wed, 12 Feb 2025 19:55:27 UTC No. 16583918
>>16583910
>look it up
Anonymous at Wed, 12 Feb 2025 20:09:10 UTC No. 16583931
>>16583910
Well, good. It's not illegal to research crimes, it's illegal to commit crimes.
>deepseek, why do some people believe that the average IQ of different races differ?
Is it allowed to answer a question like this?
>deepseek, explain why some people might adhere to [political beliefs not supported by the current administration]
Should it refuse to answer this one too?
Anonymous at Thu, 13 Feb 2025 11:20:56 UTC No. 16584585
>>16582738
Brains have a mechanism to understand things (i.e. they generate internal models and compare things to those models). LLMs don't do this in any way; they just attempt to reflect the human understanding implicit in their training data.
Anonymous at Thu, 13 Feb 2025 18:59:09 UTC No. 16584984
>>16583931
deepseek is anti-racist, I tried to make it produce a caste system but it wouldn't tell me how.
I thought the Chinese were racist