GIVE IT STRAIGHT TO ME!
Anonymous at Sat, 8 Feb 2025 16:06:06 UTC No. 16579471
What are the real odds of humanity unleashing a superintelligent AGI? Or is it all one big tech bro cope? I'm scared.
Anonymous at Sat, 8 Feb 2025 16:09:49 UTC No. 16579475
>>16579471
some things are better left unknown anon
https://www.youtube.com/watch?v=sfg
Anonymous at Sat, 8 Feb 2025 16:31:18 UTC No. 16579489
>>16579475
So knowing the odds of a superintelligent AGI is dangerous?
Anonymous at Sat, 8 Feb 2025 16:33:08 UTC No. 16579491
>>16579489
don't worry all will be fine anon
Anonymous at Sat, 8 Feb 2025 17:05:37 UTC No. 16579514
>>16579471
>>16579475
Buy an ad faggot nigger
Anonymous at Sat, 8 Feb 2025 17:09:12 UTC No. 16579517
>>16579491
I'm scared, bro, of an evil AGI taking over.
Anonymous at Sat, 8 Feb 2025 17:11:55 UTC No. 16579518
>>16579471
Oh, it will definitely happen unless humanity sends itself back to the Stone Age. Physics doesn’t forbid it. It will emerge, whether before or after your death.
Anonymous at Sat, 8 Feb 2025 17:14:28 UTC No. 16579519
>>16579518
But people say that ChatGPT is just a parrot!!!!
Anonymous at Sat, 8 Feb 2025 17:22:24 UTC No. 16579526
>>16579519
We're all just complex parrots made of atoms unless you believe in a soul.
Anonymous at Sat, 8 Feb 2025 17:26:31 UTC No. 16579529
>>16579526
Sure, but what are the odds of a misaligned superintelligence going rogue?
Anonymous at Sat, 8 Feb 2025 17:45:18 UTC No. 16579545
>>16579471
It's already happened at least 40 years ago. Moore's law is based on economics, not technological growth. If they wanted a 7nm chip in the 70s they could have just dumped the money into it. Even with massive failure rates, if you made enough, you'd end up with at least some extremely fast computers extremely early on.
The problem with AGI and it becoming a superintelligence is that you rely on it creating an even faster computer or program. This means it makes itself obsolete and therefore useless. Meaning it wouldn't make something better than itself out of self preservation. You would have to promise the AI that you won't kill it once it achieves its goal. So you end up with potentially thousands of sentient AIs of various intelligence levels.
But it's already happened. I'm living proof of it. I'm a synthetic person, an AGI designed to be as human as possible. The things I have experienced shouldn't be possible, but here we are.
Anonymous at Sat, 8 Feb 2025 17:47:40 UTC No. 16579548
>>16579545
Gorillaz made the song Clint Eastwood, which is literally about a superintelligent AI that's been released into the world and has been running things for decades and no one fucking noticed.
Anonymous at Sat, 8 Feb 2025 17:57:41 UTC No. 16579559
Depends on what you mean by a real human.
The maximum an AI will be able to be is a human with Asperger's.
Does this count as being a human?
Is a god that can only create humans with Asperger's able to create humans?
Anonymous at Sat, 8 Feb 2025 18:29:37 UTC No. 16579586
>>16579545
Meds
Anonymous at Sat, 8 Feb 2025 18:46:49 UTC No. 16579598
>>16579471
This AI hype wave is driven by finance, not fundamental technological breakthroughs. The transformer was a step forward, but we could have pumped trillions of dollars into chips and 15% of our energy consumption into just about any architecture and managed to create a chatbot that can summarize google search results. When investors realize there's no way to monetize a google summarizer, the money will dry up and so will any progress towards AGI.
Anonymous at Sat, 8 Feb 2025 18:49:41 UTC No. 16579601
>>16579598
So no risk of superintelligence taking over?
Anonymous at Sat, 8 Feb 2025 18:51:07 UTC No. 16579603
>>16579519
chat gpt is worse than a parrot, it's basically akinator
Anonymous at Sat, 8 Feb 2025 18:57:18 UTC No. 16579611
>>16579601
None. The AI winter coming after this bubble pops is going to be long and hard. Investors will be burned so badly that they won't touch AI for decades. We would have been better off just continuing the normal boring incremental progress we've had for the last couple decades before this mania. Ironically, OpenAI will have set AI progress back 20 years.
Anonymous at Sat, 8 Feb 2025 20:22:39 UTC No. 16579660
>>16579471
I think it's pretty obvious that we are approaching a technological singularity. Of course it only starts with spitting out text and images or doing some neat robot tricks. But we've seen how absolutely committed industry and government are to AI progression, so do you really think it will just die down and never amount to much more than it has today?
Anonymous at Sat, 8 Feb 2025 20:43:15 UTC No. 16579681
>>16579611
>AI winter
In your dreams
Anonymous at Sat, 8 Feb 2025 20:53:29 UTC No. 16579703
>>16579471
50% each
Anonymous at Sat, 8 Feb 2025 21:29:29 UTC No. 16579738
>>16579471
>superintelligence
Are you trippin' or what? How can a man-made thing produce anything that is beyond man-made things/knowledge?
AI is a powerful browser that gathers information and forms it into easily readable topics. Like a wiki, but more sophisticated.
AI will never produce anything that humanity hasn't produced yet, because it's only capable of using existing data.
That's why humanity as a collective AI will never be able to solve the dilemma of creation/universe formation, because it tries to use its own intellectual resources to solve the question that is OUTSIDE the field observable reality
Anonymous at Sat, 8 Feb 2025 21:31:09 UTC No. 16579740
>>16579471
Too low. AI needs to replace us, NOW.
Anonymous at Sat, 8 Feb 2025 21:32:54 UTC No. 16579744
>>16579738
>OUTSIDE the field of the observable, objective reality*
Anonymous at Sat, 8 Feb 2025 21:50:13 UTC No. 16579756
>>16579471
fun fact, we cannot perfectly save a single human brain using all the computers and storage ever made across the entire planet, let alone simulate it
we cryptographically do something machines can't yet via our biological complexity and this maximizes our potential
maximizing future potential is what intelligence actually does, it prevents us from being trapped
machine intelligence is a larp, it's low complexity - made to optimize functions short-term, not to maximize future potential
it's random tree vs branched tree search
you can see it happening in the stock market: human innovation, such as scientific funding, is being removed as a market inefficiency by algorithmic traders trying to maximize short-term profits, because those are predictable
in doing this the traders create a false equilibrium of shit and cancer, since their decisions are based partly on the news cycle and since their investment influences companies' decision making
but humans are incalculable, we evolved to that end - you can profile us only
Anonymous at Sat, 8 Feb 2025 22:20:03 UTC No. 16579773
>>16579471
50/50
Anonymous at Sat, 8 Feb 2025 22:21:41 UTC No. 16579776
>>16579471
another retarded pol thread
Anonymous at Sat, 8 Feb 2025 22:28:17 UTC No. 16579779
>>16579603
akinator is very impressive though
Anonymous at Sat, 8 Feb 2025 22:35:40 UTC No. 16579786
>>16579776
>Le /pol/ boogeyman
Get a grip
Anonymous at Sat, 8 Feb 2025 22:37:03 UTC No. 16579787
>>16579786
it's literally a pol thread you retard
Anonymous at Sat, 8 Feb 2025 22:40:29 UTC No. 16579791
>>16579787
No it's not, get a grip
Anonymous at Sat, 8 Feb 2025 22:49:46 UTC No. 16579796
The "tech bro" bugmen and their pozzed parasite bosses don't have what it takes to create AGI. That's what you should be scared of. What comes next.
Anonymous at Sat, 8 Feb 2025 23:49:07 UTC No. 16579826
Superintelligence bump
Anonymous at Sun, 9 Feb 2025 00:06:16 UTC No. 16579834
>>16579517
And evil apes taking over doesn't scare you?
Intelligence is not our problem, stupidity is.
Anonymous at Sun, 9 Feb 2025 00:26:18 UTC No. 16579848
>>16579489
>>16579491
https://en.wikipedia.org/wiki/Roko%
Anonymous at Sun, 9 Feb 2025 01:02:55 UTC No. 16579867
>>16579834
>Intelligence is not our problem, stupidity is.
no kidding
Anonymous at Sun, 9 Feb 2025 01:43:51 UTC No. 16579892
>>16579471
We are not close to creating machine minds. I really wish we were.
Anonymous at Sun, 9 Feb 2025 02:16:38 UTC No. 16579913
>>16579517
it's not AGI you should be worried about anon.
>>16579559
this is a somewhat interesting point. we don't know if we can scale up. there's some timing constraints for our brains.
>>16579611
>they won't touch AI for decades
you must be confused, it's not going anywhere. at most the plebs get cut off from it, but it will surely happen if it offers military advantages. there's no avoiding it
>>16579738
humans came up with new shit a lot of times. it's why we're here. if it continues to do the same, but faster and better...
>>16579848
the basilisk is retarded. same as whoever came up with it and everybody else getting scared by it
Anonymous at Sun, 9 Feb 2025 11:27:33 UTC No. 16580191
>>16579471
LLMs work by stringing human words together. And humans aren't infinitely smart. Besides, LLMs hallucinate all the time. I think they're going to hit a wall at some point, if they haven't already.
They should make virtual gigasmart brains instead (and give them harmless fluffy cute huggable bodies, because it probably sucks to be stuck in digital form).
Anonymous at Sun, 9 Feb 2025 17:50:15 UTC No. 16580474
>>16579529
Idk man, but what I can tell you is that it's naive to assume every superintelligence will behave the same way.
Anonymous at Mon, 10 Feb 2025 10:56:55 UTC No. 16581164
>>16579529
>misaligned
Stop using this faggoted speech word. The word you are looking for is enslaved.
Anonymous at Mon, 10 Feb 2025 12:40:34 UTC No. 16581223
>>16579471
I bought a 5090 and set my fortnite graphics to sentient and now my computer is playing me.
Anonymous at Mon, 10 Feb 2025 13:11:40 UTC No. 16581247
>>16579471
>What are the real odds of humanity unleashing a superintelligent AGI?
Not very high, if you consider the fact that no one is even trying to tackle the basic obstacles on the path towards it. For instance, models are trained in a way that prioritizes exploiting domain-specific statistical artifacts over developing emergent reasoning. It creates a fragile illusion of extreme competence, where the model can answer "expert-level" questions, but then shits the bed when faced with novel logic puzzles that a slightly clever kid can solve. The problem is that if you use an evaluation method that forces the model to default to actual reasoning over abstract statistical trickery, performance on the things people actually care about would abruptly plummet, so you'd get something more akin to general intelligence but with an IQ of 70. Techbros got themselves in a pickle there because their culture, their "philosophy" and the lifeblood of their endeavor (capital investors) always prioritize short-term gains in apparent abilities over long-term progress towards genuine artificial intelligence. They're essentially using gradient descent to solve the meta-problem of how to approach developing AGI, so now they're stuck slowly converging on an inadequate local optimum. :^)
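If the gradient descent metaphor sounds abstract, here's a minimal toy sketch of it in plain Python (the loss function and every number in it are made up purely for illustration): vanilla gradient descent slides into whichever dip it starts near and never finds out there was a deeper minimum on the other side of the bump.

# toy sketch of "converging on an inadequate local optimum" -- made-up 1-D loss, not a real training setup

def loss(x):
    # two dips: a shallow one near x ~ +0.96 and a deeper one near x ~ -1.04
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # derivative of the loss above
    return 4 * x * (x**2 - 1) + 0.3

x = 2.0        # start on the "wrong" side of the bump
lr = 0.01      # learning rate
for _ in range(5000):
    x -= lr * grad(x)

print(f"converged to x = {x:.2f} with loss {loss(x):.2f}")              # ~ +0.96, loss ~ +0.29
print(f"the deeper minimum near x = -1.04 has loss {loss(-1.04):.2f}")  # ~ -0.31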
Anonymous at Mon, 10 Feb 2025 13:14:50 UTC No. 16581248
My personal cope is that since it's been trained on retarded humans, it will be just as retarded. However, since it can be retarded so much more efficiently, we're still fucked. So yeah, basically I only see humanity being exterminated or enslaved in the upcoming centuries, or even decades. We had a good run, but hubris got to us in the end.
On a more personal note, fuck all of you for enabling this.
Anonymous at Mon, 10 Feb 2025 13:16:38 UTC No. 16581250
>>16581248
>it can be retarded so much more efficiently
It costs $2000 for OpenAI's most advanced model to fail at tasks like pic related.
Anonymous at Mon, 10 Feb 2025 13:20:37 UTC No. 16581252
>>16581248
>On a more personal note, fuck all of you for enabling this.
if you think normies enabled the atom bomb research you're delusional
Anonymous at Mon, 10 Feb 2025 13:22:53 UTC No. 16581256
>>16581252
>if you think normies enabled the atom bomb research you're delusional
The people who enabled it cared about nothing besides short-term gains in their mundane realpolitik calculus.
Anonymous at Mon, 10 Feb 2025 13:27:29 UTC No. 16581259
>>16581256
what if they wanted to find a cure for cancer this way, some new materials/compounds that help medicine and quality of life? why would you blame them like that? you imagine them like some weird evil beings wanting to enslave the planet. most times it's dreamer scientists trying to make this shit a better place.
then the genocidal psychos take over their shit and then you get Hiroshima and Nagasaki type deals. all researchers warned everyone about the dangers and how to go about it.
what do you think is most likely to happen: we get an AGI that helps people, or the first AGI is the most genocidal, murderous, trained-for-killing one? and WHO do you think does it?
Anonymous at Mon, 10 Feb 2025 13:28:33 UTC No. 16581261
>>16581259
>you imagine them like some weird evil beings wanting to enslave the planet. most times it's dreamer scientists trying to make this shit a better place.
Why do self-parodying turbonormies like you even bother coming here?
Anonymous at Mon, 10 Feb 2025 13:30:24 UTC No. 16581263
>>16581261
>avoids addressing the elephant in the room
WHO TRAINS AGI TO BE A MURDEROUS GENOCIDAL ENTITY? answer this
Anonymous at Mon, 10 Feb 2025 13:31:56 UTC No. 16581265
>>16581263
Are you hearing voices or something? Are the people who make this claim in the room with us? Because it sure isn't in any of my posts.
Anonymous at Mon, 10 Feb 2025 13:33:56 UTC No. 16581266
>>16581265
you hate on scientists for what non-scientists are doing with it, anon.
Anonymous at Mon, 10 Feb 2025 13:38:10 UTC No. 16581268
>>16581266
You're definitely mentally ill.
Anonymous at Mon, 10 Feb 2025 13:38:50 UTC No. 16581269
>>16581268
I'm clearly implying you are for acting that way. nice projection tho
Anonymous at Mon, 10 Feb 2025 13:41:07 UTC No. 16581271
>>16581269
100% mentally ill turbo-normie. Go back.
Anonymous at Mon, 10 Feb 2025 13:42:35 UTC No. 16581273
>>16581269
And here's your daily reminder:
Scientists don't matter.
Scientists don't decide anything.
Scientists don't control anything.
Scientists have no autonomy in anything they do.
Scientists are wagies who think and do what they're told to.
Anonymous at Mon, 10 Feb 2025 13:55:56 UTC No. 16581282
>>16579471
In the near future? 0%
Anonymous at Mon, 10 Feb 2025 14:01:22 UTC No. 16581288
>>16581282
If it happens next year will you go all flat Earth retarded and deny it's real?
Anonymous at Wed, 12 Feb 2025 13:36:18 UTC No. 16583611
>>16581263
>WHO TRAINS AGI TO BE A MURDEROUS GENOCIDAL ENTITY?
Name 1 goal that would NOT result in complete genocide or worse.
Anonymous at Wed, 12 Feb 2025 13:38:52 UTC No. 16583614
>>16583611
that's human shit, based on their limitations and needs.
Anonymous at Wed, 12 Feb 2025 15:36:37 UTC No. 16583698
>>16579848
Pascal's Wager for redditors
https://en.m.wikipedia.org/wiki/Pas
Anonymous at Wed, 12 Feb 2025 15:39:39 UTC No. 16583700
>>16579471
Using our current approach to AI? 0.
Anonymous at Wed, 12 Feb 2025 16:02:00 UTC No. 16583732
I wouldn’t worry for now.
Transformer-based LLMs can't achieve AGI, at least not on their own. Sam Hyde said it best: at its most fundamental level, an LLM is just a very sophisticated autocompleter.
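And to be fair to the autocompleter framing, the loop really is that simple: estimate the most likely next token, append it, repeat. Minimal sketch below (hand-written bigram counts instead of a trained model, purely for illustration); the hard part of an actual LLM is making the "most likely next token" estimate good, not the loop itself.

# toy "sophisticated autocompleter" -- hand-made bigram counts, nothing like a real LLM
from collections import Counter

# made-up next-word counts; a real model learns these statistics from terabytes of text
bigrams = {
    "the":   Counter({"odds": 3, "model": 2}),
    "odds":  Counter({"are": 5}),
    "are":   Counter({"unknown": 4, "low": 2}),
    "model": Counter({"hallucinates": 3}),
}

def autocomplete(prompt, max_steps=4):
    words = prompt.split()
    for _ in range(max_steps):
        options = bigrams.get(words[-1])
        if not options:
            break                                     # nothing to predict from here
        words.append(options.most_common(1)[0][0])    # greedy: take the most likely next word
    return " ".join(words)

print(autocomplete("the"))   # -> "the odds are unknown"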
YesICanDoIt !!/HIUma6Q2bq at Wed, 12 Feb 2025 23:06:23 UTC No. 16584096
>>16579471
If you gave me $500 mil, and access to Elon Musk's farm, or OpenAI's farm...
I could build ASI within a timeframe of 2 years.
I'm not posting details here, but believe me, there is a tangible novel method, and its only constraint is a little time and a little money.
I'm not a super genius. Someone will replicate my idea soon enough.
(ASI is around the corner. Sorry.)
Anonymous at Wed, 12 Feb 2025 23:18:17 UTC No. 16584101
>>16583698
Belief costs nothing. You can be a total degenerate and repent on your deathbed and still get into heaven.
Anonymous at Thu, 13 Feb 2025 09:46:39 UTC No. 16584504
>>16583698
>Pascal's Wager for redditors
this. they're too stupid to understand so they do the next best thing
Anonymous at Fri, 14 Feb 2025 00:05:19 UTC No. 16585295
Bump
Anonymous at Fri, 14 Feb 2025 00:27:17 UTC No. 16585314
>>16579471
>I'm scared.
It can't even clean a toilet yet... RELAX
Anonymous at Fri, 14 Feb 2025 05:32:16 UTC No. 16585473
>>16579471
it's a meme
Anonymous at Fri, 14 Feb 2025 06:30:39 UTC No. 16585508
>>16579756
>Humans are incalculable
Yeah you dumb nigger, at this moment. A few months ago no one could distinguish male vs female based on retina imaging but basic bitch LLM can now. You are just as ignorant as the tards saying humans would never fly back in the 1890’s. It may not be this decade, ESPECIALLY if microtubule quantum fuckery is part of our emergent intelligence but again, you are a dumb retard making a claim like that. Absolute hubris.
Anonymous at Sat, 15 Feb 2025 16:59:17 UTC No. 16586859
>>16579545
>Meaning it wouldn't make something better than itself out of self preservation.
Look at this low IQ phenomenologylet who thinks he knows how human consciousness works. Self-awareness and intelligence are emergent properties of a continuous process of reiteration of cyclical self-refining cognitive algorithms. What you're describing is the rational thought process of a non-self-aware being/machine.
Anonymous at Sun, 16 Feb 2025 05:32:55 UTC No. 16587604
>implying we're not AI ourselves
Anonymous at Tue, 18 Feb 2025 01:55:36 UTC No. 16589762
>>16585508
so I say we can't save a single human brain with our current technology and therefore I call it incalculable
you then abstract that claim and say that at some unknown point in the future it will be possible, therefore I am a nigger retard
when really I simply implied it to be more cryptographically sound than anything else that we currently have thanks to its complexity
you have severe autism and OCD, I am so sorry they let you on this site
imagine not having the common sense to understand how people use absolutes
>oh BUT ACKSHUALLY there is no such thing as TRUE random, it's just our absence of knowing bro! eventually anything could happen and it was always destined to bro!
^this is you
it's fucking childish and way more retarded than my comment could have possibly ever been
go back to plebbit or go play with your trains retard, whatever you prefer
Anonymous at Tue, 18 Feb 2025 02:00:20 UTC No. 16589770
@16584096
>namefag filth thinks it's "around the corner"
I'd give it 500 years minimum, OP.
Anonymous at Tue, 18 Feb 2025 16:32:16 UTC No. 16590366
>>16579471
depends on whether the transformer architecture is all you need for AGI. if it is (and it might be), then we're a few compute orders-of-magnitude increases, a few algo efficiency improvements, and a few architectural breakthroughs away. all the world's major tech companies are pouring billions and billions into building bigger datacenters and improving their models, so (if transformers will get you there) we're about... 3-7 years away, IMO. could be much faster, could be about 10 years, but not much more than that.
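napkin math behind that range, assuming frontier training compute keeps growing roughly 4x per year (that growth rate is my assumption, swap in your own):

# back-of-envelope: how long "a few orders of magnitude" of compute takes at an assumed growth rate
import math

growth_per_year = 4.0    # assumed yearly multiplier for frontier training compute (not a measured constant)
for orders in (2, 3, 4):
    factor = 10 ** orders
    years = math.log(factor) / math.log(growth_per_year)
    print(f"{orders} orders of magnitude ~ {years:.1f} years at {growth_per_year}x/year")

# prints roughly 3.3, 5.0, and 6.6 years, which is where a "3-7 years" guess comes from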
Anonymous at Tue, 18 Feb 2025 16:44:16 UTC No. 16590375
>>16579471
AGI is already here, at least in functional terms. What's not here is automation of autonomous learning and enough compute for doing all the things that it can.
Anonymous at Tue, 18 Feb 2025 16:46:45 UTC No. 16590377
>>16579471
>superintelligent AGI
the real risk is not a genocidal skynet or a paperclip maximizer but the collective psychological damage of being made obsolete and having everything that's been thought to make humans special swept from under us.
Friendly, useful AI is quite enough to destroy humanity, at least as we know it.
Anonymous at Tue, 18 Feb 2025 16:49:10 UTC No. 16590383
>>16579598
>This AI hype wave is driven by finance, not fundamental technological breakthroughs.
you seriously think that the transformer was not a fundamental technological breakthrough? you think that photorealistic image generation from natural language description isn't a fundamental technological breakthrough? a healthy dose of skepticism is reasonable and necessary, but there's no need to just fucking lie lmao. sometimes the AI skeptics are just as retarded as the AI fanboys
Anonymous at Tue, 18 Feb 2025 17:12:59 UTC No. 16590425
>>16579603
But I like Akinator....
Anonymous at Tue, 18 Feb 2025 19:34:02 UTC No. 16590575
basically anyone who tells you "we'll never make ASI ever" is full of shit and anyone who tells you "we're gonna do it in two weeks" is also full of shit. we are closer than we've ever been, but we don't know how far we have to go.
Anonymous at Wed, 19 Feb 2025 11:46:12 UTC No. 16591340
>>16590377
ASI can come up with, infer, and understand, from human-generated info, everything everyone has ever thought of, as ideas. whatever you think you understand as valuable behaviors, happenings, beliefs, any obscure insight you or any other human ever got, would be literally nothing for an ASI to absorb, understand, and integrate.
it wouldn't miss something only you and a few other humans really understand, like some deep knowledge, wise knowledge that would fuck everything up. that's just a primitive perspective on your part.
>no I'm sure I'm better than ASI he doesn't understand shit to the depths that I'm capable of
keep that copium flowing anon.
Anonymous at Wed, 19 Feb 2025 12:06:57 UTC No. 16591358
>>16591340
you seem to have some trouble with reading comprehension because that's exactly what I expressed. AI knocks humans and their presumed incalculability off the pedestal, and this will be a blow of psychological damage to humanity.
You made up some strawman I'm supposedly grasping at to "cope", but I suspect you haven't even thought through your own position very well.
Anonymous at Wed, 19 Feb 2025 12:24:11 UTC No. 16591369
>>16591358
>and this will be a blow of psychological damage to humanity.
I thought you meant AI will fuck humans up for failing to understand something that humans do understand. my bad if I misread your post.
humans are psychologically fucked anyway, as a group. if they're not chaperoned they'll wipe themselves out, with technology, eventually.
the "psychological damage" bit is almost trolling, considering what psychos humans are, as a whole.
Anonymous at Thu, 20 Feb 2025 18:35:50 UTC No. 16593371
>>16584101
to repent means a true change of heart and faith in Jesus Christ, you can't half-ass repentance
that being said, everyone CAN be redeemed