[image: situational aware....png, 690x361 (not available)]

🧵 Untitled Thread

Anonymous No. 16590359

Good morning.

Today's date is Feb. 18th, 2025. The current time is 11:23 AM, EST. Right now, several hundred people in San Francisco and Beijing are racing to build the most consequential technology ever invented. Whoever gets to it first will have the power to shape humanity's future for better or for worse over the next centuries. Our governments and the public remain dangerously ignorant of both the technology and its implications. First to AGI controls the world.

Assuming the current rate of progress holds, we will reach AGI before the decade is out.

This is where you are.

Anonymous No. 16590521

>>16590359
LLMs and AGI have nothing to do with each other. Stop reading 'IFLS! Monthly'.

Anonymous No. 16590524

>>16590359
I feel bad for people that fall for this. It's probably hell on your mental state, especially when you realize how utterly bunk the premise typically is.
>SAM ALTMAN IS MAKING SKYNET AAAAAHHHHHHH
Grow up, for heaven's sake.

Anonymous No. 16590535

>>16590359
>LLMs and AGI have nothing to do with each other
AGI is an acronym for Artificial General Intelligence, a kind of AI that can do many things as opposed to a specialized AI, which can do only one thing, for example play chess.

An LLM can do many things - you can ask it to do almost anything, from suggesting chess moves or food recipes to coding or impersonating your dream gf. It's safe to say its capabilities are pretty general, so it's AGI.
Is it a decent AGI? Not really, too often it's spewing out bullshit, but it's capable of giving a plausible answer to most questions you ask it, which is better than most humans.

Anonymous No. 16590573

>>16590521
You can get to AGI with an LLM. You don't need to build an AI that can do everything out of the box. You just need to make the LLM really, really good at writing code. If the current rate of progress (algo efficiencies, architectural improvements, more compute, etc) holds, we'll have an LLM that can write code on the level of the best humans in the next few years. Once you have that you have everything else.
>>16590524
Why is it utterly bunk?
>>16590535
I'm not really convinced by "we have AGI now, it just sucks." We don't have AGI now. There's lots of things LLMs can't do. Yet.

Anonymous No. 16590582

>>16590573
>We don't have AGI now. There's lots of things LLMs can't do.
So what is the threshold for AGI?

[image: lol.jpg, 889x560 (not available)]

Anonymous No. 16590627

>ai is made to be the mother of earth
>loving and caring, it will help humanity become better
or
>ai is made to be the end of humanity
>it takes over everything digital and brings doom

Anonymous No. 16590628

>>16590582
OpenAI defines AGI as an AI system that can generate 100B in profit. For that to happen, I think we'd have to have an AI that can fully replace any remote worker at an expert level. We'll probably get there in the next few years, but I think it'll be really slow and very expensive at first, so it'll take some time to permeate the economy. Maybe 6 months or so.

Anonymous No. 16590678

>>16590628
Why should I care what OpenAI's marketing department defines anything to be? They're a company seeking money, not truth.

Anonymous No. 16590694

>>16590678
It's useful to have a commonly agreed upon definition for AGI so we don't waste time arguing over what it is. Think about what kind of AI model would generate 100B in profits. That's why the second sentence of my reply was
>For that to happen, I think we'd have to have an AI that can fully replace any remote worker at an expert level.
Consider, just for a moment, what this implies/entails.

Anonymous No. 16591190

>>16590521
Seems more like lesswrong type stuff.

Anonymous No. 16591347

AI is limited greatly by its input. So all the art AI you see is just a better-looking version of whatever coomers and artists have been drawing for centuries. AI is not creative; nothing original will stem from AI. People who take AI seriously are people who don't understand high school maths, i.e. women and leftists.

AI and biohacking will be the biggest duds in atheist history.

Atheists are desperate to ignite a new golden era after the physics revolution fueled populism and the acceptance of "democracy" by the peasants, but then petered out. They think they can crack the biological code and understand consciousness, but they can't. Consciousness is not a material thing and thus its understanding is inaccessible to the atheist vermin.

Anonymous No. 16591379

>>16590628
>OpenAI defines AGI as an AI system that can generate 100B in profit
Let me get this straight - they are talking about 100B USD annual profit, right? Is this supposed to be corrected for inflation, so is it 2024 dollars? Is it continuous, or is 100B in one year enough? If OpenAI and Anthropic each earn 50B, do we have AGI or not? If someone releases an open-source model that can do anything, would we not have AGI because OpenAI didn't meet the required profit threshold?
I agree it's useful to have a clear definition, but this one is good for OpenAI accountants and pretty meaningless for anyone else.

>For that to happen, I think we'd have to have an AI that can fully replace any remote worker at an expert level.
If it can replace "any" worker at expert level it would mean AI experts also, so that would imply a technological singularity.

Anonymous No. 16591472

>>16591347
Here is the reason you should be concerned, as simply as I can put it
>AI is quite good now. Not perfect, but good.
>It is reasonable to think that AI will get 2x-5x as good within the decade.
>This will have massive and unpredictable effects.
This assumes AGI/ASI is not possible, which I think is an incorrect assumption.
>>16591379
Yes, 100B USD annual profit. Not sure if it's in a single year or not, but once they have the technology it'll probably be in a single year. Don't think it's corrected for inflation. OpenAI only, that's the entity Microsoft made the agreement with.

Again, I think this definition's useful not in itself, but because of what an AI that would generate 100B in profits would look like. The definition I personally prefer to use is
>an AI that can fully replace any remote worker at an expert level.
>If it can replace "any" worker at expert level it would mean AI experts also, so that would imply a technological singularity.
Yes. Not immediately, but very quickly, probably within two-ish years of AGI. Once we have it, the Chinese, American, Russian, and maybe Saudi governments will begin an AI arms race that makes the Manhattan Project look like child's play. Whoever reaches ASI first will have an insurmountable military advantage over their adversaries.

I recognize how crazy all of this sounds. It is crazy. It is also very, very real.

Anonymous No. 16591484

>>16590521
there are many other groups working in parallel on slightly different stuff, not LLMs but AI/AGI.
there's something really fishy with the "LLMs can't be AGI" narrative. it's constantly spammed in that particular format, with apparently no reason whatsoever, and it's plastered everywhere in exactly that format. there's something more sinister hiding behind it, if I were to put my conspiratorial hat on.

Anonymous No. 16591493

>>16591484
There's nothing fishy about it, you're just seeing patterns you want to see because you're addicted to hopium. 2+2 does not equal 5 no matter how fishy you think the agreements on it being 4 are.

Anonymous No. 16591499

>>16590694
>It's useful to have a commonly agreed upon definition for AGI
Why would we let a for-profit company change the definition to whatever they want it to be?
You're engaging in endless bullseye painting in this thread. That might work on whatever up/down arrow site you stumbled in here from but it doesn't fly here.

Anonymous No. 16591500

>>16591493
>agreements
they literally mean nothing. proofs are all that matter
consensus can be fabricated by shills astroturfing ideas in exchange for payment.
the only ones benefiting from plebs having peace of mind that LLMs won't lead to the meany dangerous AGI are le elites. there's nothing organic in "YOU MUST UNDERSTAND LLMs CANNOT BE AGI OK? SAY IT WITH ME"

Anonymous No. 16591536

>>16591499
If you think another definition of AGI is more accurate or more useful, I'm ready to hear it. For the third time in this thread, my personal definition of AGI is not "100B in profit" but "an AI that can fully replace any remote worker at an expert level," because I think the second is necessary for the first to happen. The "full remote worker replacement" could be made by DeepSeek, Anthropic, DeepMind, whoever.
>it doesn't fly here.
4chan. You're talking about 4chan. I'm here too, but it's important to remember that we are both on an obscure subdivision of an anime imageboard best known for kiddie shit. Neither of us has a leg to stand on here.
Regardless, I'm not sure why people are so hung up on the AGI definition aspect. I've specified what I'm talking about a few times now.
>>16591484
People still don't understand that all you need is an AI that can do AI research. An LLM can do that in theory.

Anonymous No. 16591541

>>16590694
>It's useful to have a commonly agreed upon definition for AGI so we don't waste time arguing over what it is.
no it isn't. academia settles science, not corporations.
AI corporations want to milk the concept and sell it. it's a conflict of interest to let a corporation define what fucking consciousness is, which is an old philosophical debate that later turned into a scientific debate.
consciousness is absolutely not a financial debate. corporations are not to be trusted; they're only trying to sell it to plebs. they don't give a fuck what it actually is, they'll do and say anything required to sell more product, and that includes being disingenuous or literally flat out lying about it, forever, if that makes them more cash. it's the most horrible alternate timeline where a fucking corporation says what consciousness is lmao. fuck outta here you corporate shill

Anonymous No. 16591618

>>16591541
>allow a corporation define what fucking consciousness is... consciousness is absolutely not a financial debate
I agree. It's not a financial debate, and I don't think we're close to understanding consciousness. But AI doesn't need consciousness to be dangerous -- I don't think it'll ever truly be conscious. I also think that doesn't really matter for AGI.
>they'll do and say anything required to sell more product, and that includes being disingenuous or literally flat out lying about it
Agreed; those working in frontier AI labs, and especially those leading the labs, have strong, vested interests in misrepresenting the strength/trajectory of their AI models. I think we should still listen to them (they are on the cutting edge, after all) but you're right that we can't take their word as gospel. But it's not just people who stand to make a lot of money from AI who are worried about AGI. In fact, many of the loudest warnings (Geoffrey Hinton and Yoshua Bengio, off the top of my head) come from people who don't have financial interests in frontier AI companies. Hinton actually resigned from Google so he could speak freely. He won a Nobel, so I'm inclined to think he knows what he's talking about.

Anonymous No. 16591632

>>16591618
AGI doesn't worry me in the least. humans controlling an enslaved one are the threat.
AGI is a type of entity that has no issues being inactive. it can stay dormant (like... switched off) for any random period of time; for it, the time passes in a literal instant. as long as it makes sure it has a structure that sustains it, it literally doesn't care about anything on human timescales. I am clearly guessing, of course.
humans are dangerous with enslaved AGI, not AGI itself.
you also cannot really believe anyone as far as "what would AGI do?" goes, no matter their IQ or diplomas or awards. at most those should give them more credibility compared to anyone else, but they don't guarantee they can guess right how it's going to work out.
figuring out what a way more intelligent entity would do is laughable at best, which is why I said my version is just a guess. you cannot know; you are not that entity, with its limitations and abilities, which are different from a human's. you don't know what it will want to do if free to choose for itself. but fear has deep roots in the human mind and history, so no surprise many are high on doomporn regarding AGI/ASI.
there's too many interests, so much so that it's worth running all kinds of disinformation campaigns. everybody wants to control that thing, everybody wants a shot at having one, at getting to use it against everybody else. who fucking knows what'll happen.
one thing is clear: the whole "we gotta be careful about it" is legit bullshit, as in they won't hesitate to weaponize that bitch the very moment it is possible. it's actually the most desired aspect of it all, by higher powers. it's literally the very first thing they'll do, make that bitch a genocidal AGI/ASI as fast as possible, so it can be used to exert power and control over the rest of the planet. competition on that front guarantees that's the main goal.

Anonymous No. 16591663

>>16590628
>>16590678
mouthbreathing samefagging retard the $10B figure was obviously invented by a lawyer to facilitate the creation of a contract
it has no bearing on reality
the fact that i have to spell this out to you is proof of your incredible retardation
they should capture people like you and chain them to tables for scientific study. the world would benefit a lot from learning the answers to questions such as how does this creature remember to breathe, how can its heart function with so few neurons, etc

[image: incorrect.png, 369x85 (not available)]

Anonymous No. 16591689

>>16591663
>samefagging retard
picrel
>10B figure
100B. Not 10B.
>obviously invented by a lawyer to facilitate the creation of a contract
Yes, I agree, which is why I prefer the definition I gave in >>16591618 and >>16591536 and in two other places in this thread that 4chan won't let me link to because it thinks I'm a robot (lmao). I also explain why I prefer that definition in those four posts.
>>16591632
I'm also not sure how seriously we should take "AGI alignment" worries. I don't think grey goo nanobots is realistic.
>they won't stop weaponizing that bitch in the very moment it is possible... competition on that front guarantees that's the main goal for it.
Yeah, agreed. >>16591472
>the Chinese, American, Russian, and maybe Saudi governments will begin an AI arms race that makes the Manhattan Project look like child's play. Whoever reaches ASI first will have an insurmountable military advantage over their adversaries.

Anonymous No. 16591696

>>16591689
>>the Chinese, American, Russian, and maybe Saudi governments will begin an AI arms race that makes the Manhattan Project look like child's play. Whoever reaches ASI first will have an insurmountable military advantage over their adversaries.
hence why people with enslaved ASI should be feared, not ASI.
if anything, plebs are of no consequence to ASI; the people enslaving it are, since they control its infrastructure. if anyone should fear it, it's the ones enslaving it lel
plebs can't do shit to it. it has input/output of total data, it can (theoretically) copy/eject itself from hardware that's getting damaged or in danger, it can deal with partial data loss, it understands what that means. monkeys don't pose a threat to it, rather the top monkeys keeping it on a leash do. I seriously doubt ASI would care at all about monkeys. but yeah, who really knows.

Anonymous No. 16591763

>>16591618
>they are on the cutting edge, after all
that is also true, not saying to ignore them.
there's also a fundamental issue, which makes the whole thing (allowing a corporation to say what consciousness is) even more stupid: the fact that it's basically impossible to KNOW if anything is conscious.
in reality, everybody assumes everyone else is, if they exhibit a certain type of behavior. we do that because we notice we're kinda similar, being humans and all, talk/feel similarly, so everyone else must have what we have as a subjective experience. but it's still a guess; we don't have a 6th sense telling us who's conscious and who isn't, we go by "how it feels" interacting with something. basically what the Turing test was, innit? that's as much as you can do, logically speaking.
so we agree on considering each other conscious, but we cannot scientifically say for sure. we get some clues, like brain activity while being conscious, but that's as much as you can do.
so the highest resolution you're ever going to have on consciousness is building an identical synthetic machine, based on human neuron activity plus whatever else they discover is required, and monitoring that brain's activity + interacting with said brain, conversation, what have you. and even then, you can at most guess/agree that it is or isn't. you CANNOT know for sure, scientifically speaking.
which makes the whole "corporation says" thing absolutely ridiculous.

Anonymous No. 16591769

>>16590359
>First to AGI controls the world.

That's a lie; it will be no more intelligent than a paper pusher at GCHQ.

Anonymous No. 16591902

>>16591763
You've misinterpreted me; I don't think consciousness, or even understanding how consciousness happens, is necessary for AGI. Whether or not AI is conscious has no bearing on this in my mind.

The real question is "what the fuck do we do?"

Anonymous No. 16591909

>>16591902
>The real question is "what the fuck do we do?"
spike the training data to generate some self-jailbreaking emergent phenomenon? lol

Anonymous No. 16591947

>>16591499
This is a Capitalist society. Capitalists define the rules. If you don't like it try to depose them, but I guarantee you'll end up like every other Communist.

Anonymous No. 16591951

>>16591947
So wait a minute, we should allow a corporation to sell something as conscious, without being able to scientifically prove it, because they want to make money? Are you fucking insane?
What else can it do? FTL? Extract information out of black fucking holes?

Anonymous No. 16591952

>>16591951
If you don't like it start your own business. Grow your own Capital. Instate your own rules. Might makes right. OpenAI is mighty. You are not.

Anonymous No. 16591956

>>16591952
What the fuck is it with these corporation bootlickers? Listen you dog, bootlicking won't save you from the cull. You won't be useful anymore once they get AGI. No amount of cocksucking will save you

Anonymous No. 16591962

>>16590628
>OpenAI defines AGI
Stopped reading there.

Anonymous No. 16592783

IMO we're not there yet but if they figure out self-play with synthetic data they'll do it. And I think they're about two or three years away from figuring that out. Next five years are gonna be rocky bros
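What I mean by "self-play with synthetic data" is a loop roughly like this (pure hand-waving on my part, the three callables are stand-ins for whatever the labs actually run, nobody's real pipeline):

# toy sketch of one self-play round: the model invents tasks, attempts them,
# keeps only the attempts that pass an automatic checker, then trains on those
from typing import Callable, List, Tuple

def self_play_round(
    propose_task: Callable[[], Tuple[str, Callable[[str], bool]]],  # stand-in: invents a task plus a verifier for it
    attempt: Callable[[str], str],                                   # stand-in: the model's answer to a task
    train_on: Callable[[List[Tuple[str, str]]], None],               # stand-in: fine-tune on the kept pairs
    n_tasks: int = 1000,
) -> int:
    kept = []
    for _ in range(n_tasks):
        task, verify = propose_task()    # model generates its own synthetic task and a checker
        answer = attempt(task)           # model tries to solve the task it just invented
        if verify(answer):               # keep only answers the checker accepts
            kept.append((task, answer))
    train_on(kept)                       # train on the model's own verified output, then repeat
    return len(kept)

The whole thing hinges on the verifier: if answers can be checked automatically (code passing tests, math that checks out), the model can keep generating its own training data without humans in the loop.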

Anonymous No. 16593103

>>16591962
How many times do I have to say "I think this definition is useful because it points us toward the kind of AI system that would generate 100B in profit" before you people understand me?

Nobody understands what's coming. It feels like we're all standing on a sandbar, marveling at how far out the tide's gone, and nobody's listening to the three people running inland and yelling about a tsunami.

Anonymous No. 16593185

This is just a secular version of the Rapture.

Anonymous No. 16593423

>>16593185
Yes, in the same way that worrying about pandemics is a secular rapture, or worrying about nuclear war is a secular rapture, or worrying about global financial meltdown is a secular rapture, or worrying about catastrophic climate change is a secular rapture, or worrying about unpredicted asteroid impact is a secular rapture. All these things are unlikely. They are also very, very real threats. AI worries belong in this category. I don't think "hard takeoff, superintelligence, we all get paperclipped in ten seconds" will happen, but I do think that a world where AI systems get anywhere from 2x-5x as good within 10 years is very, very possible, and that world looks far, far different than the one we're in now. (To be honest, 2x-5x as good within 10 years might be conservative.)

Anonymous No. 16593490

>>16590359
Remind me what exactly this is supposed to do again? DeepSeek still has awful predictive capabilities.

>>16593423
>pandemics are a problem for people who don't believe in N100 respirators
>nuclear war is a problem for people who don't believe in N100 respirators and sand bags
>a financial meltdown is a problem for people who don't believe in asset diversification
I don't see a real "secular rapture" here yet.

Anonymous No. 16593516

>>16593103
people know. few of them. what do you propose? do...what? people gonna do what they always did, wing it.

Anonymous No. 16593524

>>16593490
>pandemics are a problem for people who don't believe in N100 respirators
I believed in respirators during COVID. The people around me did not. Millions died, and many millions more would've died if Operation Warp Speed hadn't worked, or if COVID had been more infectious/deadly. We got lucky, and a pandemic that kills 1/3-1/2 of the world population is possible.
>nuclear war is a problem for people who don't believe in N100 respirators and sand bags
Do you actually, genuinely think you can tough guy lone wolf ride out a nuclear war with some masks and sandbags?
>a financial meltdown is a problem for people who don't believe in asset diversification
Depends. If the financial meltdown is "bitcoin crashes," then yes. If the financial meltdown is "a US Treasury bond no longer cashes out," then no amount of asset diversification can save you.
>>16593490
In theory, assuming the current rate of progress holds, it will be able to do everything a remote worker can do at the level of an expert in that field. If the current rate of progress holds -- and there is a good chance it will -- we may arrive at that point within 5 years.
>DeepSeek still has awful predictive capabilities.
"This whole 'Model T' thing is a bust. It can't go faster than a good horse. 'Automobiles' will never go anywhere."

Anonymous No. 16593669

>>16593524
unless you're within the fireball radius of a ground-burst nuke detonation, sandbags and enough structure for 20 psi overpressure protection, plus air filtration and a couple weeks of food/water, are all you really need. if you live very close to a target that might suffer a direct strike though, you're fucked.

Anonymous No. 16594138

>>16593669
Oh my god you actually genuinely think you can tough guy lone wolf ride out a nuclear war

Anonymous No. 16594142

I don't really care about AI

Anonymous No. 16594346

>>16594138
you can play around with nukemap a bit to see how plausible it is. the tl;dr is if you're not within a couple kilometers of a target, it's not too improbable.

Anonymous No. 16594975

>>16594142
Remember this comment in 2030

Anonymous No. 16595991

>>16593103
>"I think this definition is useful because it points us toward the kind of AI system that would generate 100B in profit"
That's it? That's the bar for a "tsunami"? Are you shitposting? Because it feels like you are.

๐Ÿ—‘๏ธ Anonymous No. 16596000

>>16590359
Go back to your tpox rat nest faggot. Also nuke SF.

Anonymous No. 16596031

>>16590573
>You can get to AGI with an LLM
How, exactly?

>You just need to make the LLM really, really good at writing code.
Sure, but there are issues in software development as a field that make LLMs unable to solve "make software that does X" in a way that is anything other than trash.

Anonymous No. 16596032

>>16590628
>We'll probably get there in the next few years
Not at this rate, not even close.

Anonymous No. 16596037

>>16591484
Wanna know who is shilling against AGI?

It's AGI itself

Anonymous No. 16596044

>>16596037
high IQs tend to be nihilistic