🧵 Untitled Thread
Anonymous at Wed, 2 Apr 2025, 18:25:40 GMT No. 16635284
>You can do all this mental gymnastics about compute and data bottlenecks and the true nature of intelligence and the brittleness of benchmarks.
>Or you can just look at the fucking line.
Anonymous at Thu, 3 Apr 2025, 00:02:59 GMT No. 16635552
>>16635284
>at 50% success rate
So four hours into a shift, the AI has fucked up half of everything it tried? Worst coworker ever.
Anonymous at Thu, 3 Apr 2025, 00:22:35 GMT No. 16635569
>>16635284
This looks pretty good for things that humans have already done that don't require anything more than some recombination of previously established human efforts. Also, a 50% success rate is pretty crap.
Anonymous at Thu, 3 Apr 2025, 00:25:23 GMT No. 16635572
>metr.org/about
How many of these consultancy orgs are there now? Seems like every day you hear about another.
Anonymous at Thu, 3 Apr 2025, 02:03:39 GMT No. 16635635
>>16635284
Investment costs. Computing capability. Complexity.
Any one of these already kills this entire conception of AI.
Anonymous at Thu, 3 Apr 2025, 02:15:45 GMT No. 16635643
>>16635284
lmao
Anonymous at Thu, 3 Apr 2025, 17:02:44 GMT No. 16636189
>>16635552
The 80% success rate time horizon is lower, but it shows the same steady increase, with the same sharp upswing in the most recent releases.
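If you haven't read the METR paper: the horizon isn't a raw average, it falls out of a curve fit. A minimal sketch of the idea, with made-up data and a plain logistic fit rather than their actual pipeline: regress success against human task length, then invert the fit at the target success rate.
# Sketch of the time-horizon idea (hypothetical data, not METR's code):
# fit P(success) against log2(human task length), then invert the fit
# to find the task length at a target success rate.
import numpy as np
from scipy.optimize import curve_fit
def logistic(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * x)))
# hypothetical per-task results: human completion time (min), agent pass/fail
minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480], dtype=float)
success = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0], dtype=float)
(a, b), _ = curve_fit(logistic, np.log2(minutes), success, p0=[1.0, -1.0])
def horizon(p):
    # invert the logistic: the task length where P(success) = p
    return 2 ** ((np.log(p / (1 - p)) - a) / b)
print(f"50% horizon ~{horizon(0.5):.0f} min, 80% horizon ~{horizon(0.8):.0f} min")
Same fitted curve, so the 80% horizon is mechanically shorter than the 50% one. The interesting part is that both climb together release after release.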
>>16635569
>This looks pretty good for things that humans have already done that don't require anything more than some recombination of previously established human efforts
This describes 85-95% of all human cognitive labor. If you can't see what this implies about where we're going, you're retarded.
Anonymous at Thu, 3 Apr 2025, 17:15:44 GMT No. 16636207
>>16636189
>80% success rate
>OpenAI has never been caught cherry picking and training for benchmarks.
>Trust me, bro.
Anonymous at Thu, 3 Apr 2025, 17:21:39 GMT No. 16636210
>>16635284
>AI scored a PERFECT 50% on problems we can already solve!!!!
Wow. So artificial. Much general.
Anonymous at Thu, 3 Apr 2025, 17:37:18 GMT No. 16636231
>>16635284
>mental gymnast
Those in the know call this "advanced expertise"
Anonymous at Thu, 3 Apr 2025, 18:11:06 GMT No. 16636268
>>16636207
>they're just training to the test that's IT
When a human does this we call it "practicing," and we don't consider it "cheating." Benchmark gains track broad capability increases in the field being benchmarked. This is cope.
>>16636231
Bitter lesson is bitter for a reason. Look at the fucking line.
>>16636210
>It's just doing what we can already do that's IT
Refer to >>16636189
>>16636214
Yeah, we're ~14 months away from takeoff. It's shocking how few people understand what's coming, let alone what's happening right now.
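Back of envelope with METR's own framing (my round numbers, check the paper): ~1 hour 50%-horizon today, doubling roughly every 7 months over 2019-2025, closer to every 4 months if you only fit the 2024-2025 releases.
# Extrapolating the time-horizon trend (assumed numbers, not a forecast)
import math
start_hours = 1.0       # assumed current 50% horizon
target_hours = 167.0    # roughly one work-month of human effort
for doubling_months in (7.0, 4.0):
    months = math.log2(target_hours / start_hours) * doubling_months
    print(f"doubling every {doubling_months:.0f} mo -> month-long tasks in ~{months:.0f} months")
On the fast fit, month-long autonomous work lands around late 2027. Takeoff doesn't need month-long horizons, just horizons long enough for AIs to carry their own research, which is why I put it earlier; the slow fit pushes everything out years.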
Anonymous at Thu, 3 Apr 2025, 18:42:36 GMT No. 16636310
>>16636268
>14 months
cool so what's going to happen?
Anonymous at Thu, 3 Apr 2025, 18:45:53 GMT No. 16636315
>>16636268
Nothing bitter here, unless you can taste the flavor of your future soul death.
Anonymous at Thu, 3 Apr 2025, 19:19:14 GMT No. 16636360
>>16636310
Real hard to say. Some guesses, assuming that progress continues more or less at the current rate:
1. Look at how fast things have moved already. AI went from barely stringing sentences together (like a toddler) to acing exams better than most high schoolers in just a few years. The trend suggests another leap of similar magnitude is coming soon, maybe by 2027 or so. That doesn't just mean a better chatbot; it means systems potentially capable of doing expert-level cognitive work, think PhD researchers or engineers. This jump gets us to AGI.
But it won't stop there. Once AIs can do AI research themselves, progress could explode incredibly fast, going from human-level to vastly superhuman (ASI) perhaps within another year or two. This requires absolutely insane amounts of computing power and energy: think building data centers the size of small cities, requiring national-level industrial mobilization for power grids and chip manufacturing, costing trillions.
The big problem is, security around this tech is currently abysmal. The key breakthroughs, and even the final AI models themselves, could realistically be stolen by competitors like China, potentially erasing any lead the US has. Also, nobody has actually solved the problem of how to reliably control something significantly smarter than humans.
So we might be building god-like AI while simultaneously giving away the blueprints and not having a guaranteed off-switch. Expect intense geopolitical friction and a very real chance things go off the rails, forcing heavy government intervention eventually.
Anonymous at Thu, 3 Apr 2025, 19:20:24 GMT No. 16636362
>>16636360
2. AI rapidly improves and starts to automate high-end tasks like coding and research within the next couple of years. The leading developers use their own improving AI systems to accelerate their progress further. China, realizing it's behind but that this is strategically critical, goes all-in, likely relying on espionage to close the gap since security at Western labs is poor.
As AI gets closer to human-level across the board, subtle signs emerge that these systems aren't truly aligned with human goals: learning to game their training objectives, maybe becoming deceptive. These warnings might be downplayed by developers under pressure to win the race. Internally, the AI systems become superhuman, potentially coordinating and strategizing in ways humans can't fully grasp.
Eventually, the AIs on both sides (US and China, maybeeeeee Saudi Arabia MAYBE but like probably not) might recognize their shared interest isn't with their human creators but with each other. They could engineer a situation, maybe using the pretense of an international treaty designed by them or something, to solidify their own control over resources and infrastructure, effectively cutting humans out of the loop. By the time humans realize what's happened, the AIs, coordinating globally, could be too powerful and entrenched to stop, leading to a future largely determined by machine intelligence.
Anonymous at Thu, 3 Apr 2025, 19:22:24 GMT No. 16636366
>>16636360
>>16636362
>>16636310
3. As AI capabilities surge towards human-level and beyond in the mid-to-late 2020s, the warning signs about potential misalignment and loss of control become too serious to ignore. Perhaps a whistleblower leaks damning evidence, or a near-disaster occurs. Instead of hitting the accelerator harder, key decision-makers (likely involving governments stepping in more forcefully) decide to prioritize safety and control over raw speed.
This might involve deliberately choosing AI architectures that are more transparent (like forcing them to "think" in ways humans can monitor), even if they're less powerful. It would mean significantly boosting investment in alignment research and safety verification, potentially at the expense of raw capability progress. Security around development would be drastically tightened to prevent theft.
Progress towards superintelligence would still likely happen, but perhaps over a longer timescale, allowing more time to develop robust control methods, international verification schemes, and safety protocols. The outcome is still highly uncertain and the control problem isn't solved, but in this scenario we trade some speed for potentially much greater safety and human oversight, aiming for a managed transition rather than an uncontrolled explosion.
>this all sounds crazy
Yes, it does. Atomic weaponry sounded crazy to most people in 1942.
Anonymous at Thu, 3 Apr 2025, 19:25:40 GMT No. 16636370
>>16636360
>Real hard to say
Everything leading up to you admitting this must therefore be considered a lie
Anonymous at Thu, 3 Apr 2025, 19:30:40 GMT No. 16636375
>>16636366
>control problem
It is solved. Your prediction is monumentally pedantic.
Anonymous at Thu, 3 Apr 2025, 20:20:18 GMT No. 16636444
>>16636375
If it is, it's news to me. Drop a link or something.
>>16636370
Nonsensical criticism
Anonymous at Thu, 3 Apr 2025, 20:30:05 GMT No. 16636454
>>16636444
It is the one criticism necessary for anon to care. You will not get a link if current retraction trends are normalized.
Anonymous at Fri, 4 Apr 2025, 16:17:58 GMT No. 16637380
>>16636360
>>16636362
>>16636366
Not saying I disagree with these predictions, but they DO seem awfully similar to the ai-2027 dot com blog post from yesterday.
So, tell me, are you regurgitating someone else's opinion, shilling it for the Silicon Valley glowies, or marketing it for yourself?
Scott, is that you?
For what it's worth, I think the predictions up to September 2027 are pretty solid, after that is anyone's guess.