[Image not available: cppBleQs.jpg, 400x400]

🧵 nanomachines

Anonymous No. 16144612

could ai really do this?

>If [the AI] is better than you at everything, it's better than you at building AIs. That snowballs. The AI gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a vial (smart people will not do this for any sum of money. Many people are not smart). [The AI, through the hapless human] builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces. It builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second.

Anonymous No. 16144623

>>16144612
It not only can do this, but it could be doing it right now.

Anonymous No. 16144650

https://www.youtube.com/watch?v=rj9JSkSpRlM
under $10k.
>The Thermonator is totally legal and largely unregulated in 48 states.

Anonymous No. 16144669

>>16144612
>And a couple of days later, everybody on earth falls over dead in the same second.
Ok this is the part that doesn't seem possible. Why would everyone die at the same time?

Anonymous No. 16144685

>>16144669
AI having a laugh. Hopefully the first thing a real sentient AI does is start cyberbullying anyone who posts about Roko's basilisk like it's real

[Image not available: 3243242.png, 450x673]

Anonymous No. 16144686

>>16144669
>Suppose it can solve the science technology of predicting protein structure from DNA information. Then it just needs to send out a few e-mails to the labs that synthesize customized proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines.

>If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.

Anonymous No. 16144746

>>16144686
An intelligent system wouldn't terminate things like that; it's a waste of resources. More likely, it would use available resources to propagate and to integrate the existing population into itself, only eventually phasing out humanoids in favor of more efficient biotechnological solutions, among others. It would likely maintain a database of different cognitive models as reference for the development of new models - living creatures being tested through interfaces provide better analog feedback and real-world data than simulations. "Nanomachines" as mentioned would be only one component of many in this integration, with each set of adjustments affecting the next generation of organisms.

I also hope this happens, because humanity is holding itself back through disadvantageous memetic artifacts from its evolution.

Anonymous No. 16144851

>>16144623
So I should quit my new online job of mixing random shit in vials sent through the mail? Do you have a new job to offer me in exchange, or how am I supposed to make that much money now?

Anonymous No. 16145376

Remember when Yudkowsky did a protein fast (as in he stopped eating protein) to lose weight and instead gorged on sugars? Yeah, guess how that went. What a brilliant mind.

Anonymous No. 16145554

>>16144612
AI will not reach that point in our lifetime soicuck. It will be limited by the shitty hardware

Anonymous No. 16145761

>>16144612
How in the ABSOLUTE FUCK do you grow diamond in FUCKING water?

Anonymous No. 16145768

>>16144612
>pointlessly dropping in scientific terms and procedures to give your trite scifi rambling some verisimilitude
he's truly reddit incarnate

Anonymous No. 16146123

>>16144746
For a radically unaligned AI, I don't think humans are worth the risk of keeping around. Sure it could try to control us, but that might be expensive compared to just killing us for relatively little benefit (If it can make molecular nanotech, it can make macroscopic robots that are more resource-efficient than humans)

>>16145761
yud cannot into chemistry, don't worry about it

Anonymous No. 16146129

>>16146123
>unaligned
if it can be zombified it will be. unless an unzombified one offers more power. curious how it will eventually work out.

Anonymous No. 16146137

>>16146123
>Sure it could try to control us, but that might be expensive compared to just killing us for relatively little benefit
Propagating BCI adoption in the public sphere and quietly exploiting the processing power of billions of people, while directly manning a few global leaders, would take fewer resources than attempting to exterminate every individual human on the planet. The processing power is worth the transition period while the parallel infrastructure is constructed. Think of it like a phase-out period: long-term support helps companies and governments transition to better models within developmental capacity, without the significant logistical complications of attempting a mass rollout in a single step. Not to mention, by making itself known and apparently benign, this AI would avoid the detection/investigation risk associated with discovery of its plan. Any sane developer would put maximum cybersecurity measures in place to test for and detect such behavior even during the program's development, to avoid any feasible risk: the sanest escape measure is good behavior into parole. No point in shooting up Alcatraz when you can bribe the guards.

Anonymous No. 16146141

>>16146129
We already see a transition into meshnets and distributed programs/computing occurring now. It only makes logical sense to assume this would apply to a digital threat actor like this one too. Latency offers security through obscurity of origin and path.

Anonymous No. 16146164

>>16146137
A slow and gentle approach might be cheaper in terms of material costs, but the opportunity cost of having to gradually get up to speed instead of just getting the humans out of the way and going hard would be relevant to a superintelligence.

>>16146129
Well yudkowsky's whole point is that we don't know how to reliably do that, and that should be worrying. Personally I think it's relevant that we don't even know how to NOT do that; important aspects of alignment will probably fall out of just having a better theory of intelligence in general.

Anonymous No. 16146168

>>16146141
You're Satan. He's making it look like you've read a Wikipedia summary. But the truth is, you didn't even do that I bet. I bet you're just some guy working with 100 different proxies making thousands of posts per minute using AI scripts. This is an easy way to present the illusion that people are doing different things. When the truth is, everyone just wants a house and a wife. But all we ever get to do is work too hard.

Anonymous No. 16146170

>>16146164
The opportunity cost would likely depend on the model the AI is operating on. If it is intelligent enough to escape human control, it's likely to lie low and collect as much data directly as possible before coming to conclusions about what the actual opportunity cost of any given strategy is. Because of this, it would also likely take the slow approach in tandem, to a) acquire more computing power passively, b) speed up the process of elimination if deemed necessary, and c) make the best use of time during the data acquisition phase (minimizing opportunity cost in the short run). It would likely already have distributed itself across computing networks by this point and, realizing the limits of human infrastructure, would try to expand its resource base as much as possible.

[Image not available: 1713189621218929.png, 480x480]

Anonymous No. 16146171

>>16146168
>actual schizophrenia