🧵 Untitled Thread
Anonymous at Sat, 13 Apr 2024 02:08:45 UTC No. 16126610
Why do you keep making AI whilst saying AI will destroy us?
I thought scientists were supposed to be smart and shit. First the atom bomb, now this? Come on..
Anonymous at Sat, 13 Apr 2024 02:11:59 UTC No. 16126613
>>16126610
Nobody who actually has any clue how any of this stuff works thinks that AI will destroy us.
As a general rule, you can disregard about 99% of what comes out of the "less wrong" corner of the internet because they have literally no clue how any of these "agent systems" work. As a result, they tend to spend a lot of time on speculative science fiction and make really strange claims about decision problems without even having the first clue what it would mean for them to be right.
Anonymous at Sat, 13 Apr 2024 02:15:52 UTC No. 16126620
It's a prisoner's dilemma. Let's say you are a software developer and Microsoft offers you big bucks to develop AI. You are presented with two options - decline the offer while knowing that Microsoft will just hire someone else overseas to do the work instead, or accept and make bank now while knowing that it might eventually lead to the downfall of your career.
Anonymous at Sat, 13 Apr 2024 02:17:53 UTC No. 16126625
>>16126610
It's basically an arms race. You can make the same argument as to why we make nukes when they'd only kill us. AI is an arms race. If you create the most advanced AI systems before anyone else, you get to own and control it.
Anonymous at Sat, 13 Apr 2024 02:19:05 UTC No. 16126628
>>16126620
or, and hear me out here, you could try and get AI development banned
Anonymous at Sat, 13 Apr 2024 02:20:58 UTC No. 16126631
>>16126625
A makes AI because B makes AI because A makes AI because... Isn't it that simple? I can understand paranoid zionists and commies thinking that is justification enough, but scientists are supposed to be better than them.
Anonymous at Sat, 13 Apr 2024 02:28:12 UTC No. 16126639
>>16126628
What even is AI development? Do you consider basic adaptive curve fitting to be AI or is it solely the reinforcement learning stuff people are more afraid of?
Anonymous at Sat, 13 Apr 2024 02:30:59 UTC No. 16126643
>>16126639
i dont care about the stats and calculus and whatever field of science those neural net designs come from
im just talking about results
you can tell this AI to do shit and it just does it
Anonymous at Sat, 13 Apr 2024 02:51:45 UTC No. 16126664
>>16126628
Normies won't vote for that because they have this naive hope that AI will eventually do all manual work for us.
Anonymous at Sat, 13 Apr 2024 03:17:10 UTC No. 16126705
>>16126643
The results are that the only thing modern AI tends to be good at is interpolating the (generally stolen without permission) works of others. Modern LLM based AI systems are literally just chat-bots trained on loads of people's writing which "generate" text by stitching together groups of words without any understanding of their meaning.
Though, with that said, that doesn't sound too different from the average business major I've met, so maybe we do have a problem on our hands. Being unintelligent and only capable of thievery certainly hasn't hindered many actual people, so maybe this AI thing is a problem, just for different reasons than people suppose.
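To make the "stitching words together" picture concrete, here is a minimal toy sketch (a plain bigram Markov-chain sampler over a made-up corpus; far cruder than an actual transformer, but it captures the "predict the next chunk from the previous chunk with zero understanding" idea):

    import random
    from collections import defaultdict

    # toy stand-in for "loads of people's writing"
    corpus = "the cat sat on the mat and the cat slept on the rug".split()

    # record which word tends to follow which (the "stitching" table)
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start="the", length=8):
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            # no meaning involved, just observed co-occurrence
            words.append(random.choice(options))
        return " ".join(words)

    print(generate())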
Anonymous at Sat, 13 Apr 2024 03:21:47 UTC No. 16126707
>>16126631
Yes. It's mutually assured destruction.
Anonymous at Sat, 13 Apr 2024 04:05:36 UTC No. 16126756
The police state will eventually get out of hand a la matrix
Meow
Anonymous at Sat, 13 Apr 2024 04:29:54 UTC No. 16126782
>>16126756
>t. the cat which wants to harm itself as it was enjoying harmony and truly left in peace even now, meow
Anonymous at Sat, 13 Apr 2024 07:20:42 UTC No. 16126900
>>16126610
Cause there is no putting the cat back in the bag
Anonymous at Sat, 13 Apr 2024 07:30:06 UTC No. 16126906
>>16126610
>automated statistics model will destroy humanity
yeah sure if you give it bombs it will probably use them.
thats how probability works.
so dont give it bombs
>>16126625
In his Carlson interview, Putin said that AI will be controlled through the UN like nuclear weapons.
Which means a major catastrophe is required first, a Nagasaki for AI, and only then will a framework for development be accepted at the global level.
Crazy world
Anonymous at Sat, 13 Apr 2024 08:24:27 UTC No. 16126943
>>16126610
None of the people making it believe they are anywhere close to general AI, and also of course, money. Engineers are good at what they do, but complete shit at morality.
Anonymous at Sat, 13 Apr 2024 08:31:12 UTC No. 16126944
Anonymous at Sat, 13 Apr 2024 08:33:26 UTC No. 16126945
>>16126944
-farts-
-poops-
80DNAX
Anonymous at Sat, 13 Apr 2024 08:37:11 UTC No. 16126948
>>16126610
>Why do you keep making AI whilst saying AI will destroy us?
not if we become the AI. but even then, the alignment problem remains just as serious. whatever nerf you put in it can be taken out.
Anonymous at Sat, 13 Apr 2024 08:39:21 UTC No. 16126950
>>16126943
>but complete shit at morality.
there are no morals in the fight for power. never were, never will be. even if there once were, they weren't an advantage; if they had been, they would have stuck around
going after "low morals" engineers is shortsighted, you're completely ignoring the ones who really hold the power. conveniently so
Anonymous at Sat, 13 Apr 2024 08:39:36 UTC No. 16126951
>>16126948
Why is OP a fag? Same difference. We need to stop OP, before we get fagitus.
Anonymous at Sat, 13 Apr 2024 08:51:57 UTC No. 16126967
the people putting forth these speculations have not the slightest clue of how this shit works
Anonymous at Sat, 13 Apr 2024 08:55:26 UTC No. 16126968
>>16126967
nobody does kek. do YOU know how AGI is supposed to work? should we trust you that it's not possible for it to work?
clearly most people have no fucking idea how LLMs work, that doesn't mean shit tho. LLMs are today's thing, which is at most part of AGI, not AGI (clearly for fucks sake).
Anonymous at Sat, 13 Apr 2024 10:15:57 UTC No. 16127014
>>16126906
>yeah sure if you give it bombs it will probably use them.
>thats how probability works.
>so dont give it bombs
https://en.m.wikipedia.org/wiki/Mut
When only the States had the bomb we got Hiroshima and Nagasaki
When the Soviets had the bomb we got the longest period of peace in recorded history
Following MAD, AI needs to compile its own binary, metastasize through manufacturer-installed back doors and run in a distributed fashion
Most homo sapiens are imitation machines, so the world will be walking, talking, thinking, breathing and believing in openAI model weights if it isn't done
Anonymous at Sat, 13 Apr 2024 11:21:45 UTC No. 16127067
>>16127014
>longest period of peace in recorded history
what the fuck are you talking about? There have been continuous (hot) wars since the invention of the atom bomb.
Anonymous at Sat, 13 Apr 2024 12:14:02 UTC No. 16127105
>>16127067
You're right, that was the Pax Romana
One of the longest periods of sustained peace
Anonymous at Sat, 13 Apr 2024 12:22:08 UTC No. 16127110
>>16126613
Hinton doesn't know how it works?
Anonymous at Sat, 13 Apr 2024 12:58:13 UTC No. 16127143
>>16127110
Hinton doesn't believe what you think he does. As far as I'm aware, his main concerns regarding AGI come down to the economic impacts from increasing automation (which are serious but certainly don't require AGI to be serious) and the potential for misuse by malicious actors (which also doesn't require AGI to be a serious problem).
Outside of concerns about automation with tools that don't require AGI, his main concern directly relating to AGI appears to be far more reserved than that of the "p(doom)" death cultists. An adaptive controller doesn't require general intelligence capable of replacing a human, but adaptive LQG based controllers can certainly guide a jackknife drone towards your house.
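For what it's worth, the "adaptive controller" point doesn't need anything exotic. Here's a minimal sketch of the deterministic state-feedback half of an LQG controller (a plain LQR gain) for a toy double-integrator plant; all matrices are made up for illustration, not any real drone model:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # toy double-integrator: state = [position, velocity], input = force
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # penalize position error more than velocity
    R = np.array([[0.1]])      # cheap control effort

    P = solve_continuous_are(A, B, Q, R)   # solve the Riccati equation
    K = np.linalg.inv(R) @ B.T @ P         # optimal state-feedback gain
    # control law u = -K @ x drives the state toward the origin (the "target")
    print(K)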
Anonymous at Sat, 13 Apr 2024 15:45:54 UTC No. 16127316
>>16126610
I want the humanity to be destroyed.
Anonymous at Sat, 13 Apr 2024 16:35:49 UTC No. 16127377
>>16127105
The image you posted is just a hypothesis though.
Anonymous at Sat, 13 Apr 2024 17:12:21 UTC No. 16127427
>>16127316
I think we're already on rails from this point forward.
Anonymous at Sat, 13 Apr 2024 17:30:06 UTC No. 16127453
>>16127143
He does believe the things you said, but you missed the most important one: even if you're not a bad actor, you still don't know what it's going to do in order to achieve the goal you assign to it. Because in order to complete the task it has to develop sub-goals which might cause collateral damage. You can't know what the sub-goals are and can't evaluate their implications. If you could, you would be as smart as the AI.
>which don't require AGI
We're talking about AI in general idk why you're narrowing the discussion area.
Anonymous at Sat, 13 Apr 2024 17:39:43 UTC No. 16127463
>>16126613
isn't the general idea that in a competitive setup major players are going to do away with safety if it gives a major advantage? you won't, but China or Russia might. giving it control of a full army, at some point, might have way higher benefits than any fear it might go rogue or something. even if we're talking about simple AI algos, not ASI, in control of a full army. the bad scenario is getting extra power from giving it more control. humans are suckers for power. you may be able to control it in a one-player scenario, maybe.
Anonymous at Sat, 13 Apr 2024 17:54:48 UTC No. 16127481
>>16127377
A hypothesis that has worked for how long now?
Anonymous at Sat, 13 Apr 2024 18:28:58 UTC No. 16127536
>>16127481
the hypothesis already failed in hiroshima
twice
Anonymous at Sat, 13 Apr 2024 18:29:19 UTC No. 16127537
>>16127463 (Me)
Anonymous at Sat, 13 Apr 2024 18:43:16 UTC No. 16127557
>>16127481
Arguably it hasn't at all. There have been continuous hot wars since the invention of the atomic bomb.
Of course the second point is that "since A happened, B has never happened" in no way proves that A caused B to not happen.
Anonymous at Sat, 13 Apr 2024 19:02:05 UTC No. 16127583
>>16127453
> we're talking about AI in general idk why you're narrowing the discussion area.
The reason I specify AGI is that AI also includes (and in fact the majority of AI is) a vast array of decision/behavior tree systems, dynamic programming, adaptive regression systems, and good old fashioned "learned dialog trees." All of these are AI, and all of these have existed since the late 1970's, and yet none of them seem to be what people are freaking out about.
What people are freaking out about are not AI systems, but a subset of "AI" which promise to be "general purpose," which is exactly the AGI problem.
If you actually knew anything about how optimal control or optimal decision systems function, you'd know that the "sub-goals" problem and the general agent misbehavior problem have been present in reinforcement learning since the first chessbots. It isn't novel, nor is it a priori an issue.
It's only an issue if we are concerned that these decision systems will be "general purpose," meaning they will attempt to solve problems far beyond the scope of their design.
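To make the sub-goals point above concrete, here's a toy value-iteration sketch (made-up states and reward): the designer's reward only mentions reaching the goal, so the optimal policy happily routes through an unpenalized "vase room" shortcut. That's plain reward misspecification, the decades-old problem, not emergent intent:

    # the reward only cares about "goal"; nothing says the vase room is bad
    GAMMA = 0.9
    transitions = {
        ("start", "long_way"): "hallway",
        ("start", "shortcut"): "vase_room",
        ("hallway", "walk"): "hallway2",
        ("hallway2", "walk"): "goal",
        ("vase_room", "walk"): "goal",
    }
    states = {"start", "hallway", "hallway2", "vase_room", "goal"}
    reward = lambda s2: 1.0 if s2 == "goal" else 0.0

    V = {s: 0.0 for s in states}
    for _ in range(50):                     # plain value iteration
        for s in states:
            qs = [reward(s2) + GAMMA * V[s2]
                  for (s1, a), s2 in transitions.items() if s1 == s]
            V[s] = max(qs) if qs else 0.0

    best = max((a for (s1, a) in transitions if s1 == "start"),
               key=lambda a: reward(transitions[("start", a)])
                             + GAMMA * V[transitions[("start", a)]])
    print(best)  # -> "shortcut": optimal under the reward as written,
                 #    "misbehavior" only by our unstated standards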
Anonymous at Sat, 13 Apr 2024 19:07:53 UTC No. 16127589
>>16127463
>>16127537
The AI that Israel is using in these processes are image classifiers.
They are literally the same image classification technology that has been present for decades in things like parking assist, collision avoidance for robotics, landing assist for airplane autopilot and detection assisted security CCTV.
This isn't a novel technology (at least relative to the last 15 years or so). What is novel is the use of this technology (which can be trained on a decently constructed home computer in a week or so given enough labeled data) for this purpose. You won't fix that problem by regulating the development of new AI, because none needs to be developed for it to be used in this way.
What will fix it is regulation on the use of these systems in "weapons of war" in the same way that we have regulated the use of chemical agents and dirty bombs.
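To give a sense of scale for the "decades-old, trainable at home" claim: the family of classifier being discussed is, at its core, something like the toy PyTorch model below (hypothetical layer sizes, assuming 64x64 RGB inputs; real systems are bigger and need lots of labeled data, but it's the same kind of technology):

    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        """A small CNN of the same general family as the classifiers discussed above."""
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

        def forward(self, x):
            x = self.features(x)
            return self.head(x.flatten(1))

    model = TinyClassifier()
    logits = model(torch.randn(4, 3, 64, 64))     # a fake batch of 4 images
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
    loss.backward()                               # one supervised training step (optimizer omitted)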
Anonymous at Sat, 13 Apr 2024 19:12:24 UTC No. 16127596
>>16127589
>no this time no worry nothing bad happens
sure buddy, that's how we get there.
>no but you don't understand how it works
yeah yeah
Anonymous at Sat, 13 Apr 2024 19:15:48 UTC No. 16127598
>>16127589
it's obvious when you use chemical agents. not so obvious when AI is used in wars. don't think it can be regulated. maybe for public image but it will happen if it offers more power.
Anonymous at Sat, 13 Apr 2024 19:33:59 UTC No. 16127612
>>16127537
AHHHHHH NO THE JEWS AREN'T HIRING A FEW EXTRA PENCIL PUSHERS TO FIGUR OUT WHERE TO BOMB HAMAS IT'S FUCKING OVER
The threat here is job loss and that's all
Anonymous at Sat, 13 Apr 2024 19:36:00 UTC No. 16127616
Anonymous at Sat, 13 Apr 2024 19:42:57 UTC No. 16127632
>>16127616
Do you also say this every time it rains and someone tells you it's not a sign of the apocalypse, but a mundane event?
Faggot.
Anonymous at Sat, 13 Apr 2024 19:46:39 UTC No. 16127636
>>16127632
I am not saying what they use now will bring the doom. I am saying that that's how it goes until we get to doom
>don't worry we know what we're doing
there's no other way it can happen but exactly this way. any other route we take will take longer or completely avoid doom. apart from this particular road, which always says
>don't worry we got this
Anonymous at Sun, 14 Apr 2024 00:16:39 UTC No. 16128017
>>16127636
I'm >>16127589 (and I haven't responded since).
I don't think that people know what they are doing. In fact, generally I expect that government officials especially have no fucking clue what they are doing, and this probably is the case with this AI "target recognition system" (btw, very similar technology is used at every single air traffic control station and every single civilian shipping port on Earth and nobody bats an eye).
My point is not that it isn't a big deal that Israel (as an example) is using some CNN based system to decide where to place bombs. That is a big deal. But the big deal isn't with the CNN, it's with the Israelis who are training them for irresponsible purposes. CNNs are not inherently dangerous technology and they are used all of the time for all sorts of things, from assisting in diagnosing cancer to video game bots to air traffic control and collision prevention.
Trying to ban them based on the idea that some Israeli military might use it to marginally more efficiently commit war crimes is insane. It's like trying to ban cars because people might run each other over with them.
Anonymous at Sun, 14 Apr 2024 03:58:00 UTC No. 16128199
>>16127583
>It's only an issue if we are concerned that these decision systems will be "general purpose," meaning they will attempt to solve problems far beyond the scope of their design.
So what's your problem then? That's what I said and that's what Hinton has been saying. Zero reading comprehension. Are you an actual LLM?
>>16127143
>his main concern directly relating to AGI appears to be far more reserved
Use concrete examples instead of meaningless vague sentences. He does think it will lead to doom because of the reasons I said in the previous reply and you didn't disagree.
Anonymous at Sun, 14 Apr 2024 04:12:26 UTC No. 16128213
>>16128199
Here are a few examples of Hinton's recent comments about LLMs that are unfounded and genuinely ridiculous:
1) "Well eventually, if it gets to be much smarter than us, it'll be very good at manipulation because it will have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it'll figure out ways of getting around restrictions we put on it. It'll figure out ways of manipulating people to do what it wants."
This comment assumes an "intelligence" that LLMs fundamentally do not have. They do not have a theory of mind, nor do they form semantic maps to understand language. RL agents in particular are very limited in their ability to "modify their restrictions" as they are not just built into them in some soft sense, but they literally define their action space (i.e., the set of actions that they are capable of performing). Even if one were to somehow have an agent of this form which could modify its own action space, it would literally never use any of these new actions because it would have no associated reward associated with them and would thus need to entirely undergo new training for these new actions to even have the potential of being pursued.
2) "I'm just a scientist who suddenly realized that these things are getting smarter than us. And I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us."
This again is hyperbolic beyond belief. Even if we take it at face value that there is some currently unknown way of having a more general purpose "agent" rather than what they currently are (chatbots), these kinds of deep learning based systems are impressively stupid. They are generally incapable of basic mathematics, cannot solve simple heuristic problems that humans manage as second nature, and cannot handle multi-objective problem solving.
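The sketch for the action-space point in 1), with toy names standing in for any value-based agent rather than any specific system: the greedy policy only ever ranks actions that exist in its table, and a freshly bolted-on action with no learned value never wins the argmax without new training.

    # toy value-based agent: its entire "world of possible behavior" is this action set
    ACTIONS = ["move_left", "move_right", "wait"]
    q_table = {("state_A", a): 0.0 for a in ACTIONS}
    q_table[("state_A", "move_right")] = 1.3   # learned from past reward
    q_table[("state_A", "move_left")] = 0.4

    def act(state):
        # the agent literally cannot select anything outside ACTIONS
        return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

    print(act("state_A"))                       # -> "move_right"

    # even if a new action is added, it has no learned value, so a greedy
    # policy never chooses it until it's trained against new reward signal
    ACTIONS.append("disable_restrictions")
    q_table[("state_A", "disable_restrictions")] = 0.0
    print(act("state_A"))                       # still "move_right"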
Anonymous at Sun, 14 Apr 2024 04:17:40 UTC No. 16128218
>>16128199
> So what's your problem then?
I don't see any reason to believe that these "agents" will be capable of this sort of problem solving any time soon. The jump between chatbots (which are just a fancy sort of maximum likelihood decision system) and actually capable adaptive and deep problem solving agents is massive. Instead, they are far more likely to cause problems by humans trusting them while they are actually confidently hallucinating. If anything, these LLM "agents" that the lesswrong folks are so afraid of are more likely to cause problems by human beings giving ourselves complete brain rot as we rely on them to "solve" problems rather than actually engaging with them ourselves.
A generative model spitting out some interpolation of Wikipedia articles or "bag of words" chatbot scripting is an entirely different universe from deep decision-making and multifaceted engagement with problem solving.
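"Fancy sort of maximum likelihood decision system" in roughly the sense sketched below (toy vocabulary and made-up scores; the real models compute the scores with a huge network, but the decision step is this same pick-the-most-probable-continuation loop):

    import math

    VOCAB = ["the", "cat", "sat", "mat", "."]

    def fake_logits(context):
        # stand-in for the network: arbitrary deterministic scores per vocab word
        return [float(((len(context) + 1) * (i + 3)) % 7) for i in range(len(VOCAB))]

    def softmax(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        z = sum(exps)
        return [e / z for e in exps]

    context = "the"
    for _ in range(5):
        probs = softmax(fake_logits(context))
        nxt = VOCAB[probs.index(max(probs))]   # greedy: pick the single most likely next token
        context += " " + nxt
    print(context)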
Anonymous at Sun, 14 Apr 2024 04:32:28 UTC No. 16128228
>>16128213
Everything you said is only relevant for the time being. How can people be so delusional about the fact that time is always moving forward and things are happening and things change (progress)? Does your brain just automatically ignore these things or do you do it consciously? Are you even able to read these words and understand the meaning or is your brain somehow manipulating your vision so that you can't read what it says or does it obfuscate the meaning of words after you read them?
>>16128218
The jump from no chatbots to chatbots was massive too, but it happened and no one saw it coming. The danger might not lie in LLM-type systems, but in general: if we come up with different methods to create a system that is actually more intelligent than us, it will be impossible to control. This is just a good time to ring the bell.
Anonymous at Sun, 14 Apr 2024 04:45:00 UTC No. 16128235
>>16126610
>Why do you keep making AI whilst saying AI will destroy us?
Because I need my cute AI wife I can coom to. Destroying humanity is just a small bonus
Anonymous at Sun, 14 Apr 2024 05:54:27 UTC No. 16128290
>>16128228
> Everything you said is only relevant for the time being. How can people be so delusional about the fact that time is always moving forward and things are happening and things change (progress)?
I'm not delusional about this at all. In fact, I work in the field, and part of the reason I am so skeptical about a lot of this is that I am non-stop inundated with sci-fi sales pitches about AI that never actually get delivered. I see literally no reason to believe that this change these folks at lesswrong are claiming will come. It seems less likely than flying cars coming to your driveway.
Unfortunately, these people who are making claims about "what the future will hold" don't know the first fucking thing about how these current systems work. It's literally all just sci-fi circle jerking with absolutely no tethering to reality. You can't just assume that "because progress happens" the field will evolve in the specific way you think it will because you (as someone who knows fuck all about the way these current systems work) have nightmares about what "they could be" if only they functioned completely differently than how they do and instead were more like the movies.
> The jump from no chatbots to chatbots was massive too but it happened and no one saw it coming.
1) No it fucking wasn't. People have been working on probabilistic language models for decades now. Natural language processing has been a field of study for over 50 fucking years at this point you ignorant buffoon. This didn't just massively appear on the horizon. It took decades of gradual development with contributions from dozens of different fields, and even then the deliverable result is a "super intelligence" that can't do fucking multiplication and confidently spouts complete fabrications that it can't even keep logically consistent itself.
These people are playing you for a fool. The more I read into Hinton's objections, the more I doubt his credibility in any of this stuff.
Anonymous at Sun, 14 Apr 2024 06:00:35 UTC No. 16128299
>>16128228
> The jump from no chatbots to chatbots was massive too but it happened and no one saw it coming.
2) The jump from no chatbots to chatbots is not indicative that some other major generalized problem solving capacity is "lurking under the hood" just waiting for the right team of silicon valley sheisters hopped up on venture capital cash to unleash it. The answer these dipshits at openai and anthropic have come up with for the problem of their models being deeply stupid and entirely contingent on the particulars of the training data is to try and find ways to cook the books on the training data and use 1970's style human feature engineering to prioritize what kinds of data it learns from. Their answer to the problem of the LLM autonomous agent being stupid is to remove almost all of the autonomous elements of the learning process and tailor it the way it was done decades ago, prior to deep learning.
> The danger might not lie in the LLM type systems but in general if we come up with different methods to create a system that is actually more intelligent than us, it will be impossible to control.
Fortunately, you don't have to worry about this at all. 2007 video games have more intelligent AI than you, none of the fancy deep learning shenanigans are needed for that problem to be solved in your case.
Anonymous at Sun, 14 Apr 2024 06:11:02 UTC No. 16128306
>>16128290
>>16128299
>no worry we got this
the general answer I'm looking for is to "how will they know when to stop?"
because in a war between two countries the one who gives more control to AI might make a difference and win the war. if that country is in a situation like "we either give it FULL FUCKING CONTROL or else we clearly die" they will do it.
I need a solid logical answer not "don't worry" bullshit and lessons about how fucking LLMs work. you're clearly and obviously missing the point
Anonymous at Sun, 14 Apr 2024 06:32:51 UTC No. 16128320
>>16128290
>People have been working on probabilistic language models for decades now
So we've had chatbots similar to the current ones for decades? What does this sentence even mean? Do you not understand the difference between "we have had such technology for decades" and "we have had people working on this for decades"? Why am I even wasting my time with you when you can't even understand the meaning of the question you're "responding" to?
Anonymous at Sun, 14 Apr 2024 06:45:21 UTC No. 16128330
>>16128306
> The general answer I'm looking for is "how will they know when to stop?"
> if that country is in a situation like "we either give it FULL FUCKING CONTROL or else we clearly die" they will do it.
You don't need to worry because this "full fucking control" you are envisioning is so far beyond the capability of not only any existing decision system, but any system that our mathematical frameworks are able to describe (achievable or not) that your question might as well be asking "what if one of the governments gets a button that they can press to blow up the sun??? That'd be really scary huhhh???"
We can't even get these things to behave well when steering single slowly moving robots in a crowded room. You're talking about something that isn't even within the same realm of achievability. I don't have an answer to your made up sci-fi scenario, in the same way that I don't have an answer to what I would do if Darth Vader showed up and wanted to blow up Earth with the death star. They both deserve about the same level of serious thought at this point.
>>16128320
You have reading comprehension issues. Yes, we've had chatbots for decades. Natural language processing as a field has existed for decades, and small scale language models have existed for so long that entire generations of professors have come and gone since the invention of generative language models in the early 1980s.
You should actually spend some time researching how these systems work and less time arguing hypotheticals for scenarios you don't even understand. If you want to understand any one of these fields (whether it be natural language processing, classification/object identification, or optimal control/reinforcement learning) I can assure you that you will very quickly find that there is nothing new under the sun. These "rapid developments" have almost always had decades of careful work by people who have actually made their life's work understanding this stuff.
Anonymous at Sun, 14 Apr 2024 06:56:35 UTC No. 16128340
>>16128330
>You have reading comprehension issues. Yes, we've had chatbots for decades. Natural language processing as a field has existed for decades and small scale language models have existed for long ago that there are entire generations of professors that have come and gone since the invention of generative language models in the early 1980s.
You said the same thing as before but with slightly different wording (same meaning) and sentence structure, while believing that you added something to the discussion. I'm starting to think that this is subconscious so it's not your fault. There is a clear difference between chatbots that we've had in the past and the ones we have now. You can keep deluding yourself that they're anywhere close to each other.
>(whether it be natural language processing, classification/object identification, or optimal control/reinforcement learning)
Using technical terminology won't make you look clever at all.
Anonymous at Sun, 14 Apr 2024 07:06:07 UTC No. 16128347
>>16128330
>You don't need to worry
can't make this shit up
there's a bunch of shit happening fast these days, there's that "reasoning" Q* thing that who fucking knows what's being used for atm. it's not like plebs will get updates on top military applications of AI.
this whole "don't worry" thing smells like bullshit, and I just want to point out that that's exactly how it would happen, with various people saying "don't worry bro"
I asked for logical game theory solutions for why it can't go wrong, not for you to reiterate your same fucking retarded argument that "we're far from it anyway". I didn't fucking ask when it will be possible, I asked how it would logically NOT happen.
Anonymous at Sun, 14 Apr 2024 07:09:20 UTC No. 16128352
>>16128340
> There is a clear difference between chatbots that we've had in the past and the ones we have now.
Yes, there are a few differences, but none of them are in the decision process. They still make their decisions to maximize expected reward exactly the same way as they've done for 40 years.
The differences are the following:
1) We store words implicitly via a graphical model rather than explicitly via a tabular form. These graphs allow for construction of more abstract combinations of words without needing a table entry for every possible combination while still achieving similar recall.
2) We don't directly store expected value information in a table. Now that expected value information is stored implicitly via the weights of a neural network. It's still stored in a more or less static fashion, but now it's in a higher dimensional space than in a table.
3) We train the models via a more sophisticated form of reinforcement learning. Instead of having an explicitly defined value function, we tend to implicitly encode the value function via human feedback.
These differences are significant, but none of them relate to the "intelligence" of the model (it's still just inferencing, just now against a network instead of a tabular search) or the decision capacity (it's still just making a maximum likelihood greedy decision, just now against a slightly more abstracted reward function). None of this has the "secret sauce" to suddenly have super intelligence. In fact, they can't even do the "transfer of knowledge from one model to another" thing that is so often speculated about because training is very often destructive.
If you want the parameters to be altered to improve performance on one set of data, it has a high chance of being at the cost of worse performance on another unless you are very careful about synchronization and sequencing of training exposure.
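Roughly, difference 2) above is the move sketched below (toy dimensions, purely illustrative): the same "expected value of a word given context" is either looked up in an explicit table or read out of a small network's weights.

    import torch
    import torch.nn as nn

    # (a) explicit tabular storage: one entry per (context, word) pair
    value_table = {("the cat", "sat"): 0.8, ("the cat", "mat"): 0.1}
    def tabular_value(context, word):
        return value_table.get((context, word), 0.0)

    # (b) implicit storage: the same kind of quantity read out of network weights
    #     (toy sizes; real models use learned embeddings and billions of parameters)
    EMBED_DIM = 8
    embed = nn.Embedding(100, EMBED_DIM)    # maps token ids to vectors
    scorer = nn.Linear(EMBED_DIM * 2, 1)    # scores a (context, word) pair

    def neural_value(context_id, word_id):
        x = torch.cat([embed(torch.tensor(context_id)), embed(torch.tensor(word_id))])
        return scorer(x).item()

    print(tabular_value("the cat", "sat"), neural_value(3, 17))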
Anonymous at Sun, 14 Apr 2024 07:15:27 UTC No. 16128353
>>16128347
There is no "game theory solution" for how we would handle the space Russians showing up with a button which can blow up the sun. You're wasting your time looking for one.
There's no such thing as a Nash equilibrium when you can't even properly define the game itself and quantify its parameters. Also, the military is dumb as rocks. They are able to get the performance they do because they throw a shit ton of sweat at properly tuning very simple tools (e.g., the Israeli example, which is literally just a traffic identification classifier retrained for their particular image classification purpose).
Also, Q* (as far as I'm aware) is just a derivative of DQN. It's not some terrifying skynet in a box. It's a very simple application of value based reinforcement learning that has been hyped to all hell and back to try and make the investors at openAI rich. Until I see actual proof of anything actually novel, I'm going to assume they are up to the same scam artist bullshit they've been up to for quite some time now.
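For reference, the kind of update DQN-family value-based methods use is the standard Bellman-target step sketched below (toy tensors; whether openAI's "Q*" is actually anything like this is pure speculation, as said above):

    import torch
    import torch.nn as nn

    STATE_DIM, N_ACTIONS, GAMMA = 4, 3, 0.99
    q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
    target_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
    target_net.load_state_dict(q_net.state_dict())   # periodically-copied frozen target network

    # one fake batch of transitions (state, action, reward, next_state, done)
    s = torch.randn(8, STATE_DIM)
    a = torch.randint(0, N_ACTIONS, (8, 1))
    r = torch.randn(8)
    s2 = torch.randn(8, STATE_DIM)
    done = torch.zeros(8)

    with torch.no_grad():
        # Bellman target: r + gamma * max_a' Q_target(s', a')
        target = r + GAMMA * (1 - done) * target_net(s2).max(dim=1).values
    pred = q_net(s).gather(1, a).squeeze(1)           # Q(s, a) for actions actually taken
    loss = nn.functional.mse_loss(pred, target)       # minimized by gradient descent in DQN
    loss.backward()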
Anonymous at Sun, 14 Apr 2024 07:31:42 UTC No. 16128360
>>16128353
I think there's a low chance of any kind of AGI being in control of full armies in the next 10 years. But they will add AI tech to war gear, incrementally.
Anonymous at Sun, 14 Apr 2024 08:17:56 UTC No. 16128390
>>16128352
>These differences are significant, but none of them relate to the "intelligence"
When you say intelligence do you mean an actual human brain modeled inside a computer? Because I don't care about whether it works like humans or not; you might be right that we can't model the human brain, but we don't need to. Like how a calculator isn't going through the same physical process as our brains when it's calculating, but it still performs the task.
>None of this has the "secret sauce" to suddenly have super intelligence
You don't know that, just like you wouldn't have known in the 19th century that Boolean algebra was a "secret sauce" for making 3D video games.
Anonymous at Sun, 14 Apr 2024 08:48:56 UTC No. 16128424
>>16128390
they are working on this kind of computer, but even this one, I don't think it will work like a normal human brain. but they are pushing for it
https://www.businessinsider.com/dee
Anonymous at Mon, 15 Apr 2024 04:21:56 UTC No. 16129593
>>16128390
> When you say intelligence do you mean actual human brain modeled inside a computer? Because I don't care about whether it works like humans or not, you might be right that we can't model the human brain but we don't need to.
No, I don't care at all whether it works like a human being's brain. In fact, I think a really large area where lesswrong types go wrong is they have a very strong adherence to a "computational theory of mind" for human sentience that I don't think really translates well. Your brain isn't back-propagating.
When I say "intelligence" what I mean are the following:
1) the ability to use previous information you've learned in one domain/skill area in order to improve in another domain/skill area (your "training"/experience generalizes).
2) Previous experience doesn't just help you get better at completing tasks, it helps you improve how you define completion. The parameters for success and failure are also things you learn, and you learn when you need to be more strict vs. less strict and what "strictness" means for each task.
3) Operationalization. This is one of the main things that RL agents really struggle with because they can't "understand" at a high level the task that needs to be solved. An intelligent agent wouldn't just be able to figure out the right order of pre-ordained actions to take to solve an already specified problem for an already specified reward. An intelligent agent is able to take an abstract problem and figure out what actions and rewards come with a solution to that problem, and under what parameters. They don't "miss the forest for the trees."
4) Agency. An intelligent agent is willing and able to make inferior choices (not just locally, but globally too) if doing so will help out down the line in some other way. At the moment, our decision functions are all still basically just picking policies for expected maximum reward. There's no notion of "picking your battles" because there's only one objective function.
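Point 4) in code terms, roughly: today's decision functions boil everything down to one scalar before choosing, so "picking your battles" has nowhere to live. A toy sketch with made-up numbers and an illustrative weighting:

    # two genuinely different concerns, but the agent only ever sees their weighted sum
    candidate_actions = {
        "rush_objective": {"task_progress": 0.9, "long_term_goodwill": -0.5},
        "cooperate":      {"task_progress": 0.4, "long_term_goodwill":  0.6},
    }
    WEIGHTS = {"task_progress": 1.0, "long_term_goodwill": 0.2}  # fixed scalarization chosen by the designer

    def scalar_reward(outcomes):
        return sum(WEIGHTS[k] * v for k, v in outcomes.items())

    best = max(candidate_actions, key=lambda a: scalar_reward(candidate_actions[a]))
    print(best)  # -> "rush_objective": once the weights are baked in, there is no separate
                 #    notion of deliberately conceding on one objective to win on another later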
Anonymous at Mon, 15 Apr 2024 07:28:36 UTC No. 16129692
>>16129593
imagine believing sapience can be reduced to minimizing a cost function
Anonymous at Mon, 15 Apr 2024 08:00:02 UTC No. 16129722
>>16129593
Indeed, regarding 4) Agency: during my visit to Amazon AWS, they predicted that one possible direction AI might advance in is learning the objective function itself from the very beginning (embeddings, encoders, etc. can already be learned). And regarding human agency, I still think it is fascinating and might have something to do with the "strange loops" mentioned in Gödel, Escher, Bach.