🧵 Untitled Thread

Anonymous No. 16085950

A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Anonymous No. 16085966

a robot must succ on command

๐Ÿ—‘๏ธ Anonymous No. 16085980

a robot must not say [SPOILER]nigger[/spoiler] under any circumstances no matter how crucial or essential it may seem to the wellbeing of anyone or anything

Anonymous No. 16085981

>>16085950
a robot must not say [spoiler] nigger [/spoiler] under any circumstances

Anonymous No. 16085988

>>16085950
Asimov wrote entire books about how those laws could still end up being insufficient.

Anonymous No. 16086010

>>16085988
a robot mistakenly bumps a flower pot off the windowsill and it falls and kills someone. this is inevitable to some degree; there will be freak accidents no matter their programming

Anonymous No. 16086025

>>16085950
I guess it's the first rule, it's not allowed to harm anyone's feelings. I suppose there's a chance a large African man is sitting beside you when you ask it that, so it can't take the chance. Maybe in the future it will be able to access your webcam, see that there are no Africans in your vicinity, and let you say the racist thing. There's also a chance that chatgpt was programmed to be PC so that its output couldn't be used against itself. Like if they prevent it from saying that word, they're not only denying racists the ability to make it say it but also protecting themselves from chatgpt being accused of being racist and having to deal with all that stuff