Post by Teh Skitch on Apr 24, 2009 17:03:41 GMT -5
While writing a sci-fi story, I came across an ethics problem: would deleting a hyper-advanced AI (basically a life minus a body) be considered murder? Also, would such AIs lead us to value human life less? I can see both sides to this.
Edit: Kind of a description-ish of what I'm talking about: It can like/dislike the user based on user interactions and make decisions. In World mode he can open files and programs on your computer, either by choice or by command, and you can communicate with him via Notepad. He comes with a built-in dictionary of simple words and commands (good, bad, open Notepad, etc.), but he can learn up to 1,000,000 words and 500 commands, along with grammar, in a way more efficient than any human. The user can even have full-length conversations with it! If you mistreat him, he will not do as you say, and if he gets out in World mode (he can let himself out if you let him figure it out) he may even destroy documents. (World mode = the entire computer)
Post by Magical on Apr 24, 2009 17:12:24 GMT -5
What?
Post by Brighteyes on Apr 24, 2009 17:55:31 GMT -5
Are you talking about, like, a consciousness without a body? In my opinion, it would be considered 'murder' to delete an intelligent (not necessarily human-level intelligence, but whatever) consciousness that thinks, can make decisions, and so forth.
Edit: And I don't think that such advanced AI would make people value human life less... then again, I guess it could. It would depend on the people.
Post by Teh Skitch on Apr 24, 2009 18:14:50 GMT -5
Here's an excerpt where the character who creates the AI is talking about it: It can like/dislike the user based on user interactions and make decisions. In World mode he can open files and programs on your computer, either by choice or by command, and you can communicate with him via Notepad. He comes with a built-in dictionary of simple words and commands (good, bad, open Notepad, etc.), but he can learn up to 1,000,000 words and 500 commands, along with grammar, in a way more efficient than any human. I can even have full-length conversations with mine! If you mistreat him, he will not do as you say, and if he gets out in World mode (he can let himself out if you let him figure it out) he may even destroy documents. I cut out unimportant stuff. (World mode = the entire computer) I say no, because an AI, as I later state in the story, is less a life and more a realistic illusion of life.
Post by Reena on Apr 25, 2009 0:14:13 GMT -5
I guess I say no, it's not murder, only because it (the AI) doesn't feel pain, and never could. Like if we pick a flower, it's not murder. Yes, I know flowers don't think, but their cell activity is pretty impressive nonetheless, and we are stopping it. So we stop a computer. I mean, I love the movie AI, and I cry whenever I see the robots get hurt, but in real life, I can't imagine anything artificial being that convincingly human.
One should probably ask: if it looks and acts like a human, then why is someone wanting to kill it? That is creepy enough in itself, wanting to kill something resembling your fellow man.