Monday, March 26, 2007

Blog 8: Robot Stories/Bicentennial Man

After reading "Bicentennial Man", I kept turning the situation over in my head. Should robots have laws to protect them as if they were human? Any thought that arose in support of or contrary to this notion would be knocked out by a thought supporting the opposite. I decided to discuss the situation with a friend in order to get a view that wasn't influenced by any of the discussions in class. The conversation follows, and the identity of my friend has been protected. He shall be referred to as "DefenderOfHumanity" to give him a positive attribute.

Nick: Would it concern you for a robot to be given human rights, under the assumption that the robot has gained the ability to feel, think, and proceed as a human does?

DefenderOfHumanity: no

DefenderOfHumanity: well i think they should not have those rights no matter what

Nick: why shouldn't they have those rights if they feel and think exactly as we do?

DefenderOfHumanity: because at the end of the day it's not real

DefenderOfHumanity: it's created and can be taken away

Nick: so when and if we eventually get to the point where we have working mechanical prosthetics, those can be taken away and thus we become less human, and therefore we forgo our rights?

DefenderOfHumanity: i was actually just thinking about something similar to this today for no reason at all. but it's like how PETA wants animals to have rights, i don't want people to be cruel to animals but they're not people. we are the dominant species. same with robots because it would be an inanimate object still

Nick: yet to be inanimate means not to move, whereas the robot i described would move and think and act as humans do, in a human way... just thinking about this potential has foreseeable consequences for both sides

DefenderOfHumanity: the so called feelings that robots could have would only be fake, just set programs executed by extreme machines

Nick: well aren't we an extreme machine? our thoughts are actually nerve impulses conducted along pathways through our brain cells and propagated by biochemical devices

DefenderOfHumanity: but humans have souls

I ceased playing devil's advocate at this point in the conversation and allowed my friend the victory. This is primarily due to my lack of knowledge in the realms of philosophy and theology. However, would the ultimate question to end such a debate be a simple yet complex theological one? Would the robot have to have a soul in order to be protected by human law?

1 comment:

Jerome Langguth said...

Your dialogue raises some very important, and complex, philosophical questions about personhood, rights, and human nature. Let's start by asking what the connection between rights and personhood is supposed to be. One pervasive view ties rights to our freedom (or "autonomy") as rational beings. On that view, if robots (or apes, dolphins, or whatever) end up being demonstrably "rational" then, if we are consistent, we will assign them the same rights and responsibilities that we currently assign to human beings.

The objection that AI rationality isn't rationality because it's artificial seems to beg the question. One wants to know why having a non-biological form rules out personhood. There may be an argument (see Dreyfus) that supports this view, but more work would need to be done.

As for the soul, that, of course, depends upon what you mean by 'soul.' Does Koko the gorilla have a soul? What about Data on Star Trek (I know that he is fictional)? Again, more work would have to be done to "defend humanity" along these lines. Most "rights theories" from modern philosophy make no explicit reference to a soul (though they may assume it). Which is not at all to say that we shouldn't consider the argument.