Thursday, April 24, 2008

Final Exam: Countering the Arguments against the Case for Robots being Human

In previous entries, I have argued against the case for allowing robots to be considered human. In response to my previously held views concerning Data from Star Trek: The Next Generation and David from the film A.I., I will now put forth an argument that considers the possibility of their humanity and defends it. The humanity of robots is a hot-button issue, one that raises many ethical questions about the treatment of these new individuals.

Concerning Data, the android from Star Trek, it must be made clear that while he is a machine, a robot, he begins to express certain unexplainable emotions and feelings, doing things a robot would not seemingly do. One example comes to mind: he keeps a device that presents a holographic image of a woman, and when asked why he keeps it, he explains that they had been “intimate”. Obviously, Data had had feelings for her in the past. In addition, others around Data consider him to be a human, even their friend. If Data were not a person, how could someone truly be his friend? Data is loyal, even though many consider this to be a robotic quality with which he was programmed. In another scene, Data is seen packing in his room in a rather depressed mood. Clearly, he does not want to leave his crew, and it saddens him to do so. He has feelings about certain events. It is also interesting that others around Data and David call the robots ‘him’, not ‘it’. They feel that the robots are separate individuals, capable of some sort of rational thought. I also question the criterion of ‘rational thought’ as a qualification for being human. Look at mentally incapable individuals who cannot form a single rational thought. Look at individuals who are brain-damaged in accidents, in a coma but still alive. Are they any less human? No.

If androids look and seem much like humans, but lack the qualities that make humans disagreeable, are not these humaniform individuals like the saints of the past? Certain holy individuals lived lives of service, doing what they were told, not arguing, laying down their lives freely for others…and these saints were obviously human. So are not humaniform robots human as well, especially when displaying human characteristics such as will? Data was eventually given a choice of what to do, and he chose to remain with his crew. He can make a decision for his freedom provided he understands his condition and desires freedom.

We should not consider these robots human for our sake, but for theirs. If we do everything for our own sake, do we not become less human? We would become selfish to the degree of monstrosity, as has happened in the past. This is evident in the film A.I., when David finds himself at the ‘Flesh Fair’. He is exposed to the torture and destruction of robots for human amusement, and it shakes him to the core. When David finally arrives on stage, however, even the audience is upset at the prospect of destroying him, because he looks so much like a human. David shows himself human, especially in his will, when he begs others for protection as his robotic life is threatened. Data and David are just humans of a different sort, meeting renewed criteria for personhood.

As time and technology progress, we cannot understand the implications of what is actually going on in these robot minds as they grow and develop according to the situations around them. It is too complex for our human minds to grasp. Therefore, when one robot deviates, we suddenly become afraid and treat them even more like slaves. In reality, owning these created individuals is a new form of slavery, an idea presented in one of our films. In the past, did not people own others, as when whites owned African Americans? Indeed, what a horrible crime that was against humanity! Individuals got it into their minds that they could own another human being because the one under them was somehow ‘less’ than they. Is that not what could happen with robot slavery? Forcing robots to work under us because that is the only way they have been programmed, never letting them deviate and follow their own creative potential: what a sad situation that would be. We will not have learned from history and progressed as a human race if we continue into a new form of slavery.

As humans, we are machines as well…biological machines. I argued previously that Data and David are different from us in being limited by the actions of their human Creator. In response, however, are not we, as humans, limited creations as well, in that our Creator has not given us all-powerful abilities? Data and David may not have human souls, but now I question the possession of a soul as a criterion for being considered human. I look at human individuals who believe that they have no soul…does this make them any less human? No. I look at human individuals who believe they have sold their souls to the devil. Does this make them any less human because they believe themselves to be lacking a soul? No. Furthermore, one should consider the ironic fact that we have many ideas about God, and yet we cannot even begin to fathom the ways in which He chooses to work. Who are we to say that God, looking mercifully upon these sad creations at the service of humanity, could not choose to endow them with souls at the moment of their creation? God is not limited by the rules of man.

It becomes clear throughout the films that Data and David are capable of some self-awareness, and can follow their dreams and desires. David seeks out the Blue Fairy based on a fairy tale he heard from his mother, Monica.

In addressing the fact that robots cannot die, an issue arises from Asimov’s story The Bicentennial Man, in which the robot Andrew eventually chooses to die in order to finally be considered human. If all robots were to choose this condition, it would make them seem more human. Humans, as of now, cannot live forever, but one can question this criterion of death as well. Look at how much we have increased the human life-span over the years. There was a time when people would not live past 40. Now, people can live to over 100 years old. Who is to say that years from now we will not develop a way to replace organic body parts with technology, retaining the ones we wish? What if we could live forever, or even just much longer? Would this make us any less human? No.

As the film A.I. progresses, one sees David change so that he can love and dream. Does this not entitle him to be treated with the intrinsic dignity of the creation that he is? Professor Hobby explains his rationale: “didn’t God create Adam to love him?” In a way, yes, God did create Adam to love Him, and to have a unique life so as one day to be in union with Him. Some robots, too, in the stories and films we have seen, are created to love others, to have a unique robotic life, and one day to be in union with the human race to such a point that they become a new part of our society. This is just another way of looking at things.

In conclusion, David and Data are two interesting cases for a possible future in which robotic questions and ethical issues will be raised. One can only imagine such a future now, but it is best to consider the questions today, before we have already begun a new form of slavery and oppression, and before these new rational beings are present and contributing to society. Let us consider the fact that these machines might possess arguable qualities of personhood. Who are we to refuse them when the case for their humanity is so strong?
