Monday, April 02, 2007

Insufficient Data (Blog #9)

Star Trek: The Next Generation frequently raises interesting questions about human relationships with technology, but never more obviously than in the case of Data. The most interesting observation about the episode we viewed in class was that Commander Data, an android devoid of emotion, appears to have feelings. His actions seem compassionate, kind and sympathetic, but he should not be able to behave in this manner if he has no emotion, should he? Data is also one of the most popular characters in the Next Generation universe, which also seems anomalous given his inability to reciprocate emotion.

What could be producing this perception of emotional capacity? It might be that Data simply appears to have emotions because, as feeling people, we can't imagine any animate thing not having them. In other words, we tend to anthropomorphize and project our emotion into Data's void. Or, perhaps, Data may actually be developing emotion. Finally, and most cynically, Brent Spiner, because he is a human being, might be creating emotion where there should be none.

It is my opinion that Data is not capable of emotional responses and does not display them. Yet he is still my favorite Next Generation character, largely because he acts the most humanely and sympathetically of any crew member. Inability to emote does not exclude Data from acting in a way that human beings understand as good. Moral actions, in fact any actions involving free will, do not necessarily require emotional support. Spock proves this point (despite McCoy's objections). He acts according to logic, yet he can still befriend, love and act morally. Perhaps Spock is just a hopeless hypocrite (that would be illogical, though), but some philosophers believe that rationality, not emotion, is what makes us truly human; Kant is one such philosopher. For the sake of brevity, I will confine myself to the Kantian argument.

As we mentioned in class, Kant believed that human dignity, or personhood, is rooted in our rational intellect and ability to set our own goals and make decisions.
If we can accept this, then to determine Data's "humanness" it is more pertinent to ask whether or not he can act rationally and freely. Unfortunately, philosophers such as Sartre (we are not free not to choose) have questioned whether human beings themselves have this ability, so it becomes even more difficult to prove the same for robots. For the purposes of this discussion, however, I will not question whether human beings can act freely and will simply assume that they can, thereby establishing the fundamental difference between, say, Captain Picard and your average robot vacuum. Data exhibits what appears to be autonomous behavior. He spends time learning human arts, reading and even practicing sneezing. No one commanded Data to engage in these tasks. Data has the ability to reflect on his own existence and recognize the difference between himself and humans. He has self-awareness.

Does this mean he is an autonomous, rational being? Not necessarily. Unfortunately, much as with chimpanzees and gorillas, we cannot fully comprehend the interior life of a complex android. What appears like autonomy might just be an epiphenomenon. For instance, MyTMC (aka Jenzabar) frequently acts independently of instruction. It deletes documents, closes windows and even grades exams. A more computer-savvy individual might be able to explain Jenzabar's actions more accurately, but I wouldn't call them autonomous action. Beyond this vague autonomy, Jenzabar shows no ability to think rationally.

Data, however, behaves in a manner that could be considered rational. He reasons quite effectively. For instance, when Picard referred to Data's "brother" Lore as an "it", Data immediately transferred the appellation to himself. If Picard's statement is followed to its logical conclusion, then he would have to call Data an "it" instead of "he" as well. Data had to employ logic and a keen understanding of language to recognize this.

Thus far, following Kant's lead, we would probably have to grant Data personhood. Something seems wrong with this, however. First, Data's rational autonomy might still be anthropomorphism. If this is the case, then Data is once again a machine. Based on the evidence above, this seems less likely. Another problem enters in, though, when Data's origin is considered. He was created by a human, a genius, but still a human. Soong, Data's creator, endowed him with very human qualities, sans emotion (which seems to have been a response to Lore's instability). Because of this, Data is limited by the intellect of his creator, who cannot give more than he has.

Soong may have failed to include crucial aspects of human brain function that make full autonomy possible. The lack of real emotion, for example, may be detrimental to Data's freedom. He does later acquire an emotion chip, but this too is more programming, more simulation. All of these factors serve to limit Data's ability to set goals and pursue them freely. In addition, it is doubtful that Data has an understanding of moral right and wrong. I believe it would be impossible for Soong to impart knowledge of good and evil, metaphysical concepts that cannot be reduced to a program. This, I think, would make it very difficult for androids, no matter how advanced, to become moral agents. It drastically limits Data's autonomy and would essentially bar true free will: the ability to choose good or to choose evil. As a disclaimer, I will not approach the question of the existence of good or evil, since both have been challenged. There is not enough space here.

Another important difference between androids and humans is the fact that androids remain inorganic. In Bicentennial Man, Andrew managed to give himself many prosthetic human organs, but he was never organic. Importantly, his brain was never organic. If we believe Descartes' odd proclamation that the pineal gland is the point at which the soul connects to the body, we would have to exclude Andrew from having a soul, or at least a spiritual-corporeal unity. Granted, Descartes' statement is wildly implausible, but it does present one reason why an organic brain might be a crucial part of human uniqueness and personhood (since Descartes believed the mind-soul was the true self). In addition, robots of every kind lack natural life. They are not born, and they do not die (without some kind of alteration). Thus, it is possible (as we saw in the TNG episode) to rebuild an android and save its positronic brain. Humans, animals and plants, by contrast, are not so easily restored. The frailty of life is another reason for human sacredness.

For these reasons, I am loath to grant an android, even my favorite one, Data, personhood. This does not mean we can treat androids or robots as we please. As we discussed in class, a very good argument can be made for secondary duties towards artificial humans. Because of their closeness to humanity in appearance and activities, it would degrade human beings to mistreat them. For instance, if Commander Riker were to hit Data repeatedly with a baseball bat for jollies, he might be more inclined to do the same to a human. It would also deform his own nature and sensibilities in much the same way that pornography might. Data might also be considered a work of art. He should therefore be treated with the same respect due to such a creation.

It is interesting. Human beings agonize constantly over their origins. Data, and other androids, know that humans have created them (although Cutie in "Reason" by Asimov challenges this). The Christian tradition recognizes that human beings were created in the image and likeness of God. The "Imago Dei" principle is the foundational principle in the Christian moral tradition and the chief source of human dignity. Data, unfortunately, was created only in the image and likeness of human beings. We are incapable of giving androids natural life. Data's lack of natural life is exemplified in his relationship with his cat, Spot. Data is an incredibly complex piece of robotic science, yet he still lacks something basic and impossible to attain. Something Spot the cat has: life.



Iris

1 comment:

Steve said...

I'd love for you to read a script that I wrote for STLTNG. I think it would give you some food for thought. Write to me if you'd like to give it a read.

Steve