Technical activity automatically eliminates every nontechnical activity or transforms it into technical activity. This does not mean, however, that there is any conscious effort or directive will. Jacques Ellul
Wednesday, April 30, 2008
The Laws of Humanics
Asimov presents a well-thought-out case for the Laws of Humanics in his robot stories. But are these laws enough? I feel they are inadequate.

The first law states that "a human being may not injure another human being or, through inaction, allow a human being to come to harm". We have laws similar to this in our real world and culture, but criminals break them every day. People murder other people and, to a much lesser extent, people lie to others, which causes harm. Humans are not logical beings, and our decisions are often clouded by emotional responses. These complications make it difficult to determine whether a human actually meant to harm another, or whether an action will, in the long run, do more harm than good.

The second law states, "A human being must give orders to a robot that preserve robot existence, unless such orders cause harm or discomfort to human beings". My first issue with this law is that "discomfort" can mean a lot of things. Some people may feel discomfort with a robot being even remotely near them. Does this mean a person can order a robot to destroy itself just because he feels the robot looked at him funny and caused discomfort? And what if a robot accidentally steps on a human's toe? That could be really uncomfortable, but, hey, accidents happen. Is that a degree of discomfort that lets a human order a robot's destruction?

The third law can also be left to interpretation. As I stated previously, humans are not necessarily logical beings. In some of the stories we read, robots disobeyed orders from humans because the overall outcome was to protect humans. It may not have been apparent at the time, but ultimately, it helped. When I was reading "Reason", I found myself thinking along these lines. How would a human be able to come up with the existence of a "Master" and all the other strange things QT comes up with in order to save the earth from the storm?
Was the robot actually conscious of this, or did he seriously believe that there was a whole other explanation for existence? The odds of a human being able to save a robot in this fashion seem a little far-fetched, to say the least...