Monday, April 28, 2008

Journal 13: Laws of Humanics

Isaac Asimov is famous for creating the 3 Laws of Robotics. Yet, the 3 Laws of Humanics, which he created later, are also intriguing (if not, in my opinion, completely adequate). I think the first Law of Humanics is perfect. Mirroring the first Law of Robotics, this Law says that humans shall not harm other humans nor allow other humans to come to harm. Not only does this Law help the robots follow their own first Law, but, if followed, it would create a more peaceful world than humanity has ever known. Also, Asimov makes a good point: why ask robots to do something we ourselves are unwilling to do? Why make them unable to harm humans if we ourselves do so all too frequently?
The second Law of Humanics poses a slight problem. It states, "A human being must give orders to a robot that preserve robotic existence, unless such orders cause harm or discomfort to human beings." The word I have a problem with here is ‘discomfort.’ I believe that human beings should not (unless necessary to save a human - see the third Law) order a robot to do something that would harm it physically, mentally, or (if it is a sophisticated-enough being) emotionally. Indeed, humans must strive to give orders that are not only not harmful but actually "preserve robotic existence." Yet, if a human must give an order that does not preserve robotic existence, it should be only when a human’s safety is truly at stake. If an order given to a robot to preserve its existence merely discomforts a human being (and I cannot even imagine what such an order would be), then the human who is bothered can discuss the situation with the human giving the order or simply leave the area. There is no need to put such a word as ‘discomfort’ in the second Law. I think ‘harm’ is sufficient to convey what Asimov is trying to say.
Finally, the troublesome third Law: "A human being must not harm a robot, or, through inaction, allow a robot to come to harm, unless such harm is needed to keep a human being from harm or to allow a vital order to be carried out." The first part of this Law is excellent. Human beings should have a responsibility not to harm robots, if only because they are fantastic works of human ingenuity and intellectual/technological pursuit. Nor do I have an issue with a robot being harmed in order to keep a human being from harm, but I think this part of the Law needs to be more specific. In my opinion, the amount of harm allowed to come to a robot should equal the amount of harm that would otherwise have come to the human the robot saves. For instance, a robot should not be allowed to shut down just to save a girl from scraping her knee. The biggest issue I have with the third Law, however, is the last phrase about carrying out a vital order. No ‘vital’ order should be made that jeopardizes a robot’s safety. A robot should not be used as a self-sacrificing device at human beings’ disposal. A robot, for example, should not be made to go into a burning building to save a precious document if it means that robot would have to be shut down afterward. The last part of the third Law, I think, is an infringement on the rights that robots should have; it denies them personhood. Given that, whenever the term ‘robot’ is used here, it refers to an Asimov robot – a sentient, intelligent being capable of self-reflection, reason, and sometimes emotional feeling – it would be a crime to allow the last part of the third Law to stay.
