Wednesday, April 30, 2008

Journal 13

When considering the Laws of Humanics, it was interesting to compare and contrast them with the Laws of Robotics. The Laws of Humanics basically state that humans must not harm other humans or robots unless they themselves are in danger. For robots, the rule is to protect a human at all costs, even if it means sacrificing your own life. I find these ideas interesting because I believe the basic law should be to defend yourself if someone attempts to harm you, but you should also follow a kind of moral law where you don't allow harm to come to you unless it is to protect your fellow man, or even, in some cases, your fellow robot. If robots could in fact take on such human likenesses and characteristics, why shouldn't we value their existence and want to protect them as much as we would a fellow human? If artificial intelligence were actually achieved, we would have to go back to questioning their human-like qualities, as in the case of Andrew Martin. He slowly developed many human characteristics and even desired to die like a human. In a case like Andrew Martin's, I believe that both the basic law of protecting yourself and the moral law of protecting your fellow "being" should definitely be considered.