Tuesday, April 08, 2008

Journal 13: Reflections on Asimov's "Robot Visions"


Imagining a future filled with nonbiological humaniform robots is, for me, rather intimidating. To imagine our culture carried on not by humanity itself but by machines is unfathomable. One of the issues I brought up in class today concerned the fact that robots have no free will, since they are constrained by the Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Yes, in this future without humans, robots will have no humans to injure or allow to come to harm, and no humans to give them orders. But what if the humanity to be protected is the culture itself? The first robots were programmed with certain rules, and although we cannot imagine how they will create other robots, it is doubtful that the first robots (created by humans) could create robot beings capable of obeying higher robot laws than the ones given to their robot creators. It seems to me that robots can measure up to humanity only to a point; they fall short in many respects. If it remains true that created beings are, in many respects, inferior to their Creator's abilities, then the robots that come after the ones we created will be forced to obey versions of the Laws of Robotics in which other robots, rather than humans, are the ones obeyed.
I also wanted to speculate about the "sad time" that the robot narrator in the story derived from Archie. Such a catastrophe (the destruction of mankind) is utterly unthinkable with a world population of about 6.6 billion. When I contemplate the cause of this destruction, a cause so horrible and "sad," I imagine humans destroying one another at their own hands, as in nuclear warfare. That would indeed be plausible and utterly "sad": a future in which a brilliant culture, the culture of humanity, is destroyed by our worst aspects and features. While the future would be preserved in these plausible humaniform robots, it would see an ideal version of humanity, not the kind that was capable of destroying itself. Is this so bad, to see the ideal version of what humanity could have been? For many, it's a paradise. For me, it may be a paradise, almost like heaven, but as we discussed in class today: the people who do good in the world choose good over a greater and infinite number of evils. What value and significance is there in doing good if everyone does good? When good shines through evil, it shows the world what humanity is and how much it could be, and presents us with reality. The good is utterly and undeniably significant, not a forced choice.