"Technical activity automatically eliminates every nontechnical activity or transforms it into technical activity. This does not mean, however, that there is any conscious effort or directive will." - Jacques Ellul
Monday, April 21, 2008
Blog 13: Robotics
The laws of robotics are a very interesting topic. At first glance, they look thorough, and it seems like they would keep humans free from harm. But I have the same concerns as Asimov when he says, “If we imagine computer like brains inside the metal imitations of human beings that we call robots, the fear is even more direct. Robots look so much like human beings that their very appearance may give them rebellious ideas” (423). How can anyone guarantee that everything will go right? Things go wrong every day with technology, whether it’s a CD skipping on your ride to school or a computer that boots up only for you to discover the paper you thought you saved is gone. So who’s to say these robots won’t malfunction?

I liked the example Asimov used about the sufficiency of safeguards: we put automobiles through a considerable number of simulations and crash tests in the hope of saving lives, yet 50,000 people are still killed by automobiles every year. The idea of artificial intelligence seems wonderful at first glance, but what would you think if one out of every 100 robots malfunctioned? Or what if a robot got so out of whack that it killed 100 people? There is no guarantee that the three laws of robotics will completely control the robots, and if they become as humanlike as some people hope, the outcome could be disastrous.