Technical activity automatically eliminates every nontechnical activity or transforms it into technical activity. This does not mean, however, that there is any conscious effort or directive will.
Jacques Ellul
Wednesday, April 02, 2008
Dear All,
Please reply to this post with your response to the film AI. Try to point out at least one scene that reflects the views of Dreyfus and/or Lyotard.
1 comment:
Indeed, as I stated in my previous blog, AI is a fascinating, disturbing, and thought-provoking movie that raises many questions pertaining to ethics and technology. Many scenes in the film touch on issues discussed by the philosophers Hubert Dreyfus and Jean-François Lyotard. The movie’s portrayal of an artificial intelligence with a “love sensor” that, once triggered, enables the robot to love would seem ridiculous and impossible to Dreyfus, a great critic of how AI is researched and studied today. According to Dreyfus, AI specialists are moving in the wrong direction if they wish to make such a ‘loving’ or ‘feeling’ robot (if this is even possible). Dreyfus criticizes the mentality that the human brain is just a machine that can be replicated mechanically through “symbol manipulation”; in other words, he does not believe that humans compute everything from a humongous list of explicit rules. Human experience is much richer than that, involving more than rational maxims followed by the brain. For example, human beings sometimes act irrationally, often because of emotion, something Dreyfus holds robots can never truly have if they are built via symbol manipulation. In sum, Dreyfus believes that AI researchers will ‘hit a wall’ (so to speak) when they discover that disembodied robots built through symbol manipulation and made to follow rules and maxims can never be exactly like humans: they cannot experience the world with a body, and they cannot truly feel, much less love. Thus, Dreyfus would scoff at AI, which portrays an AI researcher supposedly using systematic means and symbol manipulation to create a loving robot. Many of the robots in AI that can only do as they are programmed (unable to break the rules encoded within them), like Gigolo Joe, are congruent with Dreyfus’s view of AI research. For instance, when David (the loving boy robot) asks Gigolo Joe why he does something, Joe responds, “I don’t know why. It’s just what I do!” However, David, who can think, hope, love, plan, and aspire, would seem very unrealistic to Dreyfus.
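To make “symbol manipulation” a little more concrete, here is a minimal, purely illustrative sketch (in Python; not from the film, and not from Dreyfus) of the kind of explicit rule-following system he criticizes. Every rule, symbol, and response name below is hypothetical.

```python
# A toy sketch of the "symbol manipulation" picture Dreyfus criticizes:
# behavior produced by matching symbols against an explicit, hand-written
# list of rules. All rule names and responses here are made up.

# Each rule maps a pair of perceived symbols to a scripted response.
RULES = {
    ("mother", "smiles"): "say('I love you, Mommy')",
    ("mother", "leaves"): "follow(mother)",
    ("stranger", "approaches"): "say('Hello')",
}

def respond(agent: str, event: str) -> str:
    """Return the scripted response for the (agent, event) symbols, if any."""
    return RULES.get((agent, event), "do_nothing()")

if __name__ == "__main__":
    print(respond("mother", "smiles"))  # -> say('I love you, Mommy')
    print(respond("mother", "frowns"))  # -> do_nothing(); no rule covers this case
```

Dreyfus’s point is that no such list, however long, covers the open-ended situations an embodied human handles effortlessly, and that matching symbols against rules is not the same as feeling anything; the unhandled case in the last line hints at why.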
There are also scenes in AI relevant to our discussion of Lyotard’s question, “Can thought go on without a body?” The end scene correlates almost perfectly with the problem Lyotard poses about what happens to thought at the end of the world. Lyotard begins his article on that question by reminding the reader that the Sun will one day explode and destroy the Earth. If humans still remain on Earth, they will all be killed. As a philosopher, Lyotard worries that the death of humanity may mean the death of thought altogether. Yet, Lyotard notes, AI research has given humanity hope that ‘human’ thought can become independent of corporeal confinement via robotics and artificial intelligence. Though the Earth’s destruction may mean the end of human bodies, if human thought carried by artificial intelligence is placed in a piece of matter that can withstand the Sun’s blast and be supported “by sources of energy available in the cosmos generally,” then human thought can survive without a body, making it immortal. However, Lyotard admits that there are problems with AI carrying on ‘human’ thought. For instance, human thought is greatly affected by human experience, which, in turn, is shaped in part by how our bodies relate us to the world around us. Robots can never have fully organic bodies, so they would never have the same experience and, by extension, the same thoughts. It is true that robots could potentially ‘think’ much like humans, but in the end, only a human can think as a human would. If a robot thought exactly as a human does, it would be human and no longer artificial intelligence.
[Spoilers] AI ends thousands of years in the future, after every last human has died. Besides the evolved sort of robot that has never experienced humans and has taken over the Earth, the only being that remains is David, the loving robot boy who just wants to be a ‘real’ boy so that his Mommy will love him. When the evolved mecha find David, they are fascinated by him because he has had contact with humans and displays human emotions. Thus, the end of AI portrays both sides of Lyotard’s argument. Advanced intelligence is continued and improved through the evolved mecha, yet these creatures are far from exact replicas of humans, and whether they have human emotions is debatable. David, in his ability to love, is much closer to being human; in fact, it could be argued that his thoughts are genuinely human. Yet he is not human but mecha, and thus it could always be argued that his love is not really human after all. The ending of AI, then, leaves one in doubt as to whether true human thought could be carried on via artificial intelligence.