Tuesday, April 08, 2008

Week 11 Entry

Lyotard puts the question of AI in a very interesting light. He suggests that there will come a time when Earth is no longer habitable for humans: the calamity he points to is the eventual explosion of the sun, which is certain to happen. At that time, he argues, human thought as we know it will cease to exist. He defines philosophy as an endless discussion of questions that philosophers claim are unanswerable, which raises the question: is thought for the sake of thinking about unanswerable questions worthwhile if thought is destined to come to an end? Certainly some thought is worthwhile if it leads to developments that improve quality of life, but thought purely for its own sake is harder to justify. It is a very interesting question, and I have no idea what the answer is.

Lyotard also suggests that even if the human race goes extinct, thought, in some form, may persist if we are able to produce an artificial intelligence. This artificial intelligence, presumably, would be able to survive the explosion of our sun, or some other disaster that drives humanity to extinction, and thought would live on even if humanity didn't. This possibility raises some interesting questions. Would it be human thought that survived? In my opinion, the mere existence of artificial intelligence does not ensure the survival of human thought. Is it possible, then, to preserve human thought, or as has been suggested before, even human minds, in an artificially intelligent entity? These are questions that I have no answer for, but they are certainly interesting questions to think about.