Two articles from Gizmodo for people to ponder:
Technical activity automatically eliminates every nontechnical activity or transforms it into technical activity. This does not mean, however, that there is any conscious effort or directive will. Jacques Ellul
Saturday, March 31, 2007
Wednesday, March 28, 2007
Blog #8 Robot Visions and Bicentennial Man
I must say that I'm very glad we have moved into some reading that has really excited me. I am reading more of these stories than we have to, and I may have found my new favorite author. I hope this guy wrote a lot of books. I liked Robot Visions. I like how this short story starts with us finding out that our narrator is a member of a group that has discovered time travel, though we could never understand the mathematics of it all without Temporalist training. It's perfect, and what's even more perfect is the end of the story, when we find out that our narrator is a more advanced robot who actually tells Archie what to say in case the humans of the group ask him questions. They designed a robot that ultimately made the decisions that would ensure that human life ended and robot life continued.
The next story interested me too. I found it amazing that the robot had begun to have the same kind of feelings that we as humans have. Andrew slowly became more and more human. He even got to the point where he wore clothes, gave orders to humans, and wanted to die. HEY WAIT! That's not what we humans want to do, is it? Don't we still have nudist colonies? Don't we want to talk issues over and just get everyone to agree? Who wants to be the bad guy and make everyone work? And last, but certainly not least: don't we want to live forever? That Andrew sure had it wrong. How could he want to do all the things that we humans shy away from? Or do we? I thought about this some more after we discussed it in class. Andrew wanted to die. He made an individual decision. I remember the video we watched in class where they were going to save the guy's brain. He also just wanted to die. Maybe that's it. Maybe Andrew was human after all. He made an individual decision. We all make these kinds of decisions, and sometimes other people think we're crazy too. I wonder if someone has looked at the decisions I've made and thought they were crazy. I'm not crazy! Am I?
To blog or not to blog, that is...
I'm starting to like this blog thing. I hate to digress, but I feel I must. I really find it interesting that I took a course on Ethics and Technology. The main book was Technology and the Lifeworld. By the end of the book, it's clear that Don Ihde is against science, the military, and maybe even the government. The point here is that technology and science might not be all that good, and yet I have done nothing but learn new ways of using the internet and doing this blog thing. Ihde may have wanted us to turn our backs on technology and science, but this course has made me use technology more and made me realize how ingrained we are in it.
Blog # 7 Why do we hide who we really are?
There have been many times when I was reading this book that I was lost and confused. But after reading through the final conclusion a couple of times and then looking back through the book, I found that I had missed what Ihde was trying to say all along. I should have taken the time to read the ending first so I could understand what he was trying to tell us. His interests seem to truly lie in the fact that he's a conservationist. No, that might be wrong too. Maybe he's an animal rights activist. I hear him extolling the virtues of several different causes in this epilogue, and I wonder what his true beliefs might be. I seem to pick up that whales are more important than the nuclear reactor and that we need to find other ways of producing power for our society. I wonder what he has done for our society. Has he conserved energy or raw materials in some way? Was this book made from recycled paper? Did he insist that the only energy used to make this book come from windmills? I did find something that seemed to bother him, and that was science. It's the evil that we all need to beware of in our lives. Even the government favors science: the government gives all this money to science, while the poor arts and humanities fields must do without the money they want. I must in all honesty say that I believe we must truly reflect on the things that are important, but what had Don Ihde 'created' when he wrote this book? Did he leave something that will change our society for the better? Will we compare this book with the invention of the telephone? How important will this book be fifty years from now? I think Mr. Ihde would have been better served to keep his politics out of his book when he wrote it.
We find what's really important when he talks about Galileo II and brings up that "funding agencies geared to social needs, economically just wealth distribution, issues of human nature, and conservation of planet earth...would be a nice daydream for tomorrow night" (218). (I must thank Dr. Easley for hammering home the proper way to do a quote.) Mr. Ihde, technology can't be stopped. It can only be controlled. I have come to believe that we are embodied with technology and can never go back. I once told a story in the classroom that said everyone can go back if they want to and leave technology behind. I consider it the ultimate in leaving technology behind: take all your money and buy land wherever you want. Drive up to the edge of your new property and get undressed. Leave your car and the clothes on your back, and stroll back into Eden. Leave technology behind and become a truly free person.
Tuesday, March 27, 2007
Fear the Robots!
It is a wonder what one can do with Lite Brite to convey a warning against robots:
http://www.youtube.com/watch?v=YxZJYbVd1hE
Blog #8 Robot Visions
Asimov's ideas and thoughts are very interesting and leave you thinking more and more about the future and what is to come. Whether it be some sort of natural disaster or the created robots taking over from mankind, according to the Temporalists there would be some sort of disaster that would end mankind. In sending the robot through time, they found that mankind was still in existence, or rather that it was humaniforms resembling humans so closely that the robot was unable to tell the difference. The robot did report that there had been some sort of sad time, but no one would tell him what this sad time was. They only told him that there was a sad time before this new age and that his exploration of time travel should not go looking into it. In the end the narrator reveals that he is the first humaniform, and that his task is to make sure the future is informed of the time travel and to welcome the robot when he appears, but to tell him only a limited amount of information about the new age. The thought of robots being so humanlike that it's hard, if even possible, to tell the difference is scary, and something humans should not attempt to create. Even if, according to these stories, the robots will be totally controlled by humans and made to obey only their orders, there is still the chance, or rather an almost certain guarantee, that one or more of the manufactured robots will be defective, and that error could cost lives or bring other consequences that don't need to be risked. I do believe in robots being helpful and making things easier for human beings, but I don't think they should be so advanced that they feel or learn from the things they do. With the continuing advancements in robotics, it's only a matter of time before robots become more humanlike, and hopefully the risks of these advancements are not as bad as what many stories portray.
blog 8
Robot Visions was a very interesting read. The story has some aspects that really had me thinking. One was that the Temporalists were extremely pessimistic. They were certain that the world in 200 years would be completely in ruins. They had no hope for humanity or the world in the future. I wonder if Asimov is as unhopeful about the future as the Temporalists were. Are the Temporalists voicing his beliefs? And are there really people out there who are that pessimistic about the world? I personally am very optimistic about the future. It was kind of depressing reading about how there might be people who think that the human race will be completely wiped out, either by robots or by its own doing.
Another aspect was that robots become so humanlike that they can't be distinguished from humans if people don't know that the "human" is really a robot. Along those same lines: will robots really be the ones inhabiting the earth? Depending on who you talk to, some will say having robots so humanlike would be good, while others say it would be bad. I think it would be bad if robots had the same characteristics as humans. Every human is unique, and by giving robots human characteristics, I think we would be getting rid of the uniqueness of being human. Also, without humans, robots would not exist, so to have robots controlling the earth would be far-fetched. In order to have robots and give them human characteristics, you need humans to program them. To have all robots and no humans seems impossible to me.
Monday, March 26, 2007
Blog #8 Robot Visions
Isaac Asimov begins his science fiction book by defining what the word robot means. The first definition presented for a robot is "an artificial object that resembles a human being." Even though this implies that robots must have an appearance and physical characteristics similar to human beings, the majority of robots that have either been developed or described in literature are completely different from human beings. The stereotypical image of a robot is some awkward, artificial figure made of metal plates, electric wires, and gears. This view of the robot, as separate or isolated from the life of living organisms, is what most frequently appears in literature spanning from ancient times to the 1800s. As human beings, our natural fear of robots destroying or taking over the human race incorporates some form of this primitive, machine-like view. Yet robots were designed to perform activities and tasks to make man's life simpler and less demanding. Only after much scientific research and mental advancement occurred did the human race begin to wonder if the robotic world could someday replace man. Many philosophers and members of the scientific community warned scientists in the field of artificial intelligence of the possible dangers in exploring and advancing technology. Could our natural curiosity lead to our ultimate demise? It is believed by many that there are certain things not meant for man, which must be left alone for the well-being of the human race. In modern times, when what was considered "the impossible" has been achieved and surpassed, the fine line between the role of god and human beings seems to be fading. Although there are many benefits to further research and the invention of AI devices, technology comes with a risk. The risk has an irreplaceable price: the destruction and demise of the modern race of men.
Blog 8: Robot Stories/Bicentennial Man
After reading "Bicentennial Man", I was thinking over the situation in my head. Should robots have the law to protect them as if they were human? Any thought that revealed itself in support of or contrary to this notion would be knocked out by a thought supporting the opposite. I decided to converse with a friend on the situation in order to get a view that wasn't influenced by any of the discussions in class. The conversation follows, with the identity of my friend protected. He shall be referred to as "DefenderOfHumanity" to give himself a positive attribute.
Nick: Would it concern you for a robot to be given human rights, under the assumption that the robot has gained the ability to feel, think, and proceed as a human does?
DefenderOfHumanity: no
DefenderOfHumanity: well i think they should not have those rights no matter what
Nick : why shouldn't they have those rights if they feel and think as exactly as we do?
DefenderOfHumanity: because at the end of the day it's not real
DefenderOfHumanity: it's created and can be taken away
Nick: so when and if we eventually get to the point where we have mechanical and working prosthetics, those can be taken away and thus we become less human, so therefore we forego our rights?
DefenderOfHumanity: i was actually just thinking about something similar to this today for no reason at all. but it's like how PETA wants animals to have rights, i don't want people to be cruel to animals but they're not people. we are the dominant species. same with robots because it would be an inanimate object still
Nick : yet to be inanimate means not to move, where the robot i described would move and think and act as humans do in a human way.....just thinking about this potential has foreseen consequences for both sides
DefenderOfHumanity: the so called feelings that robots could have would only be fake, just set programs executed by extreme machines
Nick: well aren't we an extreme machine? our thoughts are actually nerve impulses conducted along the path through our brain cells, and propagated by biochemical devices
DefenderOfHumanity: but humans have souls
I ceased playing devil's advocate at this point in the conversation and allowed my friend the victory, primarily due to my lack of knowledge in the realms of philosophy and theology. However, would the ultimate question to end such a debate be a simple yet complex theological one? Would the robot have to have a soul in order to be protected by human law?
Blog 8 on Robot Stories
In my eyes there isn't any new technology that has not had some negative effect on something in the world. The creation of aircraft is good, but aircraft have also brought terrorist attacks to our country. The automobile was a great invention that speeds up our traveling, but it also kills a lot of people each year due to wrecks. The real question is: what would robots bring us? This is where chapter one of Robot Visions starts bringing in robots. The chapter says that robots will take over the world after us and do a better job of taking care of it than we did. The chapter moves through the past, future, and present. I feel the only way this will happen is if we as humans program these robots to do better, because we will know how to do it better. The other question is: can robots ever be more intelligent than humans? My opinion is no, because we humans will have to make them, and we can only make them as smart as we are, because that's all we know. This topic is very interesting and I enjoyed reading this chapter.
Blog 8- reflections of robot visions
I have to begin by saying that I am very glad we are finally reading a book that really is readable and interesting. The story I've recently read that I'd like to discuss is the one entitled Reason. I think it is interesting that technology (i.e., robots) could be developed to the point where it becomes aware of itself, to the point where it is able to think and reject something told to it by a human. In the story the robot used pure reason to come up with the idea that it was superior to humans... even though that wasn't the point of the story, it was still suggested that robots could be "better" than humans (not needing to sleep, eat, and maintain an organic body). Even though a robot body may be more durable, it does not have the ability to think what is not logical, at least not in this story. I'm still very uncomfortable with the idea of using robots to do things that humans do. If robots were to start doing human jobs, what would the humans do? Is humanity's destiny to be controlled by robots while we sip margaritas on the beach in Cancun? While that sounds very nice, I'm afraid it's like the saying goes: "If it sounds too good to be true, it probably is." Robots may one day control a lot of things, but like all technology they seem to have a shelf life, so humans will likely always be needed on stand-by. You definitely wouldn't want a robot flying an airplane to suddenly freeze up because a wire in its positronic brain has burnt out. I think robots would be better served as human assistants (as a human-operated technology), never the ones in control. Margaritas on the beach sound nice, but for all practical purposes it just isn't probable or possible, in my opinion.
Wednesday, March 21, 2007
Blog 8: Introduction & Robot Visions
There is one quote or line that really struck my interest and stuck in my mind: "all devices have their dangers. The discovery of speech introduced communication - and lies. The discovery of fire introduced cooking - and arson... The automobile is marvelously useful - and kills Americans by the tens of thousands each year..." It is interesting to think about all that we have created that has in turn brought bad side effects. What would robots bring to the world? What is our fear? Would they turn on us? When we developed speech, did we ever think about the negatives that might come from it? When we created automobiles, did we sit back and think about how many people they were going to kill? "Robot Visions" (Chapter 1) goes into this idea. This story is very interesting in itself: the idea that the robots would take over the world after us and do a better job than us. I found the story very clever with its notion of past, present, and future. The "humans" of the future had heard that Archie was coming purely because of the present humaniform robot, the first humaniform robot. I found myself reading just wanting to know how the story was going to end. If we develop robots, can they ever be smarter than us? Can they develop emotions? It is all very interesting to consider.
On Robotics
Despite the use of robots in laboratories and factories, there is still something about robots that places them in the realm of science fiction, at least in my mind. The use of robots in everyday life seems both bizarre and unrealistic to me. I feel that I could never imagine a world where robots were a critical part of my life.
Asimov however makes me uneasy in how realistically he discusses robots. His Three Laws of Robotics in particular seem like the most probable set of laws to be applied to robots in the future. These laws indicate that robots will be built with these laws in their nature, something that I am doubtful of. I think that robots with these laws built in them would never be possible because there is always going to be one person with the drive and ability to override the robot's installed nature.
In the story, "Archie" did not receive the same maternal attention as Koko or the creator-creation as the M-5. Rather, Archie was treated as a machine with a specific purpose. He was treated as technology that was disposable while the M-5 and Koko were considered valuable to the doctor and Patterson with each willing to give up a part of their lives to "save" them.
Overall I really enjoyed the story, mostly because of how it ended. Throughout the story I was suspicious of the narrator and even predicted that he might be a robot. Now I'm just bragging. I really did enjoy the story because the ending seemed to give robots (the humaniforms) this god-like mentality, and even hubris. Are robots even capable of hubris? The way the story portrayed the future made me assume that they were at least heading there. If they were made in human likeness, maybe they are even capable of human mistakes.
Tuesday, March 20, 2007
Personhood
Some interesting ideas have been raised by the movie on Koko and the introduction of Donna Haraway. Apes (especially cases like Koko) and cyborgs (e.g., OncoMouse) are considered border cases by Haraway. She suggests that we should extend personhood to apes and cyborgs. However, I see a possible slippery-slope problem: if we extend personhood, where do we stop? Should dogs and cats also get personhood? Also, I think there is a difference between apes and cyborgs. Apes seem as if they would be restricted in their intelligence, whereas cyborgs could be upgraded when needed.
Monday, March 19, 2007
Monkey See, Monkey Do? (Blog #8)
The Koko documentary was a fascinating piece of film. Watching a person, namely Penny Patterson, communicate with an animal so effectively was very interesting. Intelligent creatures have always and will always intrigue human beings. Perhaps, as Patterson suggested, we just want to find our equal. She added that we always look to space for some intelligent, alien life, but our intelligent counterparts might be right here on earth. Michael Crichton’s Congo addresses this question, the question of animal rights, and humanity’s relationship with nature in general.
Crichton was inspired by Koko the gorilla and her ability to communicate, so he made a gorilla, named Amy, the central character in the book. In the book, Crichton regales readers with a history of primate training and primates’ contact with the West. In one such passage, he tells us that Samuel Pepys, famous for his detailed description of English life in the 17th century, described a chimpanzee he saw in London. He says that the animal was “so much like a man in most things that…I do believe that it already understands much English and I am of the mind it might be taught to speak or make signs” (44). He quotes another anonymous 17th-century source suggesting, “Apes and Baboons…can speak but will not for fear they should be imployed [sic], and set to work” (44).
Crichton’s book remains vague on the issue of animal rights and primate personhood. The character of Amy, however, becomes one of the most human and reliable. It creates the same conundrum that we currently have: the primates appear to have personality, intelligence, and even a sense of humor, but it is impossible to be certain of their “personhood”. In much the same way, Amy shows the same traits, but neither she nor her trainer can prove that she has “personhood”. The chimp “theory of mind” articles identify the same problem. We can never have complete certainty, especially with trained primates, since the sample is tainted. It might simply be a function of anthropomorphizing the subject—another key feature of human nature. We understand creatures and even technologies according to our own understandings of reality. It is difficult, perhaps impossible, for a human being to conceive of something that has no thoughts. Therefore we attribute motives and feelings to things that clearly have none, such as an automobile. It is not so easy to decide, however, that a dog or a chimpanzee has no feelings or thoughts, and so we reach a stalemate.
These questions are fascinating and represent some of the deepest questions in philosophy. In the film, Patterson suggested that our search for other intelligent beings is rooted in our desire to understand ourselves. The creation of artificial intelligence might be grouped in the same search. Why on earth would we want a computer psychiatry program? And yet, they exist. Dr. Sbaitso, quite an old program, allowed users to ask the computer psychiatrist questions and the A.I. would offer answers. Unfortunately, a preliminary use of the program would reveal that the answers were actually quite repetitive. Nonetheless, it reveals the human desire to talk to another entity, for lack of a better term.
It is doubtful we will ever know if Koko, robot dogs, or A.I. psychiatrists have personhood, but, perhaps, by examining our interest in such things we can uncover a little more about human nature.
Iris
Wednesday, March 14, 2007
blog on Ape theory of mind and its implications on robots
I just read the articles handed out in class on ape theory of mind and the article on robotics, and I have to say I am definitely intrigued by the possibilities that exist in our developing use of robotics and technology. I was also interested in some of the questions brought up by the film in class. The real question that presents itself is this: at what point does something distinguish itself as a person? The article on ape theory of mind and the film in class attempted to answer this question. If chimps or gorillas can be found to have a personality, and even a theory of mind that can be influenced by humans but not programmed by them, what about robots, which ultimately are programmed by humans? Just as the article on robotics pointed out, this raises serious ethical questions. If robots are able to advance to the point where they have a theory of mind and are aware of themselves, doesn't this essentially make them a person-robot-being, which ultimately has rights just as we humans have rights within our society? Just as African Americans and women fought for equal rights in our country, robots one day could be doing this as well. If robots are able to advance to be on par with the human brain, wouldn't that make them equal, or at least shouldn't it? I'm not saying I would feel comfortable with it, though. Robots would obviously be made out of wires and metal, which is a lot less fragile than the human body... a robot could probably last a lot longer than a human body, especially if you consider updates to the robot's body as it ages. What happens to a robot's body, though, when it finally does meet its demise? Would robots be melted down and reused, or would they get buried in a coffin in a grave like any other human? Would churches give them a funeral? And another thing: would it be a crime to "murder" a robot? It gets messier the more you think about it.
Also, say robots get equal rights with humans: would they get the "good" jobs over less durable humans (remember, they could work at a company for, say, 100 years versus maybe 35 years for the typical human)? Education would also put robots over humans, since robots could just download the info they need into their "minds" versus it taking humans 20-30 years. You can see why one could easily become pessimistic about having robots. I don't like the idea at all. In my opinion, robots should never be developed to the point where they become equals instead of tools, because that is where they become a new force and threaten the prosperity of humanity and our way of life. I say vote no on Robo!
Monday, March 12, 2007
Blog #7- Conclusion Ihde's "Technology and the Lifeworld"
As Don Ihde's work on technology relationships draws to a close, a comparison emerges between Renaissance discovery and modern Big Science. Both involve numerous embodied instrumental technologies that allowed the scientists of a specific age, like da Vinci, to perceive the world in an altered manner. The difference, however, lies in the fact that today's scientific community is supersaturated with technology and demands far more financial funding. As time passes and the world grows more modernized, technologies become even more ambiguous and non-neutral. This obsession with advanced technology, and the dependence society has created, has made it theoretically and physically impossible to revert to the past. So the only remaining option is to find some solution to save the only world the human race has ever been given. However, an answer cannot be reached until the idea of nature as a "huge pool of resources" for man to use at his will is completely abandoned. Ihde emphasizes that it is neither logical nor fair to lay the majority of the blame on Western civilization. Rather, all nations and areas of the globe have contributed to this technological takeover, and therefore all must take due responsibility. The social, religious, and cultural values and ideas of the world's societies must be understood in order for world governments to cooperate to improve the situation. Overall, the bottom line is that nature is irreplaceable, and therefore holds priority over man-made technology. Although many philosophers are opposed to Heidegger's belief that only God can save the world, there is a unified consensus that some all-powerful and universal body of authority is necessary. Change must occur even at the small microperceptual level of human-technology relations, and macroperceptual alterations will follow.
Ihde points out that the gap separating human beings and animals is quickly closing, and as a result society must take even further into account the needs of other species besides our own. The increased erosion and pollution of the natural environment should serve as a warning sign that something must be done to change, not control, the way humans use and exploit technology. It may take using technology itself to spark significant change in the world. The responsibility lies with the citizens of the world today to create and establish a valid and efficient solution in order to preserve the earth for the generations to come. However, as with any worldwide action, complete convergence of effort and a long period of time will be necessary.
Friday, March 09, 2007
Brain scanning update
I saw this article on Ars Technica, thought I'd share it: "Scientists read intentions in the brain; thought police still a ways off"
Monday, March 05, 2007
You have nothing to fear, but...
I really liked this Star Trek episode because it shows the fears that humans have of robots even today. You have the M5 computer system coming in to take over the job of the crew. That is something that we’ve all learned to accept by now. We know that computers can perform tasks faster and more efficiently than the human mind can, but the M5 is there for a different reason. It also replaces Captain Kirk and Spock. I think we all get emotional when we realize the great Captain Kirk is going to be replaced by a machine. It will no longer be the “final frontier” for humans. Instead the computer will be getting the experience. Our first inkling of danger is when the M5 begins shutting down the power and life support systems on different decks of the ship. “Oh no!” I think to myself. What happens if the M5 shuts down the life support systems where I am right now?
We have also accepted that computers will start making life and death decisions for us. We have planes that run on autopilot as well as unmanned aerial vehicles now. But then the M5 starts making decisions to attack the other starships when provoked during drills. Eventually, the crew is able to make the computer commit “suicide” when it realizes that it is responsible for killing humans. Here’s the really scary part. As of right now, we know that the computer can’t feel. It can only do. If the computer is programmed wrong, it will have the ability to murder us, but it won’t have the conscience to turn itself off and “die”. In the end, it is we humans who must do the dying so the computer can live. Technology doesn't die after all, but we do.