Monday, April 30, 2007

Human brain and robot body becoming a reality?

Found this article during my research for the paper, and thought I should share it.
http://discovermagazine.com/2002/nov/featbionic/

Tuesday, April 17, 2007

blog 9

The story Reason was one of the more interesting readings for me. It was interesting that the robot Cutie at first seemed as if it wasn't performing the tasks that it was asked to do; however, later we see the tasks are performed, just in different ways than the humans intended. This shows that the robot was not following its orders, which in my mind would be scary even if it did not harm anyone. This would put the question in my mind that it may not follow any of my orders one day. Even though the robot is efficient in what it does, it still was flawed in that it did not do everything the humans said.

Blog 8: M-5 computers

Computers are only as smart as their creator, and they also lack other human capabilities. This is why technology and computers will never be able to do all the tasks humans can. Technology may make it easier for us to do certain tasks in our day-to-day activities, but we always will be in control of our robots, computers, and other sorts of technologies.

Blog 6: Brain scanning

Brain scanning in the future seems as if it would be very beneficial to the adult and juvenile court systems; however, I'm not too sure I would be in favor of it. We are all flawed humans, and different thoughts go through our heads which can seem harmful even though we would never act upon them. I might once think about stealing something, yet never commit the crime because of my conscience. I also think brain scanning would violate a person. My thoughts are mine and no one else's. There are two stages a person goes through: a front stage and a back stage. The front stage is public, which everyone sees and forms impressions of you from, and the back stage is private, holding your own thoughts that no one else can have. I like the idea of having something that no one else can have.

Final Blog

It seems that the posting problems have been solved; before now, most of the time I would just give up without any hope. But today it seems the computer has taken pity on me...or it is trying to lure me into a false sense of security. So anyway, without further ado...

What really got me thinking in our last class today was the discussion surrounding fake vs. authentic art and music. If a cyborg with a human brain placed in a robotic body created something, I would still consider the art created to be authentic. However, if a cyborg with a robotic (positronic?) brain placed in a human body created art, I would not consider it authentic. (A robotic brain belonging to a special case such as Andrew in Bicentennial Man would be the only exception.) The reason is that the human brain is the seat of our personality. If we change the brain, we make a fundamental change to who we are - even if we do become more perfect.

I really love music, especially classical pieces that feature either the piano or the violin. This is what makes Andrew Bird particularly enjoyable; the sound of just him and his violin is enough to send me into euphoria. If I were to find out that either Andrew Bird or Michael Nyman (the writer of my favorite piano piece, "The Heart Asks Pleasure First") were cyborgs, I would be extremely disappointed. Even if they created this music through some sort of brain wave enhancing device, I would still sulk around for weeks (or until I found a great collection of Vivaldi's works lying around my house). I feel that these artists have taken their experiences and transformed them into music, music that they are willing to share with me. Live performances would also be pretty much shot to hell. It's hard to get a robot to break bow strings like Andrew Bird does when he plays with so much passion. I feel that a robotic brain or enhancer would cheapen the meaning and the music would lose all saliency.

In the end I just really hope that Nyman or Andrew Bird are not cyborgs...I'm more than a little paranoid now. Andrew Bird was looking a bit off that night...

Monday, April 16, 2007

Robot Visions

I think that Asimov's collection of robot stories is very interesting and intriguing. The stories leave you wondering what the future could really hold for us and how large a role technology could play in our lives. The idea of making robots so sophisticated and highly functioning that they are indistinguishable from human beings scares me a little. I feel that robots and machines could be very helpful, but no matter how advanced they may become, they are still machines and should be under the complete control of human beings. The thought of Andrew gaining freedom and being given human rights is crazy to me. Robots do not have feelings and emotions or complex thought processes.

Blog 8: M-5 computers

I fear the idea that one day technology will take over our world. I don't think that any piece of machinery, such as robots and computers, could ever function better than human beings. We are too complex and intelligent. I believe that technology can be created to enhance our lives and make tasks easier for us to accomplish, but I don't believe that they could ever completely run things on their own. They will always have to be controlled by human beings. Computers cannot make distinctions and they lack our human instinct and insight into situations. They could never match our thought processing and cognitive functioning skills.

Blog 7

The last section of Ihde's book discussed several interesting topics, and for the first time in reading this text I was able to somewhat understand what he meant. The main idea I found interesting was the role that culture plays in the way we perceive technology. People in different cultures use pieces of technology for different things. I also found his discussion of first and third world countries interesting. He felt that to be successful, a country needs to be technologically advanced. To be technologically advanced, we must also have good education, specifically science education. The availability of education programs, and mainly money, is the reason why first world countries are more advanced than third world countries. However, this idea is a little unfair, because many third world countries want to do more with technology; they just don't have the money, resources, or opportunities available to them that some of the more advanced countries have. The last idea I found interesting was that we should control technology. I feel like there are very few things in life that are controllable. You have to have something completely mastered to be able to control it, and technology is by no means mastered and won't be for a long time, if ever.

Blog 6: Brain Scanning

I'm not sure if I think brain scanning in the future is a great idea. I understand that it could have great implications for legal matters in the court system. If you could scan a person's brain to find out whether they were guilty of a crime, it would save our legal teams a lot of time, money, and work. Trials these days have become so complicated that it would be great if we could find a way to prove innocence or guilt; however, there is the risk that people could be falsely accused because of thoughts they have had, even if never acted on. Many people could look guilty in situations because of the thoughts they have had, and that is just human nature. In the case of having a negative or offensive thought about a situation or person, are you going to have to prove your innocence for every shady thought in your brain?

My other hesitation about the idea of brain scanning is the fact that our thoughts and ideas are ours and they are private. Who, and under what circumstances, should be able to get into your most private place and rifle through it? It is hard enough to keep things personal without others being able to invade your deepest thoughts and secrets.

Wednesday, April 11, 2007

Reason

The basic premise of the story Reason just may become a reality... The Pentagon's National Security Space Office (NSSO) wants to look into the possibility of a solar-power collection satellite. Huge arrays of solar panels would collect energy that would be beamed back to Earth. Also, the article says that they want to make it powerful enough to be useful, but weak enough not to cause damage on Earth. Minus the robots, it sounds like Asimov was on to something.

Blog 9- would you like to be a Cyborg?

This essay by Isaac Asimov was particularly interesting to me. It seems that Asimov was coming back to the question of what makes a human a human. I don't know why, but the idea of cyborgs seems really cool to me, that is, a cyborg which has a robotic body encasing a human brain. It has been my experience that the body tends to wear out or get diseased while the brain is still functioning fine. Why not offer these people, who no longer have a use for their bodies, a second chance at life? Of course, science and technology haven't figured out a way to do this yet. The problem, I would guess, is how you fuse something biological with something technological. The body would run on electricity and so would the brain, but the brain also needs oxygen and food, and so needs blood, which would likely be a hard obstacle to overcome. But say we found a way to do it. Imagine you are 25 and cancer is spreading through your body; the doctors tell you nothing can be done, the cancer is too strong. They tell you you have about a month before it gets to your brain and kills you. Wouldn't you consider letting them do an operation which saves your brain and gives you a new body? I know I would. Think about it: you're only 25, and you still have another 70 years of use left in your brain. Another plus is that you are freed from the confines of a human body, for the most part. The only restriction is that you still have to sleep, since you still have a human brain. And another plus: the new body you inherit would for the most part look very much human and could be custom made to mimic a perfect you. Oh, and you would never age! The average person would never know the difference! Maybe they could keep the brain transplant procedure confidential and you could just tell your family and friends you were cured... If you wanted to look like you were aging, you could just go in for some maintenance and have them work a few wrinkles in and maybe put a few gray hairs in. You would be the envy of your family, never getting sick. You could go strong till the day your brain surrendered...
Some people might argue that by doing this, you have lost a piece of your humanity by never getting sick or dying when you were supposed to. What if they had been able to give Mozart a new body when he was dying at such a young age? Think of all the additional contributions to classical music he could have made!
I think such a technological advance would enrich humanity, not hurt it...
After all, I think most people would agree that your brain, more than any other part of your body, is the part that is human, more than anything else.

Monday, April 09, 2007

Blog # 9 Reason

The story Reason is very interesting and keeps you engaged. The robot Cutie, as they call him, at first seems to be malfunctioning and not doing the jobs he was intended to do. Later in the story, though, it becomes obvious that the robot does perform the task it was made for with great perfection. It's just that the robot follows a different set of standards for performing its task. Something that strikes me as a defect within the robot is that it sees itself as superior to the humans and won't follow the orders they give it. Although the robot won't harm them and won't allow harm to come to them, it still won't follow their orders, which to me is a very serious problem. Not only did Cutie not follow their orders and keep them confined to their quarters, but he also turned all the other robots against the two men, so that they too wouldn't follow the orders of the two men who were running the station before the creation of Cutie. As long as Cutie does the job he is intended to do and no problems arise, he will be fine running the station, but if any problems do arise, that's when Cutie will become a problem because of not following the orders of humans. To me, they should fix Cutie so that he follows the orders of humans and does the jobs he was intended to do before they leave him in complete control of the station.

Who is the better person?

Galley Slave turned out to be better than I initially thought when I started reading it. I liked the idea that you could tell the robot to keep its mouth shut even when you had done something illegal. Can you imagine? You have this enormous machine with considerable strength that you could make do all kinds of illegal things and then make lie for you too. Criminals would be buying this thing left and right. And then, best of all, the manufacturer can't even find out the truth because the machine refuses to talk to them too. My computer does this to me today. I tell it to do something and it does it slower than I want, or it just doesn't do it the right way. Of course, I am the one who decides whether it is right or wrong. Of course, we find out at the end of the story that the professor is lying and the robot, being the morally superior being, comes to the rescue of the professor on the stand. Every time I start feeling that the human race is falling behind and the robots have our best interest in hand, I pull out the old Terminator I movie and watch that bad old robot trying to kill the good humans. It makes me feel good that we're still better than the robots. Of course, having a fire inside your TV set is kind of cool too. I may have to try that some day.

Reason? Is there a reason for you?

I really liked this story for several 'reasons'. I thought it was a funny take on the old sci-fi shows where the robots take over, but of course it also had the hidden meaning. I know that in the end Cutie (QT-1) was simply protecting the humans by performing all of the functions itself and keeping the humans out of the control room, but the religious fervour of the robots kind of scared me. If the robot ever got to the point where IT decided what was the best way to protect humans, then I'd say that was the time to pull the plug. What if the converter failed and Cutie started looking at the humans as mischievous or dangerous? You know what happens next. The humans have got to go. Donovan and Powell are already in trouble because they have laughed at Cutie and the 'Master'. It was good of Cutie to keep feeding the humans until they left, but what about Muller? Will he get fed too? Will he offend the Master?

Saturday, April 07, 2007

Blog 9: Reason

I really enjoyed reading Reason. In my opinion, the robot Cutie almost appeared to outsmart the humans throughout their stay on the station. At times, Powell and Donovan questioned Cutie's theories and wondered if he might be speaking the truth.

The one question I have is how Cutie was able to defy the Laws of Robotics. I always think of the movie "I, Robot": whenever a human told his robot to do something, it was done (unless it defied another law). Cutie ignored Powell and Donovan's commands and actually took charge of the other robots.

The other thing I liked from this story was the way Cutie was able to comprehend things so easily. After all of the doubts Powell and Donovan had of him, it was amazing to see how Cutie was able to control the beam perfectly during the storm. A robot that was simply created to carry out tasks from humans turns out to be, in my opinion, smarter than the people in charge of him. Any time I read these stories, it makes me think about the future more and if these situations will ever occur.

Wednesday, April 04, 2007

Blog 9

As I read Robot Stories I am liking all of these stories a lot. I think this is because I understand them more and I can kind of tell what is going to happen in the end. In the story Reason they have a robot that to the reader seems to be trying to take over, which it is, but not as much as the story leads you to believe. Throughout this story it discusses the three laws that go along with robot life. The question that comes to mind as you read is whether the robot is ignoring these laws as he goes along with his actions. The other two questions are whether the robot is obeying the rules of the humans and whether he is preventing humans from being harmed. From Cutie's experience with the storm you can tell that he seems to be the best one for controlling the station. By the end of the story he has saved the Earth and mankind. After this he thought he was far superior to humans because of his accomplishment, but did he really know what he did? He was just doing his job. He felt that he was superior only because he is a reasoning being. The two creators of the robot try to tell him that they created him, but he makes no effort to believe either of them. To Cutie, he was made of parts created by the Master. This is a story that illustrates that a robot can be created with a very different mind than humans and have a mind of its own.

Should Chimps Have Human Rights?

Another tidbit from the corners of the internet, this time from Slashdot: Should Chimps Have Human Rights?

Tuesday, April 03, 2007

Blog 9

The stories by Isaac Asimov have been pretty interesting and have made me think about what the world would be like with the robots he portrays in his stories. The story Reason made me a little uneasy when I was thinking about the world with that type of robot. In the end, everything is fine and the robot did what it was told. But everything leading up to that was filled with the robot taking control and thinking it is superior. Even though everything was fine at the end, if a robot really thought it was superior, would it have a happy ending every time? With the robot clearly having its own mindset, who knows what would happen if that robot were on Earth or even put in the same situation again. I think that even if there were happy endings, there would be some endings that didn't end too well for the humans. If a robot were to show that independent thinking along with the thought that it is superior, it would eventually be taking the humans' spot and replacing them. This was a very fun and interesting story to read, but if it were real life, the ending might not be as happy.

Monday, April 02, 2007

Star Trek remains


The ashes of Star Trek's Scotty (and also astronaut Gordon Cooper) are going to be launched into space before being laid to rest on Earth.

Beam me up, Scotty.

Insufficient Data (Blog #9)

Star Trek: The Next Generation frequently raises interesting questions about human relationships with technology, but never more obviously than in the case of Data. The most interesting observation about the episode we viewed in class was that Commander Data, an android devoid of emotion, appears to have feelings. His actions seem compassionate, kind and sympathetic, but he should not be able to behave in this manner if he has no emotion, should he? Data is also one of the most popular characters in the Next Generation universe, which also seems anomalous given his inability to reciprocate emotion.

What could be producing this perception of emotional capacity? It might be that Data simply appears to have emotions because, as feeling people, we can't imagine any animate thing not having them. In other words, we tend to anthropomorphize and project our emotion into Data's void. Or, perhaps, Data may actually be developing emotion. As a final, quite cynical, possibility, Brent Spiner, because he is a human being, might be creating emotion where there should be none.

It is my opinion that Data is not capable of emotional responses and does not display them. Yet, he is still my favorite Next Generation character, largely because he acts the most humanely, and sympathetically, of any crew member. Inability to emote does not exclude Data from acting in a way that human beings understand as good. Moral actions, any actions involving free will, in fact, do not necessarily require emotional support. Spock proves this point (despite McCoy’s objections). He acts according to logic, yet he can still befriend, love and act morally. Perhaps Spock is just a hopeless hypocrite (that would be illogical though), but some philosophers believe that rationality, not emotion, is what makes us truly human—Kant is one such philosopher. For the sake of brevity, I will confine myself to the Kantian argument.

As we mentioned in class, Kant believed that human dignity, or personhood, is rooted in our rational intellect and ability to set our own goals and make decisions.
If we can accept this, then it is more pertinent to prove whether or not Data can act rationally and freely, to determine his "humanness". Unfortunately, philosophers such as Sartre (we are not free not to choose) have questioned whether or not human beings have this ability, so it becomes even more difficult to prove the same for robots. For the purposes of this discussion, however, I will not question whether or not human beings can act freely and will just assume that they can, thereby creating the fundamental difference between, say, Captain Picard and your average robot vacuum. Data exhibits what appears to be autonomous behavior. He spends time learning human arts, reading and even practicing sneezing. No one commanded Data to engage in these tasks. Data has the ability to reflect on his own existence and recognize the difference between himself and humans. He has self-awareness.

Does this mean he is an autonomous, rational being? Not necessarily. Unfortunately, much like with chimpanzees and gorillas, we cannot fully comprehend the interior life of a complex android. What appears like autonomy might just be an epiphenomenon. For instance, MyTMC (aka Jenzabar) frequently acts independently of instruction. It deletes documents, closes windows and even grades exams. A more computer-savvy individual might be able to explain Jenzabar's actions more accurately, but I wouldn't call them autonomous action. Jenzabar's vague autonomy also fails to indicate any ability to think rationally.

Data, however, behaves in a manner that could be considered rational. He reasons quite effectively. For instance, when Picard referred to Data's "brother" Lore as an "it", Data immediately transferred the appellation to himself. If Picard's statement is followed to its logical conclusion, then he would have to call Data an "it" instead of "he" as well. Data had to employ logic and a keen understanding of language to recognize this.

Thus far, following Kant's lead, we would probably have to grant Data personhood. Something seems wrong with this, however. First, Data's rational autonomy might still be anthropomorphism. If this is the case, then Data is once again a machine. Based on the past evidence, this seems less likely. Another problem enters in, though, when Data's origin is considered. He was created by a human, a genius, but still a human. Soong, Data's creator, endowed him with very human qualities, sans emotion (which seems to have been a response to Lore's instability). Because of this, Data is limited by the intellect of his creator, who cannot give more than he has.

Soong may have failed to include crucial aspects of human brain function that make full autonomy possible. The lack of real emotion, for example, may be detrimental to Data's freedom. He does later acquire an emotion chip, but this too is more programming, more simulation. All of these factors serve to limit Data's ability to set goals and pursue them freely. In addition, it is doubtful that Data has an understanding of moral right and wrong. I believe it would be impossible for Soong to impart knowledge of good and evil, metaphysical concepts that cannot be reduced to a program. This, I think, would make it very difficult for androids, no matter how advanced, to become moral agents. It drastically limits Data's autonomy and would essentially bar true free will: the ability to choose good or choose evil. As a disclaimer, I will not approach the question of the existence of good or evil, since both have been challenged; there is not enough space here.

Another important difference between androids and humans is the fact that androids remain inorganic. In Bicentennial Man, Andrew managed to give himself many prosthetic human organs, but he was never organic. Importantly, his brain was never organic. If we believe Descartes' odd proclamation that the pineal gland is the point at which the soul connects to the body, we would have to exclude Andrew from having a soul, or at least a spiritual-corporeal unity. Granted, Descartes' statement is wildly implausible, but it does present one reason why an organic brain might be a crucial part of human uniqueness and personhood (since Descartes believed the mind-soul was the true self). In addition, robots of every kind lack natural life. They are not born, and they do not die (without some kind of alteration). Thus, it is possible (as we saw in the TNG episode) to rebuild an android and save its positronic brain. Humans, animals and plants, by contrast, are not so easily restored. The frailty of life is another reason for human sacredness.

For these reasons, I am loath to grant an android, even my favorite one, Data, personhood. This does not mean we can treat androids or robots as we please. As we discussed in class, a very good argument can be made for secondary duties towards artificial humans. Because of their closeness to humanity in appearance and activities, it would degrade human beings to mistreat them. For instance, if Commander Riker were to hit Data repeatedly with a baseball bat for jollies, he might be more inclined to do so to a human. It would also deform his own nature and sensibilities in much the same way that pornography might. Data might also be considered a work of art. He should therefore be treated with the same respect due to such a creation.

It is interesting. Human beings agonize constantly over their origins. Data, and other androids, know that humans have created them (although Cutie in “Reason” by Asimov challenges this). The Christian tradition recognizes that human beings were created in the image and likeness of God. The “Imago Dei” principle is the foundational principle in the Christian moral tradition and the chief source of human dignity. Unfortunately, Data was created in the image and likeness of human beings. We are incapable of giving androids natural life. Data’s lack of natural life is exemplified in his relationship with his cat, Spot. Data is an incredibly complex piece of robotic science yet he still lacks something so basic, yet impossible to attain. Something Spot the cat has—life.



Iris

Blog 9: Reason

I am finding these stories within Isaac Asimov's "Robot Visions" extremely interesting. It is nice to be reading through something like this and really want to know how the story is going to end. The current story chosen, Reason, had a bit of a different twist to it. The story has you believing that this robot is truly evil and trying to take over, which it is, but not to the extent that you believe. In the story it discusses the three laws. Is this robot completely ignoring the three laws implanted in him? Or is Cutie simply following the rules by controlling the station? Is Cutie in the end obeying the orders that humans allow, and is Cutie actually preventing humans from being harmed? It is obvious from the incident with the storm that Cutie is the best to date at controlling the station. In the end he saved Earth and mankind. Whether he really knew that's what he was doing, probably not, but he was taking care of the job at hand. Cutie believed that because he is a REASONING being, he is in fact superior to humans and their intelligence. Powell and Donovan try to show this robot that they are superior and that they actually created him, but he did not buy anything that they brought his way. All the books were simply written for humans. The robot they put together was simply a bunch of parts made by the Master (they didn't really make it!). This story illustrates the possibility of making a being that has a mind of its own. In this case, with the use of reason, the robot had a mind of its own and there was no changing it. Now the question is whether Powell is right about Cutie. Is he really okay to be left in control? As long as he follows the Master, will everything run smoothly?

Sunday, April 01, 2007

Blog #9 The Bicentennial Man

Andrew Martin is truly a representation of futuristic dreams and technology in the field of robotics. However, this age of technology may not actually be as distant as originally speculated. Asimov would most certainly be amazed that the scenarios depicted in his many famous science fiction books may serve as frameworks for reality. Just the simple fact that conventions, such as the recent European Robotics Network meeting held in South Korea, are being planned and presented proves the urgency of this issue in modern society. Great leaps have been made by robotic engineers and AI scientists, which have produced promising robots and devices capable of performing tasks and demonstrating certain abilities. Robots have been invented which appear to be helpful and efficient at caring for the elderly and completing household chores. Fifty years ago, the philosophical and ethical question of whether robots should receive equal and similar rights would have been considered completely ridiculous and unfeasible. It is actually quite amazing that extensions in scientific inquiry could lead to the establishment of a different societal ideal of what designates a human being. The story "Bicentennial Man" presents a more personal viewpoint of the possible implications of robot rights within modern society. Asimov is successful at evoking sympathy and understanding within his readers for Andrew's desires and motivations. Instead of describing robot infiltration within society as a threat to humanity, Asimov takes the optimistic standpoint. Robots are considered a benefit and an advancement for the human race, making life easier and less stressful. Andrew is bestowed with human qualities and personality traits, especially the ability to be creative. These human-like characteristics strengthen the case that Andrew is a "person." In reality, the more advancements that are made in robotics, the more human these technologies appear. The ancient idea of a robot created and embellished on in literature, of a separate machine-like device made of bolts and metal, has disappeared. The revolutionized 21st century robot is considered equivalent to a human being, and therefore is deserving of equal protection under the law.

Saturday, March 31, 2007

Two articles

Two articles from Gizmodo for people to ponder:

Wednesday, March 28, 2007

Blog #8 Robot Visions and Bicentennial Man

I must say that I'm so very glad that we have moved into some reading that has really excited me. I am reading more of these stories than we have to. I may have found my new favorite author. I hope this guy wrote a lot of books. I liked Robot Visions. I like how this short story starts with us finding out that our narrator is a member of a group that has discovered time travel, but we could never understand the mathematics of it all if we had not had Temporalist training. It's perfect and what's even more perfect is the end of the story when we find out that our narrator is a more advanced robot and actually tells Archie what to say in case he is asked by the humans of the group. They designed a robot that ultimately made the decisions that would ensure that human life ended and robot life continued.

The next story also interested me. I found it amazing that the robot had begun to have the same kind of feelings that we as humans have. Andrew slowly became more and more human. He even got to the point where he wore clothes, gave orders to humans, and wanted to die. HEY WAIT! That's not what we humans want to do, is it? Don't we still have nudist colonies? Don't we want to talk issues over and just get everyone to agree? Who wants to be the bad guy and make everyone work? And last, but certainly not least, don't we want to live forever? That Andrew sure had it wrong. How could he want to do all the things that we humans shy away from? Or do we? I thought about this some more after we discussed it in class. Andrew wanted to die. He made an individual decision. I remember watching the video in class before where they were going to save the guy's brain. He also just wanted to die. Maybe that's it. Maybe Andrew was human after all. He made an individual decision. We all make these kinds of decisions and sometimes other people think we're crazy too. I wonder if someone has looked at the decisions I made and thought they were crazy too. I'm not crazy! Am I?

To blog or not to blog, that is...

I'm starting to like this blog thing. I hate to digress, but I feel I must. I really think it's interesting that I took a course on Ethics and Technology. The main book was Technology And The Lifeworld. At the end of the book, it's clear that Don Ihde is against science, the military, and maybe even the government. The point here is that technology and science might not be all that good, and yet I have done nothing but learn new ways of using the internet and doing this blog thing. Ihde may have wanted us to turn our backs on technology and science, but this course has made me use technology more and made me realize how ingrained we are in it.

Blog # 7 Why do we hide who we really are?

There are many times when I have been reading this book and I have been so lost and confused. But after reading through the final conclusion a couple of times and then looking back through the book, I have found that I missed what Ihde was trying to say all along. I should have taken the time to read the ending first so I could understand what he was trying to tell us. His interests seem to truly lie in the fact that he's a conservationist. No, that might be wrong too. Maybe he's an animal rights activist. I hear him extolling the virtues of several different areas in this epilogue, and I wonder what his true beliefs might be. I seem to pick up that whales are more important than the nuclear reactor and that we need to find other ways of producing power for our society. I wonder about what he has done for our society. Has he conserved energy or raw materials in some way? Was this book made from recycled paper? Did he insist that the only energy used to make this book come from windmills? I did find something that seemed to bother him, and that was science. It's the evil that we all need to beware of in our lives. Even the government favors science. The government gives all this money to science, and the poor arts and humanities fields must do without the money they want.

I must in all honesty say that I believe we must truly reflect on the things that are important, but what had Don Ihde 'created' when he wrote this book? Did he leave something that will change our society for the better? Will we compare this book with the invention of the telephone? How important will this book be fifty years from now? I think that Mr. Ihde would have been better served to keep his politics out of his book when he wrote it. We find what's really important when he talks about Galileo II and brings up that "funding agencies geared to social needs, economically just wealth distribution, issues of human nature, and conservation of planet earth...would be a nice daydream for tomorrow night." 218 (I must thank Dr. Easley for hammering home the proper way to do a quote). Mr. Ihde, technology can't be stopped. It can only be controlled. I have come to believe that we are embodied with technology and can never go back. I once told a story in the classroom that said everyone can go back if they want to and leave technology behind. I consider it the ultimate in leaving technology behind: take all your money and buy land wherever you want. Drive up to the edge of your new property and get undressed. Leave your car, the clothes on your back, and stroll back into Eden. Leave technology behind and become a truly free person.

Tuesday, March 27, 2007

Fear the Robots!

It is a wonder what one can do with Lite Brite to convey a warning against robots:
http://www.youtube.com/watch?v=YxZJYbVd1hE

Blog #8 Robot Visions

Asimov's ideas and thoughts are very interesting and leave you thinking more and more about the future and what is to come. Whether it be some sort of natural disaster or the creation of robots taking over mankind, according to the Temporalists there would be some sort of disaster that would end mankind. In sending the robot through time, they found that mankind was still in existence, or rather it was humaniforms that resembled humans so closely that the robot was unable to tell the difference. Yet the robot did report that there had been some sort of sad time, but no one would inform him of what this sad time was. They only told him that there was a sad time before this new age and that his exploration of time travel should not go looking into the sad times. In the end, the narrator reveals that he is the first humaniform, and his task is to make sure the future is informed of the time travel and to welcome the robot when he appears, but to tell him only a limited amount of information about the new age. The thought of robots being so humanlike that it is hard, if even possible, to tell the difference is a scary one, and something that humans should not attempt to create. Yes, according to these stories the robots will be totally controlled by humans and made to obey only their orders, but there is still the chance, or rather almost a guarantee, that one or more of the manufactured robots will be defective, and that error could cost lives or bring other consequences that don't need to be risked. I do believe in robots being helpful and making things easier for human beings, but I don't think that they should be so advanced that they feel or learn from the things that they do. With the continuing advancements within robotics, it's only a matter of time before robots become more humanlike, and the risks of these advancements are hopefully not as bad as what many stories portray.

blog 8

Robot Visions was a very interesting read. The story has some aspects that really had me thinking. One was that the Temporalists were extremely pessimistic. They were certain that the world in 200 years would be completely in ruins. They had no hope for humanity and the world in the future. I wonder if Asimov is as unhopeful about the future as the Temporalists were. Are the Temporalists voicing his beliefs? And are there really people out there who are that pessimistic about the world? I personally am very optimistic about the future. It was kind of depressing reading about how there might be people who think that the human race will be completely wiped out either by robots or by their own doing.

Another aspect was that robots become so humanlike that they can't be distinguished from humans if people don't know that the "human" is really a robot. Along those same lines: will robots really be the ones inhabiting the earth? Depending on who you talk to, some will say having robots be so humanlike will be good, while others say it will be bad. I think it would be bad if robots had the same characteristics as humans. Every human is unique, and by giving robots human characteristics, I think we would be getting rid of the uniqueness of being human. Also, without humans, robots would not exist. So to have robots controlling the earth would be far-fetched. In order to have robots and give them human characteristics, you need humans to program them. To have all robots and no humans seems impossible to me.

Monday, March 26, 2007

Blog #8 Robot Visions

Isaac Asimov begins his science fiction book by defining what the word robot means. The first definition presented for a robot is "an artificial object that resembles a human being." Even though this implies that robots must have similar appearances and physical characteristics to human beings, the majority of robots which have either been developed or described in literature are completely different from human beings. The stereotypical image of a robot is some awkward, artificial figure made of metal plates, electric wires and gears. This view of a robot, as separate or isolated from the life of living organisms, is what most frequently appears in literature spanning from ancient times to the 1800s. As human beings, our natural fear of robots destroying or taking over the human race incorporates some form of this primitive, machine-like view. In reality, robots were designed to perform activities and tasks to make man's life simpler and less demanding. Only after much scientific research and mental advancement occurred did the human race begin to wonder if the robotic world could someday replace man. Many philosophers and members of the scientific community warned scientists in the field of artificial intelligence of the possible dangers in exploring and advancing technology. Could our natural curiosity lead to our ultimate demise? It is believed by many that there are certain things not meant for man, which must be left alone for the well-being of the human race. In modern times, when what was considered "the impossible" has been achieved and surpassed, the fine line between the role of God and human beings seems to be fading. Although there are many benefits to further research and invention of AI devices, technology comes with a risk. The risk has an irreplaceable price: the destruction and demise of the modern race of men.

Blog 8: Robot Stories/Bicentennial Man

After reading "Bicentennial Man", I was thinking the situation over in my head. Should robots have the law to protect them as if they were human? Any thought that revealed itself in support of or contrary to this notion would be knocked out by a thought supporting the opposite. I decided to converse with my friend about the situation in order to get a view that wasn't influenced by any of the discussions in class. The conversation is as follows, and the identity of my friend has been protected. He shall be referred to as "DefenderOfHumanity" to give him a positive attribute.

Nick: Would it concern you for a robot to be given human rights, under the assumption that the robot has gained the ability to feel, think, and proceed as a human does?

DefenderOfHumanity: no

DefenderOfHumanity: well i think they should not have those rights no matter what

Nick: why shouldn't they have those rights if they feel and think as exactly as we do?

DefenderOfHumanity: because at the end of the day it's not real

DefenderOfHumanity: it's created and can be taken away

Nick: so when and if we eventually get to the point wear we have mechanical and working prosthetics, those can be taken away and thus we become less human, so therefore we forego our rights?

DefenderOfHumanity: i was actually just thinking about something similar to this today for no reason at all. but it's like how PETA wants animals to have rights, i don't want people to be cruel to animals but they're not people. we are the dominant species. same with robots because it would be an inanimate object still

Nick: yet to be inanimate means not to move, where the robot i described would move and think and act as humans do in a human way.....just thinking about this potential has foreseen consequences for both sides

DefenderOfHumanity: the so called feelings that robots could have would only be fake, just set programs executed by extreme machines

Nick: well aren't we an extreme machine? our thoughts are actually nerve impulses conducted along the path through our brain cells, and propagated by biochemical devices

DefenderOfHumanity: but humans have souls

I ceased playing devil's advocate at this point in the conversation and allowed my friend the victory. This is primarily due to my lack of knowledge in the realms of philosophy and theology. However, would the ultimate question to end such a debate be a simple yet complex theological one? Would the robot have to have a soul in order to be protected by human law?

Blog 8 on Robot Stories

In my eyes there isn't any new technology that has not had some negative effect on something in the world. The creation of aircraft is good, but it has also brought terrorist attacks to our country. The automobile was a great invention that speeds up our traveling, but it also kills a lot of people each year due to wrecks. The real question is, what would robots bring us? This is where Robot Stories chapter one starts bringing in robots. This chapter says that robots will take over the world after us and do a better job of taking care of the world than we do. The chapter moves through the past, future, and present. I feel the only way this will happen is if we as humans program these robots to do better, because we will know how to do it better. The other question is, can robots ever be more intelligent than humans? My opinion is no, because we humans will have to make them and we can only make them as smart as we are, because that's all we know. This topic is very interesting and I enjoyed reading this chapter.

Blog 8 - Reflections on Robot Visions

I have to begin by saying that I am very glad we are finally reading a book that really is readable and interesting. The story that I've recently read and would like to discuss is the one entitled Reason. I think it is interesting that technology (i.e., robots) could be developed to the point where it becomes aware of itself, to the point where it is able to think and reject something told to it by a human. In the story, the robot used reason alone to come up with the idea that it was superior to humans... even though that wasn't the point of the story, it was still suggested that robots could be "better" than humans (not needing to sleep, eat, or maintain an organic body). Even though a robot body may be more durable, it does not have the ability to think what is not logical, at least not in this story. I'm just still very uncomfortable with the idea of using robots to do the things that humans do. If robots were to start doing human jobs, what would the humans do? Is humanity's destiny to be controlled by robots while we sip margaritas on the beach in Cancun? While that sounds very nice, I'm afraid it's like the saying goes: "If it sounds too good to be true, it probably is." Robots may one day control a lot of things, but like all technology they seem to have a shelf life, and humans will likely always be needed on standby. You definitely wouldn't want a robot flying an airplane to suddenly freeze up because a wire in its positronic brain has burnt out. I think robots would be better served as human assistants (as a human-operated technology), never the ones in control. Margaritas on the beach sound nice, but for all practical purposes it just isn't probable or possible in my opinion.

Wednesday, March 21, 2007

Blog 8: Introduction & Robot Visions

There is one quote or line that really struck my interest and stuck in my mind: "all devices have their dangers. The discovery of speech introduced communication - and lies. The discovery of fire introduced cooking - and arson... The automobile is marvelously useful - and kills Americans by the tens of thousands each year..." It is interesting to think about all that we have created that has in turn brought bad side effects. What would robots bring to the world? What is our fear? Would they turn on us? When we created speech, did we ever think about the negatives that might come from this creation? When we created automobiles, did we sit back and think about how many people they were going to kill? "Robot Visions" (Chapter 1) goes into this idea. This story is very interesting in itself, with its idea that the robots would take over the world after us and do a better job than us. I found that the story was very clever with its notion of past, present, and future. The "humans" of the future had heard that Archie was coming purely because of the present humaniform robot, the first humaniform robot. I found myself reading and just wanting to know how the story was going to end. If we develop robots, can they ever be smarter than us? Can they develop emotions? It is all very interesting to consider.

On Robotics

Despite the use of robots in laboratories and factories, there is still something about robots that places them in the realm of science fiction, at least in my mind. The use of robots in everyday life seems both bizarre and unrealistic to me. I feel that I could never imagine a world where robots were a critical part of my life.

Asimov however makes me uneasy in how realistically he discusses robots. His Three Laws of Robotics in particular seem like the most probable set of laws to be applied to robots in the future. These laws indicate that robots will be built with these laws in their nature, something that I am doubtful of. I think that robots with these laws built in them would never be possible because there is always going to be one person with the drive and ability to override the robot's installed nature.

In the story, "Archie" did not receive the same maternal attention as Koko, or the same creator-creation bond as the M-5. Rather, Archie was treated as a machine with a specific purpose. He was treated as disposable technology, while the M-5 and Koko were considered valuable to the doctor and Patterson, each of whom was willing to give up a part of their lives to "save" them.

Overall I really enjoyed the story, and a big reason was how it ended. Throughout the story I was suspicious of the narrator and even predicted that he might be a robot. Now I'm just bragging. I really did enjoy the story because the ending seemed to give robots this god-like mentality (the humaniforms) and even hubris. Are robots even capable of hubris? The way the story portrayed the future made me assume that they were at least heading there. If they were made in human likeness, maybe they are even capable of human mistakes.

Tuesday, March 20, 2007

Personhood

Some interesting ideas have been raised by the movie on Koko and the introduction of Donna Haraway. Apes (especially cases like Koko) and cyborgs (e.g. OncoMouse) are considered border cases by Haraway. She suggests that we should extend personhood to apes and cyborgs. However, I see a possible slippery slope problem. If we extend personhood, where do we stop? Should dogs and cats also get personhood? Also, I think there is a difference between apes and cyborgs. Apes seem as if they would be restricted in their intelligence, whereas cyborgs seem able to be upgraded when needed.

Monday, March 19, 2007

Monkey See, Monkey Do? (Blog #8)

The Koko documentary was a fascinating piece of film. Watching a person, namely Penny Patterson, communicate with an animal so effectively was very interesting. Intelligent creatures have always and will always intrigue human beings. Perhaps, as Patterson suggested, we just want to find our equal. She added that we always look to space for some intelligent, alien life, but our intelligent counterparts might be right here on earth. Michael Crichton's Congo addresses this question, the question of animal rights and humanity's relationship with nature in general.

Crichton was inspired by Koko the gorilla and her ability to communicate, so he made a gorilla, named Amy, the central character in the book. In the book, Crichton regales readers with a history of primate training and primates' contact with the West. In one such passage, he tells us that Samuel Pepys, famous for his detailed description of English life in the 17th century, described a chimpanzee he saw in London. He says that the animal was "so much like a man in most things that…I do believe that it already understands much English and I am of the mind it might be taught to speak or make signs" (44). He quotes another anonymous 17th century source suggesting, "Apes and Baboons…can speak but will not for fear they should be imployed [sic], and set to work" (44).

Crichton's book remains vague on the issue of animal rights and primate personhood. The character of Amy, however, becomes one of the most human and reliable. It creates the same conundrum that we currently have. The primates appear to have personality, intelligence and even a sense of humor, but it is impossible to be certain of their "personhood". In much the same way, Amy shows the same traits, but neither she nor her trainer can prove that she has "personhood". The chimp "theory of mind" articles identify the same problem. We can never have complete certainty, especially with trained primates, since the sample is tainted. It might simply be a function of anthropomorphizing the subject, another key feature of human nature. We understand creatures and even technologies according to our own understandings of reality. It is difficult, perhaps impossible, for a human being to conceive of something that has no thoughts. Therefore we attribute motives and feelings to things that clearly have none, such as an automobile. It is not so easy to decide, however, that a dog or a chimpanzee has no feelings or thoughts, and so we reach a stalemate.

These questions are fascinating and represent some of the deepest questions in philosophy. In the film, Patterson suggested that our search for other intelligent beings is rooted in our desire to understand ourselves. The creation of artificial intelligence might be grouped in the same search. Why on earth would we want a computer psychiatry program? And yet, they exist. Dr. Sbaitso, quite an old program, allowed users to ask the computer psychiatrist questions and the A.I. would offer answers. Unfortunately, a preliminary use of the program would reveal that the answers were actually quite repetitive. Nonetheless, it reveals the human desire to talk to another entity, for lack of a better term.

It is doubtful we will ever know if Koko, robot dogs or A.I. psychiatrists have personhood, but, perhaps, by examining our interest in such things we can uncover a little more about human nature.


Iris

Wednesday, March 14, 2007

blog on Ape theory of mind and its implications on robots

I just read the articles handed out in class on ape theory of mind and on robotics, and I have to say I am definitely intrigued by the possibilities that exist in our developing use of robotics and technology. I was also interested in some of the questions brought up by the film in class. The real question which presents itself is this: at what point does something distinguish itself as a person? The article on ape theory of mind and the film in class attempted to answer this question. If chimps or gorillas can be found to have a personality and even a theory of mind that can only be influenced by humans but not programmed by them, what about robots, which ultimately are programmed by humans? Just as the article on robotics pointed out, this raises serious ethical questions. If robots are able to advance to the point where they have a theory of mind and are aware of themselves, doesn't this essentially make them a person-robot-being, which ultimately has rights just as we humans have rights within our society? Just as African Americans and women fought for equal rights in our country, robots one day could be doing this as well. If robots are able to advance to be on par with the human brain, wouldn't that make them equal, or at least shouldn't it? I'm not saying I would feel comfortable with it, though.

Robots would obviously be made out of wires and metal, which is a lot less fragile than the human body... a robot could probably last a lot longer than a human body, especially if you consider updates to the robot's body as it ages. What happens to a robot's body, though, when it finally does meet its demise? Would it be melted down and reused, or would it get buried in a coffin in a grave like any other human? Would churches give robots a funeral? And another thing: would it be a crime to "murder" a robot? It gets more and more messy the more you think about it... Also, say robots get equal rights with humans... would they get the "good" jobs over less durable humans (remember, they could work at a company for, say, 100 years versus, say, 35 years for the typical human)? Education would also put robots over humans, since robots could just download the info they need into their "minds" versus it taking 20-30 years for humans. You can see why one could easily become pessimistic about having robots. I don't like the idea at all. In my opinion, robots should never be developed to the point where they become equals instead of tools, because that is where they become a new force and threaten the prosperity of humanity and our way of life. I say vote no on Robo!

Monday, March 12, 2007

Blog #7 - Conclusion of Ihde's "Technology and the Lifeworld"

As Don Ihde's work on human-technology relations draws to a close, he compares Renaissance discovery with modern Big Science. Both involve numerous embodied instrumental technologies that allowed the scientists of a specific age, like da Vinci, to perceive the world in an altered manner. The difference, however, lies in the fact that today's scientific community is supersaturated with technology and demands far more significant financial funding. As time passes and the world grows more modernized, technologies become even more ambiguous and non-neutral. This obsession with advanced technology, and the dependence society has created, has made it theoretically and physically impossible to revert back to the past. So the only remaining option is to find some solution to save the only world the human race has ever been given.

However, an answer cannot be reached until the idea of nature as a "huge pool of resources" for man to use at his will is completely abandoned. Ihde emphasizes that it is neither logical nor fair to lay the majority of the blame on Western civilization. Rather, all nations and areas of the globe have contributed to this technological takeover, and therefore all must take due responsibility. The social, religious, and cultural values and ideas of the world's societies must be understood in order for world governments to cooperate to improve the situation. Overall, the bottom line is that nature is irreplaceable, and therefore holds priority over man-made technology. Although many philosophers are opposed to Heidegger's belief that only God can save the world, there is a unified consensus that some all-powerful and universal body of authority is necessary.

Change must occur even at the small microperceptual level of human-technology relations, and macroperceptual alterations will follow. Ihde points out that the gap separating human beings and animals is quickly closing, and as a result society must take the needs of other species besides our own even further into account. The increased erosion and pollution of the environment should serve as a warning sign that something must be done to change, not control, the way humans use and exploit technology. It may take using technology itself to spark significant change in the world. The responsibility lies with the citizens of the world today to create and establish a valid and efficient solution in order to preserve the earth for the generations to come. However, as with any worldwide action, complete convergence of effort and a long period of time will be necessary.


Monday, March 05, 2007

You have nothing to fear, but...

I really liked this Star Trek episode because it shows the fears that humans have of robots even today. You have the M-5 computer system coming in to take over the job of the crew. That is something we've all learned to accept by now. We know that computers can perform tasks faster and more efficiently than the human mind can, but the M-5 is there for a different reason. It also replaces Captain Kirk and Spock. I think we all get emotional when we realize the great Captain Kirk is going to be replaced by a machine. It will no longer be the "final frontier" for humans. Instead, the computer will be getting the experience. Our first inkling of danger is when the M-5 begins shutting down the power and life support systems on different decks of the ship. "Oh no!" I think to myself. What happens if the M-5 shuts down the life support systems where I am right now?

We have also accepted that computers will start making life and death decisions for us. We have planes that run on autopilot as well as unmanned aerial vehicles now. But the M-5 goes further, making decisions to attack the other starships when provoked during drills. Eventually, the crew is able to make the computer commit "suicide" when it realizes that it is responsible for killing humans. Here's the really scary part. As of right now, we know that a computer can't feel. It can only do. If the computer is programmed wrong, it will have the ability to murder us, but it won't have the conscience to turn itself off and "die." In the end, it is we humans who must do the dying so the computer can live. Technology doesn't die, after all, but we do.

Wednesday, February 28, 2007

M-5 Computer

As technology becomes bigger, better, and smarter, there is this thought that it may be able to do human activities and possibly even overpower humankind, as the M-5 was doing in the episode. As we all know, technology is a big part of almost everyone's life, and it makes jobs easier, quicker, and much more efficient. Computers and technology have even taken over some of the jobs that humans used to do, such as factory work. Although these jobs were lost to computers, the machines are still run by humans and maintained by humans. As far as the M-5 goes, I don't think that computers will ever compare to the human mind when it comes to making critical decisions. Take war equipment, for example: no one wants soldiers to die, and everyone would love to have technology capable of going in and taking care of the missions itself. One of the flaws of such technology would be exactly what happened in the Star Trek episode: the technology would not be able to tell the difference between an enemy and an innocent human being or ship, as in the episode. Technology cannot make such distinctions and doesn't have human instinct when it comes to matters like telling the difference between what will harm you and what is completely innocent. I believe that technology will continue to grow and become more advanced, but I don't believe it will overtake the human race, because all technology takes humankind to run it and fix it when it is broken.

Star Trek and the Ultimate Computer

Ihde repeatedly says that people see technology in two ways: they want the convenience technology provides, but they are afraid that technology might start to control them. This contrast is evident in the Star Trek episode. Daystrom has good intentions in building the M-5, allowing ships to run fairly autonomously and resulting in fewer people being in danger. Kirk, however, worries that he will lose his job to the "ultimate computer." The computer ends up taking over the Enterprise, realizing many people's fears.

Unrelated, I found a funny Star Trek picture.

Tuesday, February 27, 2007

Blog 7

This chapter went much deeper into technology and how it relates to our society today than any of the other chapters. To me, this chapter seemed to show how technology relates to human cultures. It talks about how, when a technological device is used in a culture, the culture itself changes as a whole. One example from the text is the Australians' sardine cans. They use these as a centerpiece for their headwear, choosing them for headgear purely because of their shape. To them the can is only an object, not a can, and they perceive that it should be worn on the head because it is circular. Another example from the text was the different uses of the clock. We use the clock as a timekeeper, whereas in the text the Chinese clock is used as an astronomical calendar; it is used to keep what we would call a calendar. After this, Ihde starts talking about how first and third world countries differ. His main point was that in order to become a very successful country you have to be technologically advanced, and in order to do that you have to be educated. This is why the first world countries are so much more advanced than the third world countries.

Blog 6

If brain scanning did come into use someday, it would have very good effects but also very bad ones. It would be a great tool for finding out in the courtroom who was guilty of crimes. It might also help mentally ill patients, since a brain scan would give doctors a better idea of what is wrong with a patient. The negative effects, however, would also have a very big impact on today's society. If we scan a human's brain and find negative thoughts about someone from a while back that meant no harm, yet the judges still see the bad thought, that person could get accused of the crime even though they are really innocent, and this could be a huge problem. People today might think badly about a person one day and feel totally different the next. This is why these brain scans would be practically impossible to use: we would never find out which bad thoughts were seriously bad and which were just a passing bad thought on a day when someone made them mad.

Blog 5

This chapter of Ihde was still confusing, but I was still able to come to some conclusions. My first observation concerns my interaction with glasses. When we put glasses on, we see the world differently than when we don't have them on. These glasses reshape the world for us. Through embodiment we don't ever really notice these things as objects unless the object encounters a problem. This is very similar to Heidegger's hammer. Another analogy I can make is when we drive a car. We do not think of the car as an object; we feel one with it. The only way we see the car as an object is when we wreck, or something with the car goes wrong; then we see that we and the car are really not one and the same. Ihde mentions that our technologies relate us to our world. This is because without technology we would hardly be able to do anything in today's world.

Reading Minds

The topic of mind reading seems to be a continual area of interest. Of course, there are a few factors to consider in this movement. The first is outside the realm of ethics, and questions whether or not such technology would even be possible. The age of exploration is far from over. Part of this journey, I would assume, would tap into the deep levels of consciousness that remain, as of now, undiscovered. The fact that we do not understand, in its completeness, the way the mind works leads me to question the validity of the ethics of this potential technology. On one hand, I do not believe this particular technological advance is beyond the reach of humanity; and thus, I believe we will one day need to seriously question our motives at the advent of its discovery. On the other side of the spectrum, it has been suggested that our culture cannot keep pace with technological advances. Remembering Ihde's philosophy, we cannot approach technology with the question "can it be controlled?"; instead, we must acknowledge the real issue of "can we control ourselves?"

This draws us into more ethical issues. The ethics of reading minds is something that partially humors me. Imagine the response one receives from a group of people (perhaps two or three) who suddenly realize that their intimate conversation has been invaded by you. I have read some of the posts, and the issue of privacy has always come forward. Our thoughts, our feelings, our fears, our desires: they are all ours to have and no one else's. Another issue is that of community. My faith directs my call to remind others that people are ends in themselves and part of the body as a whole. This delicate balance can only be held while recognizing the dignity of every human life. I would maintain that this involves every aspect of human life, especially our thoughts.

The episode of Robot Stories involving mind scanning still has an effect on my view of this situation. One character conveyed the notion that it would be illegal not to go through the procedure. Why? This cannot just be viewed as a political problem. The philosophical concerns are prevalent, but so are the theological concerns of religious authorities. The characters exhibited greed. There was a deeper need that lay beneath their motives, which hid their pride. They could not stand the thought of being without someone in their "collective." I could foresee the same problem with any advance in technology that would further our ability to read minds. I do not believe that we could limit ourselves. Inevitably, every man, woman, and child would be called to join in this new creation, "a brave new world." And I could foresee the persecution of those who would not conform, leading to an ever downward spiral. For the sake of tolerance, humanity often becomes intolerant. My main objection centers on the loss of individuality: the act of becoming less than human, by any means that treats us as means to an end, and not ends in ourselves. We are part of the whole, but that identity should never overarch our standing as individuals.

Star Trek

The episode of Star Trek was very interesting. The idea of a computer taking over and controlling everything in the future could come true to a certain extent. We may allow computers to do many of the jobs that the computer in Star Trek did. For example, we have planes that use computers when flying on autopilot. When it comes to the actual flying, however, it is up to the pilot, because the pilot ultimately is the one putting the destination and information into the computer. Autopilot just does the easy part of the flying, but when it comes to piloting through turbulence and taking off and landing, we still entrust that to the human pilot. I think as a society we will never let it get to the point where computers run everything about us. We won't be taking orders from computers, and computers won't be controlling our lives. Yes, people may say they already control our lives with the internet, iPods, and cell phones, but that is because we allow them to. In Star Trek, it was involuntary. I am optimistic that as a human society we will never let computers get so powerful and smart that they control us, instead of us controlling them.

Monday, February 26, 2007

Servants or Masters? (Blog #7)

In the opening chapters of the book, Don Ihde postulated a technology-free society, the "New Eden." In the closing pages of chapter five, however, Ihde hypothesizes a technologically "totalized" world, in other words, the opposite of the New Eden. Totalization does not necessarily refer to technology: Ihde offers the Aboriginal hope of realizing the "Dreamtime," which envisions a spiritually, naturally totalized world. What interests our class most, however, is the threat, or promise, of a technically totalized world. The beginnings of such a scenario are seen plainly in the fascinating (sorry Dr. McCoy, I had to say it) episode of Star Trek, "The Ultimate Computer".



Dr. Daystrom's incredible computer invention, the "M-5," represents the climax of his research. It is so advanced that it can run a starship that normally requires a crew of up to a thousand. Kirk, McCoy, Scotty, and even Spock have some reservations about such a technology. Kirk first questions his own motivations, believing, perhaps, that his hesitation results from vanity and pride. Yet a feeling he amusingly describes as a "red alert" in his head (only Kirk) remains. It quickly becomes obvious, however, that M-5 has some ulterior, or contradictory, motives. It starts taking control of the Enterprise, not simply working with its minimal crew. When the M-5 claims the life of one of Kirk's crew, the captain decides to pull the plug, but that becomes quite difficult. The scenario is now familiar to us through a multitude of sci-fi offerings, most notably 2001: A Space Odyssey. The technology was becoming totalized. It had total power over the ship and, consequently, the crew's lives. The M-5 thinks on its own, but not logically (as Spock noted). Daystrom eventually reveals that the computer has been imprinted with his own human brain patterns, to make it think more like a human. Daystrom thereby answered the common criticism of his machines, that they couldn't "think like men." Unfortunately, though, M-5 was still a machine and, with its illogical component, a very dangerous one; one that ultimately claims many lives.



What I found most interesting in this episode was Daystrom's reason for inventing M-5. He wanted it to protect human life by taking dangerous work from humans and allowing them to engage in more important pursuits (philosophy, perhaps?). Like so many inventions designed to facilitate peace and happiness, it actually created disaster. Daystrom may seem deranged and extreme, but we all, in some way, have something in common with him. The "promise" of technology we have discussed envisions a future in which technology has become so advanced and pervasive that human beings can live like Adam did: perfection, contentment, no labor, and no disease. It is ironic, then, that the technologically totalized world and the un-technical world both converge in Eden. At least for Daystrom, the ultimate goal of technology is to free humans and allow them to live a life of contentment. As the Star Trek episode suggests, this is not possible.



The M-5, for all of its technical brilliance, failed to achieve Daystrom’s utopian goal and actually created a dystopian situation. In his treatment of alterity relations, Don Ihde makes it clear that it is impossible for technology to become a true “other” as the M-5 almost did. Ihde believes that it is impossible for technology to become both human and technological. In other words, there is a point at which technical advancement must halt. Or, at least, it will become impossible to tell the difference between humans and technology. Hopefully, this is not possible. As Spock so aptly noted, computers, and technology in general, “make excellent and efficient servants”, but no one would like to “serve under them”.

Blog 7:

As we have learned, there are three human-technology relations (or at least this is what I think I picked up in class; I blame technology for scrambling any information I have acquired):
1) Embodiment - the technological artifact becomes a material extension of the body
2) Hermeneutic - technology is a means for interpretation; it presents information about something that cannot be directly seen (e.g., understanding how cold it is outside from reading the thermometer)
3) Alterity - treating the technology as an object in itself, a quasi-other; anthropomorphism

The supercomputer in the Star Trek episode can be classified according to one of these relations. We can see that the supercomputer was not an embodiment relation, as no personnel used it to extend their bodily functions. The machine had a part in hermeneutics, as it relayed information about the status of the ship and its surroundings to the crew. But the supercomputer is best characterized by an alterity relation. The computer took on a free will, exhibiting human thought by using the intelligence and programming imparted by the humans themselves. Thus, the technology would fall under the formula:
Human --> SuperPC-(-World). However, this means the technology is a quasi-other. It is not a complete human form, which gives some assurance it can be turned off. The presentation in Star Trek showed the supercomputer as a thinking, calculating, and life-fearing individual. It did not want to be turned off for the choices it made. This unique situation raises the thought that perhaps the computer is the one using the technology now, as under an embodiment relation. It could be formulated as follows: (SuperPC-Starship) --> World. The intelligent computer is embodying the ship as its own body, a sort of complete, overall extension of itself with which to interact with the outside world.
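Purely as a toy illustration (my own sketch, not anything from Ihde or the class), the three relations and the two "formulas" above could be written out in a few lines of Python. The class names, example labels, and lookup table are all assumptions made for the example.

```python
# A toy sketch (not from Ihde) of the three human-technology relations,
# using the "formula" notation from the post as the enum values.
from enum import Enum

class Relation(Enum):
    EMBODIMENT = "(Human-Technology) --> World"   # tech extends the body (glasses, a car)
    HERMENEUTIC = "Human --> (Technology-World)"  # tech is read to interpret the world (thermometer)
    ALTERITY = "Human --> Technology-(-World)"    # tech faced as a quasi-other (the M-5)

def classify(example: str) -> Relation:
    """Hypothetical lookup pairing the post's examples with a relation."""
    table = {
        "eyeglasses": Relation.EMBODIMENT,
        "thermometer": Relation.HERMENEUTIC,
        "M-5 supercomputer": Relation.ALTERITY,
        # The post's second formula, (SuperPC-Starship) --> World, reads the
        # M-5 itself as embodying the ship: an embodiment relation with the
        # computer, not the human, in the subject position.
        "M-5 embodying the Enterprise": Relation.EMBODIMENT,
    }
    return table[example]

if __name__ == "__main__":
    for name in ("eyeglasses", "thermometer", "M-5 supercomputer"):
        print(f"{name}: {classify(name).value}")
```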

On a completely different note, does technology have to have some sort of intelligence in order to conquer and control humans? We ourselves give technology a certain life that we depend on, as seen in the comic here. It's kind of hard to read, but in the first caption the boy is saying, "The poor young chap's iBook died! And his iPod! His iPod Nano! His Shuffle!! His Blackberry! His Gameboy! His web-browsing, instant-messaging, game-playing, musical phone!"
In the last caption the boy says, "We killed him."

Atmospheric Fear of Technology in Star Trek

In our reading, Ihde explains the idea of technology as a cocoon where all aspects of our life are mediated by technology (Eve and her spaceship). In the Star Trek episode "The Ultimate Computer," we see the ultimate fantasy in which humans have taken this to the extreme. In class we discussed that we have a desire to become technology; however, we are afraid of what this would do to humans. We called this concern the "atmospheric" fear of technology.

"In the future...." are often the most spoken words in science fiction. The reason being of course that there is a broad spectrum of possibilities in the future. This plot has been used a lot in many movies and books today. Apparently the idea of technology "taking over" our lives scares most people. The intentions of using such technology are always good and everything goes well at first. However in almost every plot the computer loses its mind (in a human sense). The computer's logic no longer aligns with our human logic. Eventually of course the humans are able to shut off the computer.

In the Star Trek episode, a supercomputer is downloaded into the ship's systems, where it controls almost all aspects of the ship's functions. Kirk is naturally upset that a computer is taking over his job as captain. One quote that really caught my attention was when Kirk said, "There are certain things men must do to remain men," and that the computer was taking that away. I feel that Kirk (along with most humans) defines himself through his work (what he does). However, technology is limiting that and, in some cases, even taking it away. As a result, we have split feelings when it comes to technology. We enjoy some of the freedoms it provides (when our relationship with the world is transformed), yet we are hesitant because of the way we are defined by the technology.

Star Trek

Having never seen a Star Trek episode until the other day, I found it very interesting: the idea of a machine, the M-5, having the ability to do all that a human could. The M-5 detected things before the human brain had even processed them. The episode also showed some of what Ihde is talking about in the book. Ihde discusses the ideas of technological totalization and atmospheric fear. The machine took over all aspects of the ship, doing things without permission from humans. The M-5 in a sense fought back and was in control of the humans. They tried to turn off the machine but were unable to. A future in which technology takes over mankind is a bit scary to think about. Star Trek touched on the idea that the Captain was losing his job. Many may worry about losing jobs to technology, but an even bigger fear comes when we think about technology taking over. There is a constant fear of nuclear annihilation. We continually make technology to beat the prior development, and with that we are creating something that could destroy us or take control.

Sunday, February 25, 2007

M-5 Computer in relation to Toyota

In the recent Star Trek episode we watched in class, they talked about the wonderful M-5 computer. This computer basically ran the ship by itself, thus allowing the ship to be run with fewer people and allowing it to do more things. The only reason people were on the ship was to make sure everything was running as it should. Last semester I was fortunate enough to tour the Toyota plant in Georgetown, KY. The M-5 computer reminded me of the plant in Georgetown. There were robots that actually did the work for the people. The person's job was just to inspect the part after it was finished, to make sure it was up to Toyota standards. These robots save a bunch of time in the production of cars. Toyota says they average 30-plus cars rolling off the assembly line every hour. In the episode, the M-5 computer started to malfunction, and it was really hard if not impossible to fix. While I was there, one of the robots started to malfunction as well. Toyota said that every time one of these robots malfunctions it costs them anywhere from 5 to 10 cars in delay time, which equals lost money. Another thing I found really interesting was that Toyota had robots called A.G.V.s, which stands for Automatic Guided Vehicles. These A.G.V.s were computerized vehicles that take particular parts to different areas of the plant without the operation of a human being. They are computerized so the vehicle knows exactly where to turn and stop. I felt that was pretty cool.

Brain Scanning

Brain scanning could be a valuable asset in solving criminal cases. But as well as being an asset to law officials, it can also violate the privacy of those receiving the actual brain scan. So, like everything else, there are pros and cons to brain scans. On the pro side, law officials could scan a suspect's brain and determine whether that particular person in fact committed the proposed criminal act. This process would actually cut the cost of the investigation process: all the investigation would consist of is scanning the brain of the suspect. It would shorten the trial process as well; there would be no need for trial by jury any longer. With the results of the brain scan, you're either determined guilty or not guilty. On the con side, it invades the privacy of the suspect, in my opinion, but that really doesn't matter because of the Patriot Act. They can listen in on your phone calls, so why not just scan your brain? So the invasion-of-privacy issue is pretty much ruled out, because they already have the right to do just about anything with the implementation of the Patriot Act. Another reason it could be a good idea is to cut back on, or even eliminate, cases of innocent people being found guilty. All in all, I feel it would be a good idea.

Saturday, February 24, 2007

"The Ultimate Computer"

Imagine a time in the not-so-distant future when society is completely dependent on machines. A technology for every human function has been invented. Due to such modernization, the human race has become virtually unneeded, a thing of the past. The fear of all phenomenologists, that technology would make man outdated, has been fulfilled. This episode of Star Trek addresses this atmospheric fear of technology, when the ultimate M-5 computer is installed on the Enterprise. Soon Captain Kirk discovers that the majority of his loyal crew has been replaced by this machine, which is able to think as a human does. The technology begins to override Kirk's commands and becomes so powerful that no crew member can disable it. No longer does man control technology and use it to make life easier and effort-free; technology has evolved to control men and use the human race as its pawn. In reality, there is no technology, no matter how revolutionary, that can completely replace humans. A machine can be constructed to perform human motor functions, think somewhat like a man, and perhaps in some cases be more efficient than a man. However, human nature, including free will and complex emotional processes, cannot be reproduced in any artificial intelligence machine, regardless of the technology available. I believe there is a line that must be drawn as to how advanced we should allow our technology to become. As society becomes more futuristic, the possibility of the fulfillment of our worst fear draws nearer. The human race is approaching that fine line between man and cyborg.

Thursday, February 22, 2007

Will our emotions betray us?

Dear Society Members,
I can see the future as it unfolds around us. No longer do we need the Nazis turning our families against us. Now we can bring people into the hospital for their yearly checkup, learn what they're thinking, and put them into the concentration camp before they do something against the state. We will look at what they are thinking about our government and determine if they have any revolutionary ideas. We can also look into their thoughts about their fellow man. Maybe they are looking with lust at their neighbor's wife. We know by our research that 58% of all men who are thinking these thoughts about another man's wife will make inappropriate advances sometime in the future. These advances undermine the very core of our society, and we few have looked at the Constitution and decided that it gives us preemptive status in these matters. We have consulted our lawyers, and the best way to handle this matter is by chemical castration during their quarterly vitamin insert. This will ensure that they have no knowledge that any action has taken place, and we will remain a secret society as always. Our forefathers have dreamed about this very thing from the beginning. It is clearly evident in the Declaration of Independence that they wanted us to protect the society from these rogue individuals. We have developed the technology for this very purpose.

Concerning the upcoming election, we may need to focus on the oppositional candidates and bring them in early for their physical to determine whether they need incarceration or cancellation. Although incarceration is the preferred public choice in these matters, the cancellation of their life status will remove any possibility of their infecting the rest of the country.

Thank you for your time in these matters. As always, I remain your faithful servant.

David Honaker
President
Sons of Founding Fathers
PS Don't forget that your yearly checkups are due next month. Have a nice day.

Wednesday, February 21, 2007

Is brain scanning for crimes possible?

First, let us examine the path modern brain imaging has taken. Brain scans (such as CAT, MRI, PET, and others) look at the ways the brain reacts to a stimulus. This is how we came to figure out which parts of the brain deal with each of the senses. Neuroscientists (for the most part) try to develop more accurate scanning techniques. The problem with this method of examining the brain is that it only deals with regions. Other neuroscientists are working on mapping the connections between neurons. By understanding the way the layers in the neocortex interact, these neuroscientists are discovering the way the brain learns and remembers. By tracing neural networks, neuroscientists can understand how memories form.

Second, let us investigate the claim that brain scanning can predict a person's actions. At present, the correlation between brain scan and action is 30%; for our inquiry, let us assume the technology will advance to a higher percentage. Is there a necessary relation between the state of the brain at one point (i.e., when a scan is done) and some future action (most likely criminal)? If the answer is no, then the brain-scan-for-future-events is of no use, since we cannot be sure of what will happen. If the answer is yes, then there is no such thing as free will. Without free will, many problems arise. If a person lacks free will, then they cannot do anything to change the course of events. In this case, the person cannot be held responsible for their actions, since the actions necessarily had to happen. Is a criminal action in this case a societal evil? The answer is not apparent.
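Even setting the free-will question aside, a weak scan-to-action link causes a practical problem: most people flagged by a scan would never actually offend. Here is a minimal back-of-the-envelope sketch in Python. The 30% figure comes from the paragraph above, but reading it as a detection rate, along with the assumed base rate and false-alarm rate, is purely my own illustrative assumption.

```python
# Minimal sketch: how often would a "positive" scan actually precede a crime?
# The 30% figure is from the post; every other number here is an assumed,
# purely illustrative value.
def positive_predictive_value(base_rate, hit_rate, false_alarm_rate):
    """P(person offends | scan flags person), via Bayes' rule."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    base_rate = 0.01          # assume 1% of scanned people would actually offend
    hit_rate = 0.30           # read the post's 30% as the scan's detection rate
    false_alarm_rate = 0.10   # assumed rate of flagging people who never offend
    ppv = positive_predictive_value(base_rate, hit_rate, false_alarm_rate)
    print(f"Chance a flagged person actually offends: {ppv:.1%}")
    # ~2.9%: under these assumptions, most flagged people would be innocent.
```

Under these (hypothetical) numbers, a flagged person has only about a 3% chance of actually offending, which sharpens the worries about jails filling up in the next paragraph.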

Also, what effect would brain scanning have on the populace? When would people be scanned? Who would be scanned? If scanning were done randomly (like "random" checks at the airport), the scanned person would be seen as suspicious. The suspicion associated with scanning might lead to discrimination and possibly hate crimes. However, if everyone was scanned, what would happen then? Increased state policing with enough resources to scan every person is a frightening image. What if a person was found likely to commit a crime in the future? The obvious choice for criminals is to send them to jail, but jails would quickly fill. Jails would be a means of keeping the potential criminal away from the rest of the population. Of course, the jail could not be for re-education, since there would be a necessary relation to the action, which cannot be avoided. The law would need to change also, since our system of laws takes as one of its bases innocence until proven guilty. Another option for what to do with potential criminals is capital punishment, which would amount to a form of negative eugenics.

What would be the signal of a future crime? Are all possible situations (i.e., the movements and ideas involved in the act of a crime) already in the brain, or is the mere thought of a crime enough?

Third, and slightly off-topic, let us briefly examine the relationship between brain scanning and AI. Brain scanning (and relatedly AI) tends to focus on the relationship between stimulus and reaction. By documenting reactions to stimuli, AI developers hope to mimic human reactions. However, in order to reproduce human intelligence, AI developers need to recreate the way humans learn. In this way, an AI machine would be able to learn and respond in the fashion of a human being, not just respond to a set list of stimuli.
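To make that distinction concrete, here is a small sketch contrasting a fixed stimulus-response table with a system that learns a rule from examples and then generalizes. The dialogue table, the tiny perceptron, and the data are all my own assumptions for illustration, not anything from the readings or from On Intelligence.

```python
# Sketch of the contrast above: a fixed stimulus->response table can only
# answer what it was explicitly given, while even a trivial learner can
# generalize from examples. All data here is illustrative only.

# 1) Fixed stimulus-response table: fails on anything unlisted.
responses = {"hello": "hi", "how are you": "fine"}
def scripted_reply(stimulus):
    return responses.get(stimulus, "<no programmed response>")

# 2) A tiny perceptron that *learns* a rule (logical OR) from examples
#    and then responds correctly across the whole input space.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

if __name__ == "__main__":
    print(scripted_reply("hello"))          # "hi"
    print(scripted_reply("good morning"))   # "<no programmed response>"

    or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b = train_perceptron(or_samples)
    for (x1, x2), _ in or_samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        print(f"OR({x1},{x2}) -> {out}")
```

The point is only the contrast in kind: the table responds to a set list of stimuli, while the learner acquires a rule, which is closer to the direction the paragraph above says AI development needs to go.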

I am much indebted to Jeff Hawkins, author of On Intelligence.