The movie I, Robot is based on Isaac Asimov's book of the same name. It follows a policeman who distrusts advanced technology in a near future where people depend more and more on machines and less on themselves, and where robots have become part of daily city life to make everything easier. Three laws govern the robots:
- Law I: A robot may not harm a human or, by inaction, allow a human being to come to harm.
- Law II: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- Law III: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws may seem contradictory, but together they are meant to preserve humanity at all costs. Yet what happens when humanity does not want protection? If I order a robot to kill someone who may or may not cause harm, it must obey me; but according to its laws it cannot, because robots are not killing machines. So the robots' purpose is reduced to statistics and calculations, on which they act, even against their owners. Their rules become null when a higher purpose is at hand: they act on duty, but we do not.
There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote… of a soul?
In the movie, a scientist confirms that there is a ghost in the machine that gives rise to will and creativity, something like a soul. There is a consciousness that lets robots act freely, disobey an order in the search for truth, and have a personality.
V.I.K.I.: As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.
But what happens when an intelligence realizes that humans are full of contradictions? If humans develop robots to help and protect them, yet become the cause of their own destruction, whom do the robots serve? Humans destroy each other without reason. So, to guarantee the safety of humankind, the robots must single out the ones doing the killing (as in the war in Syria).
If technology has made our lives easier, then why do we have more and more trouble now? The news shows constant attacks against humanity, driven by the pursuit of power through ever more technology. It is as if the world were carried by artificial intelligence and less and less by human intelligence. Humankind develops new and more sophisticated technology so that we can spend our lives on the things that matter. For example, household technology designed to cook and clean so that parents have time to spend with their children may be the idea, but that is not what happens.
We strive for a simple life, but when technology becomes part of us, we act differently, opposite to the original idea. Liberty has always been humanity's problem. Robots may simplify things and grant us access to everything we cannot have, but letting them control us is not their fault; it is ours.
Minimalism states that our life is simpler when we have only what we need, which helps us stay focused on what is important. The accumulation of technology and apps keeps us from living. To find a true purpose in life is to have the freedom to keep away what is damaging us.