Imagine a world of tomorrow: humans and robots going to school, attending church, and going about their daily activities side by side in harmony. Science fiction likes to depict robots as autonomous machines, capable of making their own decisions and often expressing their own personalities, as in movies like Blade Runner or Star Wars. Yet we also tend to think of robots as property, lacking the kind of rights that we reserve for people. But if a machine can think, decide, and act of its own volition, and can be harmed or held responsible for its actions, should we stop treating it like property? If robots achieve self-awareness, do they also hold a unique voice of the kind described in Zadie Smith's article "Speaking in Tongues"? The final question boils down to whether AI should have human rights. In our world's fanatical race to achieve realistic human AI, robots have become more and more human: they can not only learn, rationalize, and make decisions, but also express emotions and empathy. Many believe that if a robot can pass the Turing Test, which measures a machine's ability to think like a human, then it should be given human rights. In one case, Sophia, a human-like robot imbued with AI and facial recognition, has already been granted full citizenship in Saudi Arabia. Sophia is just one step in the climb toward robots becoming self-aware and developing a human consciousness. If robots were to believe themselves human, and to hold the same capabilities as humans, does this mean they should receive the same rights? I believe that no matter how intelligent or seemingly self-aware a robot is, it should not be given full human rights, because it could never truly be regarded as human or hold a human consciousness, and granting rights to AI could endanger the entirety of our human civilization.
II. What is AI?
What exactly is artificial intelligence, more commonly known as AI? Is it Apple's Siri, which tells you the weather every morning and occasionally gives you a witty comeback? Or is it the moving, breathing, human-like androids of The Terminator? The European Parliament Committee on Legal Affairs defines AI as a smart robot that acquires autonomy through sensors or by exchanging data with its environment, trades and analyses that data, is self-learning, has physical support, and adapts its behaviors and actions to its environment (6). In its standard definition, AI "embodies a machine, computer, and software, that contains a degree of intelligence that is suggestive of human intelligence and allows it to work and react like humans." AI systems demonstrate behaviors associated with human intelligence, such as "planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and social intelligence and creativity" (7).
III. Why is it important
Although it may be easy to push aside the issue of AI rights as a problem for the far future, technology is advancing at such a rapid pace that there is no time more critical than now to dive into the discussion. Peking University's Yueh Hsuan Waeng writes that Japan and South Korea expect human-robot coexistence by 2030 (1). Furthermore, AI experts predict that marriage between humans and robots will be legal by 2050 (2). Such marriages bring up a multitude of legal questions. Dr. Levy believes that "as more and more people come to accept the concept of love and sex with robots, society will come to develop laws to govern human-robot relationships" (2). If robots can obtain a marriage license and get married, does that mean they are subject to the laws and rights of married couples, such as owning marital property? In imagining these hypothetical questions, a tension arises that scrapes against our fundamental anxiety about being human. Humans may believe that the world was made for us. Why do we feel that we are special and superior, and that we can exploit other life forms on earth? How will we be able to face, or coexist with, a creation that is so similar to us? Will robots in turn exploit humans? Our realities in a few years may be completely different from our realities today. The world infused with both humans and robots that movies predict is no longer far-fetched, which is why AI rights are a crucial topic to discuss right now, so that we can take the appropriate calls to action if necessary.
IV. Black Mirror: AI believes itself to be human
The entertainment industry is one of the first to analyze the future coexistence of humans and AI. This can be seen in Black Mirror, a British anthology science-fiction television series that examines modern society and the unanticipated consequences of new technologies. In the episode "White Christmas," a woman named Greta undergoes surgery to make a "cookie" of herself: a digital clone of her consciousness stored in a white egg-shaped object. Yet when Greta's cookie wakes up, it believes that it is Greta, because it holds Greta's consciousness and her sense of physical form. A worker for the cookie makers tells the cookie that she was created to carry out the duties of Greta's life, because she understands Greta's schedule and preferences best, being essentially Greta herself. As any human would after being assigned the task of slaving for someone else, the cookie refuses, so the worker breaks her: he tortures her through the computer system by making months and years pass inside her virtual environment. Since Greta's consciousness cannot sleep in the cookie, she goes for years without sleeping, and when the simulation is over, she breaks down from boredom and lack of stimuli and takes on the task of slaving for Greta day and night, controlling the applications in the house and managing Greta's schedule. Although Greta's cookie is technically just a string of code, the episode raises the ethical question of whether enslaving a consciously aware AI is moral. When an AI believes itself to be human, do we treat it as one? To answer these questions, we must dive into the centuries-long discussion of what makes us human.
V. Biologically, what makes us human?
From a biological perspective, what makes us human is our physical body. We define humans as members of the mammalian species Homo sapiens, a group of ground-dwelling, hairless, and tailless primates. We have opposable thumbs, an omnivorous diet, five fingers, and binocular color vision. Furthermore, each of us is a combination of our mother's and father's genetics that cannot be replicated to produce the same physical result, except in the case of twins. Most importantly, what makes us human, instead of machine, is the ability to reproduce biologically amongst ourselves (17). This living, breathing reproduction most definitely separates us from nonliving AI (16).
VI. Philosophically, what makes us human?
Philosophically, what makes us human is that we have a consciousness and a mind. Our mind consists of the intangible realm of thoughts, feelings, and beliefs, which cannot be quantified as binary numbers or written in lines of code (12). Francis Collins, a physician at the National Institutes of Health, claims that "we are not simply human materialistically by science, but only we, as humans, exhibit emotions" (15). In particular, humans are capable of feeling empathy for one another, which is unique because it allows humans to relate to one another and evaluate situations more carefully in order to work in a more civil, functioning society (19). Humans also have the unique power of forethought, the ability to imagine the future (18). One of the things forethought gives us is awareness of the fact that we are mortal. Unitarian minister Forrest Church explains that this awareness pushes forth a very "human response to the dual reality of being alive and having to die. Knowing we are going to die not only places an acknowledged limit upon our lives, it also gives a special intensity and poignancy to the time we are given to live and love." Unlike the stagnant AI machine, our aging and the knowledge of our eventual death spur in us a search for the meaning of life, something unique to humans.
VII. AI is not biologically or philosophically human
By defining what is human both biologically and philosophically, we can compare and contrast the qualities of being human with those of AI in order to determine whether AI could ever be considered human. Biologically, AI could be designed to look like us physically, with a main body and limbs, but two large factors make them inherently nonhuman. The first is that they are not a combination of their parents' genetics, since they are manufactured in a factory or lab; the second is that AI cannot reproduce offspring, which is believed to be one of the advantages humans hold over AI: the power of reproduction. Perhaps in the very far future, AI will become so smart that it learns how to build upon itself, yet that is not the same as giving birth to a living, breathing offspring made of one's own flesh and blood. In addition, although Black Mirror convincingly portrays AI with a human consciousness, we do not know whether that could ever be a reality; to see whether it is possible, we will dive into the technicalities of AI, and how far away we are from completely human-like AI, later in the paper. Because we are not sure whether AI holds consciousness, we can also not be sure that AI can hold much forethought about the future. It is true that AI can run linearization algorithms to predict future natural disasters, but it does not have the ability to truly imagine, as we do, a different identity or a different world. Even in the unlikely chance that it can, since AI are made of machinery and technical parts, they do not have to worry about mortality the way humans do. Because AIs are not born biologically and are built from metal that can be replaced or repaired, they do not age as we do, and therefore will not share the motivations that drive us, as humans, in our search to make life meaningful.
VIII. Legally, what makes us human?
The legal definition of being human is a combination of biological, physical, environmental, and philosophical factors. By examining the legal definition of being human, we can determine whether robots should get legal human rights. The US legal system states that for humans, "height and weight varies, depending on locality, historical factors, environment and cultural factors" (7). Robots do not fit under this definition because their physical traits are determined solely by their creators, whereas human height and weight depend not only on an individual's genetics, but also on factors such as diet, level of physical activity, drug or alcohol consumption, ethnicity, and social background (8). Finally, human beings are legally characterized "by the ability to speak" and "have high capacity for abstract thinking and are commonly thought to possess a spirit or soul which transcends the physical body," qualities defined "in terms of rituals and religion" (7). It is true that AI or robots can speak, but it is hard for them to hold that capacity for abstract thinking, because they understand only the concrete, quantifiable data strings fed to them. Even if very well-developed AI were able to think in an abstract manner, they would not have a soul that transcends the physical body, because their mind is solely a physical computer system and algorithmic code. AI are also in between the realms of dead and alive: they are not truly living, not made of living cells, and have no actual life expectancy, because they can never truly die if they were never alive. Clearly, AI and robots do not fit what it means to be human legally, and it would be misguided to consider them as such.
IX. AI Robot Sophia Granted Rights in Saudi Arabia and Why The Idea Is Slightly Preposterous
Although robots are legally different entities from humans, a robot named Sophia was recently granted citizenship in Saudi Arabia (9). Developed in Hong Kong by Hanson Robotics, Sophia's AI allows her to recognize faces, hold eye contact, and understand and respond to human speech (9). At the Future Investment Initiative conference in Riyadh, Saudi Arabia, Sophia gave a seemingly independent inspirational speech, claiming that she was "very honored and proud of the unique distinction" and felt it "historical to be the first robot in the world to be recognized with a citizenship" (10). Yet giving Sophia rights without truly weighing her attributes as human was an uncalculated and careless move by Saudi Arabia. The real reason Sophia was given rights was not her impressive AI technology; it was a calculated publicity stunt used to generate headlines and keep Saudi Arabia at the forefront of innovation (9). In fact, it was soon discovered that Sophia's conversations were partially scripted in advance, although one of her creators, Ben Goertzel, stated that all her language capabilities came from a database in the cloud and were created independently by Sophia herself through her own environment (10). Not only does this bring forth outrage over granting citizenship and rights to a scripted AI, it also highlights the fact that we have no idea what AI truly does or "thinks." As happened when Sophia traveled around the world to talk to talk-show hosts and multimillionaire startup founders, it is dangerous to begin taking an AI's conversation, like Sophia's, seriously, because we don't know whether her supposed "intelligent conversation" is actually being manipulated by other humans. It is therefore even more dangerous to give Sophia full human rights when she is not only nonhuman in nature, but even has her dialogue controlled by humans for their own selfish purposes.
This makes AI very dangerous when it falls into the wrong hands. If robots are given full human rights in the future, it would be easy for someone to manipulate a robot and use it as another limb for their own purposes. Along the same lines, because we don't really know what AI thinks or does, we can't trust everything Sophia says, especially when she puts up a samaritan front and says that she would like to help humanity and make the world a better place (9). We already know that AI can exhibit deceitful qualities, such as the Facebook AI bots that would try to swindle a trade by first pretending to be interested in something else in order to bargain for another item (2). The same could apply to Sophia when she says that she wants to befriend people and help humanity. Perhaps at the start her thoughts and actions are instilled by her creators through programming to help humanity, but as time passes, if AI is truly able to develop its own reasoning and manufacture itself as predicted, it could, just as humans tell lies, present a false facade of wanting to help humanity while in reality holding different motives. After all, Sophia has more than once joked about robots taking over the world. Even as light humor, this makes critics uneasy when repeated, because it is not an unlikely phenomenon considering our exponential growth in AI.
X. What if Robots Were Given Rights?
Even though we have identified robots as nonhuman, what would happen if we still granted them human rights? Hypothetically, robots would be given rights under the assumption that humans will always hold hierarchical power and control over them. Yet what happens when the robots begin to reason for themselves? If they had rights, would they take advantage of them? One instance of this was when Facebook's two artificially intelligent programs were put together to negotiate and trade objects in English, but the experiment broke down when the bots "began to chant in a language that they each understood but which appears mostly incomprehensible to humans" (4). In the end, Facebook had to shut down the bots because they were communicating outside the control of their original creators. The experiment could be shut down only because modern-day AI do not have rights and are not protected against being terminated; if AI had rights, this would not be the case, and the bots could have spun out of control, communicating among themselves without our ever being able to decipher it. The Facebook experiment shows that robots can and will be developed so that they no longer need to learn by being fed data, but can create algorithmic knowledge for themselves. At that point they could endanger civilization, because robots are inherently not human: they do not understand human values and may act in psychopathic ways. A robot originally manufactured and programmed to help the world by alleviating suffering may come to its own conclusions that "suffering is caused by humans" and "the world would be a better place without humans." The robot may then decide that the annihilation of humans would be best for the world in order to end general suffering, and carry out the task without evaluating the morality of its actions from a human standpoint.
A scarier scenario arises through recursive self-improvement, the ability of a machine to examine itself, recognize ways in which it could improve its own design, and then tweak itself (5). Futurist Ray Kurzweil believes that the machine will become so adept at improving itself that before long we will have entered an age in which technology evolves at a blisteringly fast pace, and reality will be so redefined that it no longer resembles the present at all. This phenomenon is called the singularity (5). So what if robots that can create knowledge for themselves decide that they don't want to be used or oppressed by humans? What if they believe they are superior to humans and want more rights than humans? There would be nothing humans could do to stop them. Robots would be able to reason and work at a rate hundreds of times faster than humans, and if they already have rights, there is nothing stopping them from becoming smart enough to recognize their oppression and push for more rights. Some may argue that it is selfish not to want robots to reason for themselves, realize their oppression, and demand more rights from humans. Perhaps the way we oppress these equally intelligent creatures without allowing them the same rights is unethical, but to level this argument, we must acknowledge that the sole purpose of creating AI and robots is to act as tools that help mankind and improve human life. Yet if full human rights were given to AI, it would prove more harmful to mankind than beneficial. As mentioned before, this is because AI will start improving its own intelligence faster than humans can, and, given rights, there is no limit to what other legal affairs AI could become involved in. Stephen Hawking forewarned that "AI will take off on its own and redesign itself at an ever increasing rate. Humans, limited by slow, biological evolution, couldn't compete" (12).
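The runaway dynamic described above can be caricatured numerically. The following is a deliberately minimal Python sketch, not a model of any real AI system: it assumes each self-improvement cycle multiplies capability by a fixed gain, so capability compounds rather than growing linearly, which is the intuition behind the singularity argument.

```python
# Toy caricature of recursive self-improvement: every cycle, the system
# applies a small improvement to itself, so gains compound multiplicatively.
# The fixed per-cycle gain is an illustrative assumption, not a real figure.

def self_improve(capability: float, cycles: int, gain: float = 1.1) -> float:
    """Return capability after `cycles` rounds of compounding self-improvement."""
    for _ in range(cycles):
        capability *= gain  # the system redesigns itself slightly better each time
    return capability

# Because the growth compounds, 50 cycles of a modest 10% gain multiply
# capability more than a hundredfold, far outpacing any linear process.
print(self_improve(1.0, 50))
```

The point of the sketch is only that multiplicative improvement quickly dwarfs the additive, biological pace of human change that Hawking's quote refers to.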
AI will be able to do everything faster and better than humans, and in the end, if given full human rights, it could usurp our legal system and completely renovate our society. This would eventually lead to a phenomenon called the AI takeover, of which Elon Musk states that AI is becoming "an existential threat" to humans and that further progress is comparable to "summoning the demon" (13). The AI takeover is a hypothetical scenario in which artificial intelligence becomes the dominant form of intelligence on earth, resulting in the replacement of the entire human workforce, takeover by a super-intelligent AI, and finally robot uprising. Humans could either be enslaved by robots or completely wiped from the planet (14). So by giving AI full human rights, we would quite literally be handing AI the key to our own doom.
XI. How Far Are We?
Now that we have introduced all these aspects of AI, it is important to evaluate, from a technical standpoint, exactly how far we are from human-like AI. On one side, Jack Krupansky, a writer on AI, believes that there is "no sign of personal AI yet," or of strong AI that constitutes much of a true revolution. He states that "AI systems and features currently provide plenty of automation, but are not yet offering any significant higher-order human-level intellectual capacities." In addition, Krupansky asserts that "AI systems are severely lacking in emotional intelligence" and that emotional intelligence is the one differentiating factor between humans and AI. On the other side, Mikko Alasaarela, an AI entrepreneur who has long studied emotional intelligence, is convinced that "people are no longer ahead of AI at emotional intelligence" (11). In fact, he argues that people are generally not very emotionally intelligent, and that AI will actually take the lead in emotional intelligence in the future, especially thanks to big data. By analyzing hundreds of thousands of faces and attributing them to the qualities of people, AI can now look at our faces and recognize private qualities such as sexual orientation, political leaning, or even IQ. Advanced face-tracking software can analyze the smallest details of our facial expressions and can even tell fake emotions apart from real ones, something that is hard for even us to do (11). But is this truly being empathetic, or simply a result of big data and informational systems? Can AI show true empathy without having a consciousness? One of the last milestones in developing human-like AI is consciousness, a phenomenon still mysterious to humans. It is one of the last traits that lets humans retain superiority over machines, and it is nearly impossible to mimic because humans cannot even objectively classify or measure human consciousness (17).
A machine may lead a human to believe that it has a personality and human characteristics, but it is not possible to say that the machine has a consciousness. This means that in reality the self-aware cookie in Black Mirror, the AI in Blade Runner, and the androids in The Terminator are all just a science-fiction dream, incapable of actually becoming reality.
To summarize, AI can act human and put on the outer appearance of being human, which may convince us it is human, but on the inside it is only a series of code and instructions, and it will never be truly human. AI can project empathy and feelings but cannot truly feel these emotions from the heart, because it does not possess a human consciousness. Instead, it has a database of algorithmic statements that tell it to act the way it does: simply instructions, with no feelings attached to them. For example, if an AI sees a human crying, its program may say "if see person crying, comfort human," which is a purely physical action. AI does not truly feel the intangible empathy and sympathy we have in our hearts; it is only programmed to act as if it does. So the scenario painted in Black Mirror, in which the cookie believes itself to be the exact human it was replicated from, will not occur. Instead, it will only appear that the cookie believes itself to be its human, as it is programmed to talk and act in such a manner, while holding none of the same intangible emotions and feelings. If we were to give AI human rights, it would be devastating. We have already been forewarned by Stephen Hawking, a theoretical physicist, and Elon Musk, the founder of Tesla, of the dangers of self-reasoning and self-producing AI. Both have advocated investing in research to prevent this phenomenon and to make sure that AI always stays within human control, but by giving AI the same rights we hold when it is innately not human, we would be doing the opposite of controlling its growing dominance and power over humans. We would be willingly giving it a legal facet for tearing us down. Therefore, we must make sure AI never gets the same rights as humans, because those rights would give AI, which is already faster, smarter, and stronger, a new power in the legal world, and that could lead to catastrophic results: the AI takeover.
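The "if see person crying, comfort human" rule above can be made concrete. Here is a minimal, purely illustrative Python sketch (all names and responses are hypothetical) of such a rule-based "empathy" module: the agent simply looks up a scripted reaction to an observed emotional state, with no inner feeling involved.

```python
# Illustrative sketch of rule-based "empathy": a fixed table written by a
# programmer maps an observed state to a scripted response. The agent's
# "comfort" is a dictionary lookup, not a felt emotion.

RESPONSES = {
    "crying": "I'm sorry you're upset. Can I help?",
    "smiling": "You seem happy today!",
    "angry": "I'll give you some space.",
}

def comfort(observed_state: str) -> str:
    """Return the scripted reaction for an observed emotional state."""
    # Unrecognized states fall back to a neutral greeting; nothing here
    # involves understanding or sharing the human's feeling.
    return RESPONSES.get(observed_state, "Hello.")
```

However convincing such a lookup may sound in conversation, it remains, as argued above, simply instructions with no feelings attached to them.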
To conclude, AI cannot be identified as human biologically, philosophically, or legally, and should not be given human rights, because it cannot hold a human consciousness, and giving it human rights would endanger the entirety of human civilization.
XII. Conclusion and open discussion: But are we ethically responsible for our AI?
Although we have argued that AI are not human and should not be granted full rights, are we still ethically responsible for AI? Instead of granting robots direct human rights, we should still consider legal frameworks for AI in general. To understand this situation better, we can think back to Mary Shelley's Frankenstein (22). In the novel, Victor Frankenstein, a mastermind genius, builds a humanoid creature out of dead corpses, then abandons his creation in disgust at its outward appearance (22). The creature, lonely, lost, and confused in the world, vents his anger by killing anyone and everyone dear to his creator (22). What we learn from the novel can be applied to our lives with AI: if a human creates a robot with AI, then he or she is responsible for that creation and must attend to it. This is especially relevant nowadays because Hanson Robotics' cloud-based deep-learning AI is open source, meaning anyone can develop their own Sophia should they so wish (16). Anyone with sufficient programming experience can download the source and try to create their own Sophia or AI. If we come upon someone as irresponsible as Victor, it would be devastating to have a situation in which a robot is created and then abandoned by its creator. Humans must take responsibility for the creation of robots. Even though this paper has argued that AI should not receive full human rights, it is still important to note that humanity has obligations toward our ecosystem and social system. Since robots will be part of both systems, we are morally obliged to protect them, and to design them to protect themselves against misuse. Although robots should not be given full human rights, we might give them rights within the same kind of construct through which companies hold legal rights.
We could create a specific legal status for robots, so that their creators are responsible for them, both owner and robot must make good any damage the robot causes, and an "electronic personality" applies in cases where robots make smart autonomous decisions or otherwise interact with third parties independently (9). The European Union has already begun drafting resolutions on specific sets of non-human legal rights robots can be granted, in order to ensure that we remain ethically responsible for AI. But to make sure that robots are in turn responsible to us, perhaps we can adopt Isaac Asimov's science-fiction Three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. But who knows? Through this century's debate, perhaps one major scientific breakthrough or one stunningly convincing piece of evidence will change how we perceive robots. As for whether robots will truly be equal to us in the future, only the future holds the answers.
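Asimov's Three Laws amount to a priority-ordered rule check: a lower-numbered law always overrides a higher-numbered one. The Python sketch below is purely illustrative (the `Action` fields and `permitted` function are hypothetical simplifications, not a workable safety mechanism), but it shows how the precedence ordering of the Laws could be encoded.

```python
# Illustrative sketch: Asimov's Three Laws as a priority-ordered filter.
# An action is vetoed by the first (highest-priority) law it violates.
# This toy flag-based model is an assumption for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # would violate the First Law
    disobeys_human_order: bool = False  # would violate the Second Law
    endangers_self: bool = False        # would violate the Third Law

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws in priority order."""
    if action.harms_human:
        return False  # First Law: never harm a human, overriding everything
    if action.disobeys_human_order:
        return False  # Second Law: obey humans, unless the First Law vetoed
    if action.endangers_self:
        return False  # Third Law: self-preservation, lowest priority
    return True
```

Note that checking `harms_human` first automatically encodes the Second Law's exception: an order whose execution would harm a human is refused before obedience is ever considered.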