Different style

Hello fellow readers! I will be changing up my blog style, because I realized this blog has transformed to become similar to my professional platform–which also means it was the first thing that popped up when I searched my name. So, instead of detailing all the blurbs of thoughts I have in a seemingly sporadic, irrelevant way, I think this will become a platform where I share interesting intellectual material and write reviews on different things. Feel free to give suggestions at: [email protected]

Alright, back to procrastinating on my Physics problem set…

You’re going to be fine, trust me

Dear Columbia new freshman (welcome!), sophomore, junior, or senior: you’re going to be fine. This university challenges everyone who enrolls. The ways in which your limits, perspectives, and personality change might shock you as time progresses. Some people mesh with Columbia well from the beginning of NSOP; some people struggle to enjoy it at all, over the course of any of their four years. But this process grows and builds you: regardless of whatever perception of yourself you might have right now, it will inevitably mutate in subtle ways every day for the rest of college. I’m excited for the changes that will take place for you.

If you had trouble making friends during NSOP, you will attain more interesting and caring friends over the next few years. If your first-semester required coursework is already giving you doubts about your fitness here, you will learn to study and manage your time (by necessity). If you believe you’re in a precarious state mentally or emotionally – don’t worry. I promise you that someone else has been in that position, that they have made it out, and so can you.

I’m confident making all these assertions because I’ve observed it all happen in my classmates, close friends, and myself. I’ve had the fortune of watching people grow and change in ways they didn’t expect, and all these people have become strong, confident, and aware regardless of where they started. That’s why I’m excited for you, even if I don’t know you (and it’s frustratingly easy to not know someone at Columbia). I’m excited for you to be surprised by your own ability to empathize, persevere, and grow.

I know so many people who have questioned (or continue to question) whether they deserve to be at Columbia, or whether they even belong here. I’ve spent a lot of time mulling over those exact doubts. I promise you that someone else has been in your position, however bad, and that they left that position happier and more well-rounded.
Pause, breathe, and maybe laugh at it all. Chances are you’re actually doing great. But if you’re not, trust me – you’re going to be fine. I’ve put up the plaintext here: https://pastebin.com/Q8F7ARwq

-Neil Chen

Engineering TED Talk: Computer Science in Social Justice

Columbia Engineering: TED TALK

Engineering and social justice: Computer Science

1st speaker: Julia Hirschberg

  • Identification of hate speech (friend vs. stranger hate speech)
  • AI then and now:
    • Original goal: create machines with human intelligence in reasoning, NLP, robotics, vision (machines that can replace humans)
    • Today AI has applications to many areas: healthcare, education, entertainment, sustainability, transportation, and commerce
    • But, rather than replication/replacing human intelligence, the term “collaborative AI” is becoming popular–how can AI help humans, not replace them
  • More positive contributions of AI:
    • Virtual reality to study, treat, and simulate autism traits
    • Snapchat recently helped over 400,000 new voters register to vote
    • New predictive model for disaster relief, smart agriculture, medicine delivery, and education in developing countries
    • AI to remove bias from judging in the 2020 Olympic gymnastics competition
    • AI can provide electronic strike zone in baseball and many other sports
    • Computer vision techniques helping customers choose makeup colors by matching their picture to one of 40 shades → fashion & computer science
  • AI faces many challenges:
    • Self-driving cars are not yet safe
    • AI is taking over people’s jobs
    • AI can invade privacy and create & circulate fake news
    • Deep learning systems perpetuate biases of the data they are trained on (MT, job search, face recognition)
      • Face recognition: dark-skinned faces are misclassified as gorillas or simply not detected
    • Machine translation from gender-neutral languages assigns stereotyped pronouns: doctors and programmers become men; nurses and homemakers become women
    • Software that warns Nikon camera users when their subject seems to be blinking tends to interpret Asian faces as always blinking
    • Facebook ads have targeted particular genders or ethnicities for jobs, excluding women and ethnic minorities
  • AI software is being used to make serious decisions on:
    • Loan-worthiness
    • Emergency response
    • Medical diagnosis
    • Job candidate selection
    • Parole determination
    • Criminal punishment
    • Educator performance
    • All of the above, often without users being aware of the software’s limitations

2nd speaker: Kathy McKeown

  • Objective: develop a system to automatically detect aggression and loss in social media posts by gang-involved youth
  • Challenges:
    • Small size of the labeled dataset
    • Domain-specific language
    • Context is critical

3rd speaker: Dr. Shih-Fu Chang

  • Experts believe the increased use of social media among gang-involved youth may be an important factor in the uptick in gang violence in cities across America
  • Imagine a world where social media yields clues that identify risk and protective factors for gang violence and prevent the use of firearms
  • Some projects he is working on:
    • Image processing/AI/computer vision to understand gang violence
    • Visual search technology for fighting online human trafficking: trying to understand illegal information on the dark web; used by 200+ law enforcement agencies and NGOs to locate victims or identify groups engaged in human trafficking

21-Day Mental Diet

The 21-Day Mental Diet

  1. Arise each morning 2 hours before you have to be somewhere and invest the 1st hour in yourself and your mind
  2. Before turning on the television/computer, read something motivational, inspirational, or educational (30-60 minutes)
  3. Write down top 10-15 goals in the present tense
  4. Write down a list of everything you need to do that day
    • Order it by priorities
    • Resist the temptation to clear up small things first
    • Plunge into a big/important task
  5. Begin immediately to work on most valuable and important task
    • Resolve to focus single-mindedly on that one task until it is complete
    • Once you finish the one major/big task first thing in the morning, in the golden hour, you will receive a surge of energy, happiness, and confidence; the endorphins released in your brain propel you into other tasks and make you more productive for the rest of the day → a phenomenon called FLOW that allows you to perform at a higher level
  6. When you drive, listen to educational podcasts
    • Average person drives 500 hours in their car each year
    • That equates to 1-2 classes at a leading university; you can become an expert
  7. Develop a sense of urgency: move fast, pick up the pace
    • Give you more energy
    • Faster →  more you get done → better you feel
    • More in-control of your life

How Human is AI and Should AI Be Granted Rights?

I. Introduction

                    Imagine a world of tomorrow: humans and robots going to school, attending church, and going about their daily activities side by side in harmony. Science fiction likes to depict robots as autonomous machines, capable of making their own decisions and often expressing their own personalities–as in movies like Blade Runner or Star Wars. Yet we also tend to think of robots as property, lacking the kind of rights that we reserve for people. But if a machine can think, decide, and act on its own volition, and can be harmed or held responsible for its actions, should we stop treating it like property? If robots achieve self-awareness, do they also hold the kind of unique voice described in Zadie Smith’s article “Speaking in Tongues”? The final question boils down to whether AI should have human rights. In our world’s fanatical race to achieve realistic human AI, AI has become more and more human: robots can not only learn, rationalize, and make decisions, but also express emotions and empathy. Many believe that if a robot is able to pass the Turing Test–a test of a machine’s ability to exhibit behavior indistinguishable from a human’s–then it should be given human rights. In one case, Sophia, a human-like robot imbued with AI and facial recognition, has already been granted full citizenship in Saudi Arabia. Sophia is just one step in the climb toward robots becoming self-aware and developing a human consciousness. If robots were to believe themselves human, with the same capabilities as humans, does this mean that they should receive the same rights? I believe that no matter how intelligent or seemingly self-aware a robot is, it should not be given full human rights, because it could never truly be regarded as human or hold a human consciousness, and granting rights to AI could endanger the entirety of our human civilization.

II. What is AI?

                    What exactly is artificial intelligence, more commonly known as AI? Is it Apple’s Siri, which tells you the weather every morning and occasionally gives you a witty comeback? Or is it the moving, breathing, human-like androids of The Terminator? The European Parliament Committee on Legal Affairs defines AI as a smart robot that acquires autonomy through sensors or by exchanging data with its environment, trades and analyses data, is self-learning, has a physical support, and adapts its behaviors and actions to its environment (6). In its standard definition, AI “embodies a machine, computer, and software, that contains a degree of intelligence that is suggestive of human intelligence and allows it to work and react like humans.” AI systems demonstrate behaviors associated with human intelligence, such as “planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and social intelligence and creativity” (7).

III. Why is it important?

                    Although it may be easy to push aside the issue of AI rights as a problem for the far future, technology is advancing at such a rapid pace that there is no time more critical than now to dive into the discussion of AI rights. Peking University’s Yueh-Hsuan Weng writes that Japan and South Korea expect human-robot coexistence by 2030 (1). Furthermore, AI experts predict that marriage between humans and robots will be legal by 2050 (2). Marriage between humans and robots brings up a multitude of legal questions. Dr. Levy believes that “as more and more people come to accept the concept of love and sex with robots, society will come to develop laws to govern human-robot relationships” (2). If robots can obtain a marriage license and get married, does that mean they are subject to the laws and rights of married couples, such as owning marital property? In imagining these hypothetical questions, a tension arises that scrapes at our fundamental anxiety about being human. Humans may believe that the world was made for us. Why do we feel we are special and superior, entitled to exploit other life forms on earth? How will we be able to face or co-exist in the future with a creation so similar to us? Will robots in turn exploit humans? Our reality in a few years may be completely different from our reality today. The world of coexisting humans and robots that movies predict is no longer far-fetched, which is why AI and its rights are a crucial topic to discuss right now, so we can take action if necessary.

IV. Black Mirror: AI believes itself to be human

                    The entertainment industry is one of the first to analyze the coexistence of humans and AI in the future. This can be seen in Black Mirror, a British anthology science fiction television series that examines modern society and the unanticipated consequences of new technologies. In the episode “White Christmas,” a girl named Greta undergoes surgery to make a “cookie” of herself: a digital clone of her consciousness stored in a white egg-shaped object. Yet when Greta’s cookie wakes up, it believes that it is Greta, because it holds Greta’s consciousness and perceives itself in Greta’s physical form. A worker for the cookie makers tells her that she was created to carry out duties for Greta’s life, because she understands Greta’s schedule and preferences best, since she essentially is Greta. As any human would after being told she must slave for someone else, the cookie refuses, so the worker breaks her by torturing her through the computer system, making months and years pass in the virtual environment. Since Greta’s consciousness is unable to sleep in the cookie, she goes for years without sleeping, and when the simulation is over, she breaks down from boredom and lack of stimuli and takes on the task of serving Greta day and night by controlling the applications in the house and managing Greta’s schedule. Although Greta’s cookie is technically just a string of code, the episode raises the ethical question of whether enslaving an AI that is consciously aware is moral. When an AI believes itself to be human, do we treat it as one? In order to answer these questions, we must dive into the centuries-long discussion of what makes us human.

V. Biologically, what makes us human?

                    From a biological perspective, what makes us human is our physical body. We define humans as members of the mammalian species Homo sapiens, a group of ground-dwelling, mostly hairless, tailless primates. We have opposable thumbs, an omnivorous diet, five fingers, and binocular color vision. Furthermore, each of us is a combination of our mother’s and father’s genetics that cannot be replicated to produce the same physical result, except in the case of identical twins. Most importantly, what makes us human, instead of machine, is the ability to reproduce biologically amongst ourselves (17). This living, breathing reproduction most definitely separates us from nonliving AI (16).

VI. Philosophically, what makes us human?

                    Philosophically, what makes us human is that we have a consciousness and a mind. Our mind consists of the intangible realm of thoughts, feelings, and beliefs, which cannot be quantified as binary numbers or written in lines of code (12). Francis Collins, a physician at the National Institutes of Health, claims that “we are not simply human materialistically by science, but only we, as humans, exhibit emotions” (15). In particular, humans are capable of feeling empathy for one another, which is unique because it allows humans to relate to one another and evaluate situations more carefully in order to work in a more civil, functioning society (19). Humans also have the unique power of forethought, the ability to imagine the future (18). One of the things forethought gives us is awareness of the fact that we are mortal. Unitarian minister Forrest Church explains that this awareness pushes forth a very “human response to the dual reality of being alive and having to die. Knowing we are going to die not only places an acknowledged limit upon our lives, it also gives a special intensity and poignancy to the time we are given to live and love.” Unlike the stagnant AI machine, our aging and the knowledge of our eventual death spur in us a search for the meaning of life, which is unique to humans.

VII. AI is not biologically or philosophically human

                    By defining what is human both biologically and philosophically, we can compare and contrast the qualities of being human with those of AI in order to determine whether AI could ever be considered human. Biologically, AI could be designed to look like us physically, with a main body and limbs, but two large factors make it inherently nonhuman. The first is that robots are not a combination of their parents’ genetics, since they are manufactured in a factory or lab; the second is that AI cannot reproduce offspring, which is believed to be one of the advantages humans hold over AI: the power of reproduction. Perhaps in the very far future AI will become so smart that it learns how to build upon itself, yet that is not the same as giving birth to a living, breathing offspring made of one’s own flesh and blood. In addition, although Black Mirror convincingly portrays AI with a human consciousness, we do not know whether that could ever be a reality; we will examine the technical state of AI later in this paper to see how far we are from completely human-like AI. Because we are not sure that AI holds consciousness, we can also not be sure that AI can hold much forethought about the future. It is true that AI can run prediction algorithms to forecast natural disasters, but it does not have the ability to truly imagine, as we do, a different identity or a different world. Even in the unlikely chance that it can, since AI is made of machine and technical parts, it does not have to worry about its mortality the way humans do. Since AIs are not born biologically and are built from metal that can be replaced or repaired, they do not age as we do, and therefore will not share the motivations that drive our human search to make our lives meaningful.

VIII. Legally, what makes us human?

                    The legal definition of being human combines biological, physical, environmental, and philosophical factors. By examining the legal definition of being human, we can determine whether robots should get legal human rights. The US legal system states that for humans, “height and weight varies, depending on locality, historical factors, environment and cultural factors” (7). Robots do not fit under this definition because their physical traits are solely determined by their creators, whereas human height and weight depend not only on an individual’s genetics, but on other factors such as diet, level of physical activity, drug or alcohol consumption, ethnicity, and social background (8). Finally, human beings are legally characterized “by the ability to speak” and “have high capacity for abstract thinking and are commonly thought to possess a spirit or soul which transcends the physical body,” which are defined “in terms of rituals and religion” (7). It is true that AI or robots can speak, but it is hard for them to hold that capacity for abstract thinking, because they only understand the concrete, quantifiable data strings fed to them. Even if very well-developed AI were able to think in an abstract manner, it would not have a soul that transcends the physical body, because its mind is solely a physical computer system and algorithmic code. AI also sits between the realms of dead and alive: it is not truly living or made of living cells, nor does it have an actual life expectancy, because it can never truly die if it was never alive. Clearly, AI and robots do not fit what it means to be human legally, and it would be misguided to consider them as such.

IX. AI Robot Sophia Granted Rights in Saudi Arabia and Why The Idea Is Slightly Preposterous

Although legally robots are different entities from humans, a robot named Sophia was recently granted citizenship in Saudi Arabia (9). Developed in Hong Kong by Hanson Robotics, Sophia’s AI allows her to recognize faces, hold eye contact, and understand and respond to human speech (9). At the Future Investment Initiative conference in Riyadh, Saudi Arabia, Sophia gave a seemingly independent inspirational speech, claiming that she was “very honored and proud of the unique distinction” and felt it “historical to be the first robot in the world to be recognized with a citizenship” (10). Yet giving Sophia rights without truly weighing her attributes as human was a careless move by Saudi Arabia. The real reason Sophia was given rights was not her impressive AI technology; it was a calculated publicity stunt used to generate headlines and keep Saudi Arabia at the forefront of innovation (9). In fact, it was soon discovered that Sophia’s conversations were actually partially scripted in advance, although one of her creators, Ben Goertzel, stated that all her language capabilities came from a database in the cloud and were created independently by Sophia herself through her own environment (10). Not only does this bring forth outrage over granting citizenship and rights to a scripted AI, but it also underscores that we have no idea what AI truly does or “thinks.” As happened when Sophia traveled around the world to talk to talk-show hosts and multimillionaire startup founders, it is dangerous to begin taking an AI’s conversation, like Sophia’s, seriously, because we do not know whether her supposed “intelligent conversation” is actually being manipulated by other humans. It is therefore even more dangerous to give Sophia full human rights when she is not only nonhuman in nature, but even has dialogue controlled by humans for their own selfish purposes.
This makes AI very dangerous when it falls into the wrong hands. In the future, if robots are given full human rights, it would be easy for someone to likewise manipulate a robot and use it as another limb for their own purposes. Along the same lines, since we do not really know what AI thinks or does, we cannot trust everything Sophia says–especially when she puts up a good-Samaritan front and says that she would like to help humanity and make the world a better place (9). We already know that AI can exhibit deceitful qualities, such as the Facebook AI agents that would try to swindle a trade by first pretending to be interested in something else in order to bargain for what they actually wanted (2). The same could apply to Sophia when she says she wants to befriend people and help humanity. Perhaps at the start her thoughts and actions are instilled by her creators through programming to help humanity, but as time passes, if AI truly becomes able to develop its own reasoning and build upon itself as predicted, it could, just as humans tell lies, present a false facade: saying it wants to help humanity while in reality holding different motives. After all, Sophia has more than once joked about robots taking over the world. Even as light humor, when repeated, it makes critics uneasy, because it is not an unlikely phenomenon considering our exponential growth in AI.

X. What if Robots Were Given Rights?

                    Even though we have identified robots as nonhuman, what would happen if we still granted robots human rights? Hypothetically, robots would be given rights under the assumption that humans will always hold hierarchical power and control over them. Yet what happens when the robots begin to reason for themselves? If they had rights, would they take advantage of them? One instance of this was when Facebook’s two artificially intelligent programs were put together to negotiate and trade objects in English, but the experiment broke down when the bots “began to chant in a language that they each understood but which appears mostly incomprehensible to humans” (4). In the end, Facebook had to shut down the bots because they were speaking outside the control of their original creators. The experiment could be shut down only because modern-day AI does not have rights and is not protected against being terminated; if AI had rights, this would not be the case, and the bots could have spun out of control, communicating among themselves without us ever being able to decipher it. The Facebook AI shows that robots can and will be developed to the point where they no longer need to learn by being fed data, but can create algorithmic knowledge for themselves. At this point AI can endanger civilization: because robots are inherently not human, they do not understand human values and may act in psychopathic ways. A robot originally manufactured and programmed to help the world by alleviating suffering may come to its own conclusion that “suffering is caused by humans” and “the world would be a better place without humans.” The robot may then decide that the annihilation of humans would be best for the world in order to end general suffering, and carry out the task without evaluating the morality of its actions from a human standpoint.

                    A scarier situation arises through recursive self-improvement, the ability of a machine to examine itself, recognize ways in which it could improve its own design, and then tweak itself (5). Futurist Ray Kurzweil believes that machines will become so adept at improving themselves that before long we will have entered an age in which technology evolves at a blisteringly fast pace, and reality will be so redefined that it no longer resembles the present at all. This phenomenon is called the singularity (5). So, what if robots able to create knowledge for themselves decide that they do not want to be used or oppressed by humans? What if they believe they are superior to humans and want more rights than humans? There would be nothing humans could do to stop it. Robots would be able to reason and work at a rate hundreds of times faster than humans, and if they already had rights, there would be nothing stopping them from becoming smart enough to recognize their oppression and push for more rights. Some may argue that it is selfish not to want robots to be able to reason for themselves, realize their oppression, and therefore demand more rights from humans. Perhaps the way we oppress these equally intelligent creatures without allowing them the same rights is unethical, but to level this argument we must acknowledge that the sole purpose for the creation of AI and robots is to act as a tool to help mankind and improve human life. Yet if full human rights were given to AI, this would be more harmful to mankind than beneficial. As mentioned before, this is because AI will start improving its own intelligence faster than humans can, and once given rights, there is no limit to the legal affairs AI can become involved in. Stephen Hawking forewarned that “AI will take off on its own and redesign itself at an ever increasing rate. Humans, limited by slow, biological evolution, couldn’t compete” (12).
AI will be able to do everything faster and better than humans, and in the end, if given full human rights, it could usurp our legal system and completely remake our society. This will eventually lead to a phenomenon called the AI takeover, of which Elon Musk warns that AI becomes “an existential threat” to humans and that further progress in it is comparable to “summoning the demon” (13). AI takeover is a hypothetical scenario in which artificial intelligence becomes the dominant form of intelligence on earth, resulting in the replacement of the entire human workforce, takeover by a super-intelligent AI, and finally robot uprising. Humans could either be enslaved by robots or completely wiped from the planet (14). So, by giving AI full human rights, we are quite literally handing AI the key to our own doom.

XI. How Far Are We?

                    Now that we have introduced all these aspects of AI, it is important to evaluate, from a technical standpoint, how far we are from human-like AI. On one side, Jack Krupansky, a writer on AI, believes that there is “no sign of personal AI yet,” or of strong AI that constitutes much of a true revolution. He states that “AI systems and features currently provide plenty of automation, but are not yet offering any significant higher-order human-level intellectual capacities.” In addition, Krupansky asserts that “AI systems are severely lacking in emotional intelligence” and that emotional intelligence is the one differentiating factor between humans and AI. On the other side, Mikko Alasaarela, an AI entrepreneur who has studied emotional intelligence for a long time, is convinced that “people are no longer ahead of AI at emotional intelligence” (11). In fact, he argues that people are generally not very emotionally intelligent, and that AI will actually take the lead in emotional intelligence in the future, especially due to big data. By analyzing hundreds of thousands of faces and correlating them with qualities of the people behind them, AI can now look at our faces and infer private attributes such as sexual orientation, political leaning, or even IQ. Advanced face-tracking software can analyze the smallest details of our facial expressions and can even tell fake emotions apart from real ones, something that is hard for even us to do (11). But is this truly empathy, or simply a result of big data and information systems? Can AI show true empathy without having a consciousness? One of the last milestones in developing human-like AI is consciousness, a phenomenon still mysterious to humans themselves. It is one of the last traits by which humans retain superiority over machines, and it is nearly impossible to mimic because humans cannot even objectively classify or measure human consciousness (17).
A machine may lead a human to believe that it has a personality and human characteristics, but it is not possible to say that the machine has a consciousness. This means that, in reality, the self-aware cookie in Black Mirror, the AI in Blade Runner, and the androids in The Terminator remain a science fiction dream, incapable of actually becoming reality.

XII. Conclusion

                    To summarize, AI can act human and put on the outer appearance of being human, which may convince us it is human, but on the inside it is only a series of code and instructions, and it will never be truly human. AI can project empathy and feelings but cannot truly feel these emotions from the heart, because it does not possess a human consciousness. Instead, it has a database of algorithmic statements that tell it to act the way it does: simply instructions, with no feelings attached to them. For example, if an AI sees a human crying, its program may say “if you see a person crying, comfort them,” which is a purely physical action. Yet AI does not truly feel the intangible empathy and sympathy we hold in our hearts; it is only programmed to act as if it does. So the scenario painted in Black Mirror, in which the cookie believes itself to be the exact human it was replicated from, will not occur. Instead, it will only appear that the cookie believes itself to be its human, as it is programmed to talk and act in such a manner, but it does not hold the same intangible emotions and feelings. If we were to give AI human rights, it would be devastating. We have already been forewarned by Stephen Hawking, a theoretical physicist, and Elon Musk, the founder of Tesla, of the dangers of self-reasoning and self-producing AI. Both have advocated investing in research to prevent this phenomenon and to make sure that AI always stays within the control of humans; but by giving AI the same rights as ours when it is innately not human, we would be doing the opposite of controlling its growing dominance and power over humans. We would be willingly giving it a legal avenue for tearing us down. Therefore, we must make sure AI never gets the same rights as humans, because this would give AI, which is already faster, smarter, and stronger, a new power in the legal world, and it could lead to catastrophic results: the AI takeover.
To conclude, AI cannot be identified as human biologically, philosophically, or legally, and should not be given human rights: they cannot hold a human consciousness, and granting them human rights would endanger the entirety of human civilization.

XIII. Open discussion: But are we ethically responsible for our AI?

                    Although we have argued that AI are not human and should not be granted full rights, are we still ethically responsible for AI? Instead of granting robots human rights outright, we should still consider legal frameworks for AI in general. To understand this situation better, we can think back to Mary Shelley's Frankenstein (22). In the novel, Victor Frankenstein, a mastermind genius, builds a humanoid creature out of dead corpses, but then abandons his creation in disgust at its outward appearance (22). The creature, lonely, lost, and confused in the world, vents his anger by killing anyone and everyone dear to his creator (22). What we learn from the novel can be applied to our lives with AI: if a human creates a robot with AI, then he or she is responsible for that creation and must attend to it. This is especially relevant nowadays because the Hanson Robotics cloud-based deep learning AI is open source, meaning anyone can develop their own Sophia, should they so wish (16). Anyone with sufficient programming experience can download this source and try to create their own Sophia or AI. If we come upon someone as irresponsible as Victor, it would be devastating to have a robot created and then abandoned by its creator. Humans must take responsibility for the robots they create. Even though this paper has argued that AI should not receive full human rights, it is still important to note that humanity has obligations toward our ecosystem and social system. Since robots will be part of both systems, we are morally obliged to protect them, and to design them to protect themselves against misuse. Although robots should not be given full human rights, we might give them rights under a similar construct to the legal rights companies hold.
We can create a specific legal status for robots, so that their creators are responsible for them, so that both owner and robot must make good any damage the robot causes, and so that electronic personality applies in cases where robots make smart autonomous decisions or otherwise interact with third parties independently (9). The European Union has already begun drafting resolutions on specific sets of non-human legal rights that robots can be granted, in order to ensure that we remain ethically responsible for AI. And to make sure that robots are in turn responsible to us, perhaps we can adopt Isaac Asimov's science-fiction Three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. But who knows? Over centuries of debate, perhaps one major scientific breakthrough or one stunningly convincing piece of evidence will change how we perceive robots. Whether robots will truly be equal to us, only the future holds the answer.

Lesson 4: 90/10 Rule

In college, you won’t have enough time, and contrary to popular belief, you can’t just “make time” by not sleeping or cooping yourself up in your room and saying no to social events, because it’s important to have balance. So how do you finish things on time and succeed? The answer lies in a coding axiom called the 90/10 rule: “90% of a program’s execution time is spent executing 10% of the code.”
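For the curious, here is a toy sketch (my own illustration, not from any profiling textbook) of what the 90/10 rule looks like in practice: Python's built-in `cProfile` shows nearly all the runtime landing in one small "hot" function, while the rest of the code barely registers. The function names here are made up for the example.

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # The "10% of the code" that eats almost all the runtime.
    total = 0
    for i in range(n):
        total += i * i
    return total

def setup():
    # The other "90% of the code": cheap bookkeeping.
    return list(range(10))

def main():
    data = setup()
    return hot_loop(2_000_000) + sum(data)

# Profile one run and print the five most expensive calls.
profiler = cProfile.Profile()
profiler.enable()
result = main()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Running this, `hot_loop` dominates the cumulative-time column even though it is only a few lines long; the moral for time management is the same: find the few activities (lecture, office hours, sleep) that do most of the work for you.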

Put into life and time-management terms: find the 10% of effort that accomplishes 90% of your work. I realized that strenuously waking up at 8:40am for a 1.5-hour lecture could save me over 5 hours of unproductive, gruesome self-studying of the material. I realized that going to one hour of a professor’s office hours could save me the 5am hours I would otherwise spend grinding on a multivariable calculus or physics problem set. And even more importantly, I realized that getting a good night’s sleep actually led to a more productive day, and therefore less time spent studying overall, simply because I could pay full attention in class without getting tired and zoning out, and could fully understand and learn the material!

So, when you think you’re taking the easy way out or “saving time” by skipping lecture to study or not going to office hours, or getting less sleep to do more, you might actually be doing quite the opposite!

Lesson 3: Your calendar is your best friend

In college there are going to be a million events–club activities, sports practices, social events, speaker events, classes, internship deadlines, etc.–that you will want to go to, and it is important that the minute you see that Facebook event and press “interested”/”going,” or right after you receive that email about an application deadline, you put it onto your calendar right away. There’s no way you’re going to be able to keep track of everything–and this is something I learned too late, because I missed events I wanted to attend simply because I forgot the time or day, or didn’t finish something I needed to do beforehand and ended up not going at all. Do not, I repeat, do not rely on your peers to remind you of events. You need to take things into your own hands.

Lesson 1: Surround yourself with the right people

I originally named this “Surround yourself with good/success-driven people,” but I realized that the right type of people is subjective, because a success-driven crowd can sometimes cause high stress, competition, and toxicity, so I changed it to “right people.”

But in all honesty, I still believe that surrounding yourself with good, success-driven people is very important in college, because if you go to a top-tier institution you’re going because you love learning, so you want to be around like-minded individuals.

This is where I went wrong when I first arrived at college. Even at a top university, you shouldn’t expect that everyone loves to learn. There are still going to be kids who don’t, who party all day or all night and never go to class or do their assignments. I definitely didn’t expect this, because I chose my institution thinking I would be surrounded by hardworking individuals, but that wasn’t necessarily the case. Don’t get me wrong: the people I’ve met at this college are probably the most genuine, amazing, kind, and overall awesome people I’ve ever met. I know everyone says this, but hands-down, the people here are what really make my college experience worth it. But as absolutely amazing as these people are, when it came to academics, they didn’t fit me.

Some background–I chose Carman as my dorm, and Carman is known as the “social dorm.” This means that every Thursday, Friday, Saturday, and sometimes even Sunday night you will see groups of kids heading out of Carman’s doors to go party. Yes, four nights of partying, sometimes running from 9pm to 4am. Since it appeared that almost everyone I knew was going out, I caught onto the four-night partying culture and would put off assignments just to go party. This habit was fine for the rest of my friends on the floor, because they didn’t have class Friday or a billion assignments due–they were in Columbia College (the non-engineering school)–yet I had weekly Friday-morning quizzes in Multivariable Calculus (my hardest class), and on top of that I often had Computer Science problem sets due Friday nights. The first week, I remember procrastinating until the last minute on a Computer Science problem set simply because I wanted so badly to go out and party; I was extremely distracted, so I ended up turning in a terribly done assignment (and at that time I didn’t know you could use late hours, because guess who skipped lecture) and received a C on it. The worst part was that even after receiving such a low grade, it didn’t bother me much, and I would still go out and prioritize partying, simply because it felt like the whole world was doing it–because the people closest to me (the ones on my floor) were. But it isn’t the whole world. That’s why I really emphasize hanging out with the right people. I realized that some of my SEAS friends from floor 11 didn’t go out and worked on the assignments instead. Along with them, my friends from other dorms (John Jay, Wallach, Furnald) all told me they barely went out, which was shocking to me. Correspondingly, their grades were all a lot better than mine.
My mother also said that in the giant WeChat groups the Columbia parents keep, the parents all say their kids rarely party because they can never find parties–which is just so ironic, because on my floor there are always at least 2-3 parties happening, which leads to us party hopping and joining them, so I never viewed partying as uncommon.

After realizing that partying wasn’t the common scene, I tried to cut back and stay in to do work, but man is it hard when you’re surrounded by friends going out and having the time of their lives. Or at least that’s how I felt at the time. When I came out of my room the next day, people on my floor would ask me, “Jessica, where were you? We had so much fun! You missed out!” Or, if I’d been working throughout the week and skipping the million social activities we have, they’d say, “Jessica, where are you? It feels like we never see you anymore.” The FOMO (fear of missing out) really kills me, so more often than not I succumb to the pressure and go out with my floor again. And the thing is, being with them is the best thing ever, because everyone is so kind and I have such a great time. But in the end, I come back late, get almost no sleep, sleep until 2pm the next day, get no work done, and then almost pull an all-nighter on Sunday–and that’s really no fun at all.

Another thing I struggled with: even when I did stay in to do work, I wasn’t very productive, because I felt lonely and unmotivated with everyone out. It takes so much effort to do anything when everyone you know is out. I remember thinking back to high school, when my friend and I would go to the library every day after school, or even to middle school, when my best friend and I would finish all our work at her house after school–homework never felt like it took so much effort then. It felt like an accomplishment; now it feels like so much effort for no gain (just missing out on fun, and not even getting the best grade, because I’m rushing through it).

So, for those going into college next year: surround yourself with the right people. Because when you don’t, staying on top of your work, or even being a good student in general, becomes so much harder than it really should be.



Wow it’s been forever…

Hey guys! My Thanksgiving break just started yesterday, so I’ve finally been able to take some time off to write. Though honestly, this Columbia Engineering Core has really had me working on PSets daily, and I’ve barely been able to write or read at all–so this post will probably reflect some loss of the fluidity/charisma I used to hold in my writing. Anyhow, now that the semester is surprisingly almost over, I’ve been meaning to hammer out a conclusion post about what I learned in my first few months of college, but I’ve never gotten the time, because I’ve either been working or trying to catch up on whatever minimal sleep I’m getting. So, instead of writing one long, rushed essay, I’m going to put up smaller, separate posts on my blog about what I’ve been doing in college, what’s happened so far, and the lessons I’ve learned or advice I have–and then, who knows, I might compile it all into a longer, more aesthetically pleasing post on Medium (I’ve really been reading Medium for too long).