Since I don’t have anything crypto on my mind right now to write about, I will post an essay I wrote 4 years ago during my freshman year about Artificial Intelligence. It was a critical inquiry class and my amazing teacher had us write about some Black Mirror, Twilight Zone, and other sci-fi content. Here’s what I had to say 😂 :
The Dangers of and Safety Precautions for Artificial Intelligence
Joseph Kelly
SUNY University at Albany
Abstract
My inquiry is based on the Black Mirror episode “White Christmas,” which dives deep into the advancements of technology and artificial intelligence. I saw many implications in this new technology and was curious about it. The questions and ideas behind making an artificial program existed earlier, but artificial intelligence began gaining popularity around the 1950s, with milestones such as the Turing Test and the Dartmouth Conference of 1956. Artificial intelligence is used all around us in our everyday lives, from processing large amounts of data for businesses to simply saying “Hey Siri” and being able to know or do almost anything on your phone. AI is a gigantic part of our lives now and will only become a bigger one, as our society keeps moving in the direction of advanced technology. The possibilities of what AI and machines are capable of are scary, and they could turn into a real-life sci-fi horror film in which machines become too powerful and knowledgeable and bring about the end of human life. That is why I believe this topic is so important: it is the now and the future, it is succeeding, and if we don’t go about it properly, it could easily be one of the last things we achieve in this world.
The Dangers of and Safety Precautions for Artificial Intelligence
Artificial Intelligence (AI), in a basic explanation, is programmed technology that can help humans with anything from day-to-day processes to big-picture plans for a business. With new research coming in daily on this relatively new subject, there are many areas of debate as to how to implement AI programs and move forward with them. The main issue regarding AI in the future is its tremendous capability and capacity for learning. The possibilities seem endless, and it is for that reason that people held in high regard on the subject, such as Utku Köse, are warning about how we advance with this technology. It is important that we ensure that what we are doing now, or aren’t doing now, won’t spell the end of humanity in the future. There are many possible dangers and risks associated with the advancement of AI, including the ethics, morals, and rights of these systems, their abilities, and how they are implemented (Bostrom, 2014). Along with these risks and dangers come solutions and strategies to ensure our safety. This article presents the research I found when looking into the dangers of AI and some strategies we can use to keep ourselves safe from it.
Literature Review
The world of artificial intelligence is a confusing yet fascinating one if you know where to look. It’s easy to get lost in the uses and impacts that artificial intelligence has on our everyday society, but, as previously stated, it can be one of the most interesting things to learn about nowadays. AI could be the very best thing, or the very worst thing, that ever happens to humanity. I analyzed two articles on this topic, that is, on the positive and negative effects associated with the advancement of this technology. Some recurring questions pop up when trying to learn about AI and its possible impacts: What could go right or wrong as AI advances? How could we let something go so wrong in the first place? And in what ways can we ensure these bad outcomes won’t happen? I found myself wondering about these questions many times throughout my research, and they encompass the thoughts brought on by the sources I examined during my inquiry.
Example #1
The first example I will take a look at is a 2018 article by Salvador Pueyo titled “Growth, degrowth, and the challenge of artificial superintelligence.” This article gives insight into the possibilities of AI in the future, including its growth and challenges. Pueyo focuses on the superintelligence side of AI, meaning AI that can outperform humans in essentially every domain.
According to Pueyo, the emergence of superintelligence could lead to two major outcomes. Assuming we reach the point in technology where superintelligence exists, it will be either hostile or friendly, and which one we get is decided simply by how we program these AIs (Pueyo, 2018). AIs are meant to do things as efficiently and productively as possible, so giving unclear goals to a machine like this can easily skew the outcome, possibly into a catastrophic one.
One aspect of life that Pueyo focuses on, in regard to what could go wrong as AI advances, is the economic side. The author speaks heavily of neoliberalism, an ideology tied to free-market capitalism; in basic terms, it holds that things work best when everyone keeps their head down and does their own thing. In regard to governments and the policies they create, this would mean the government stays largely uninvolved in the markets (Pueyo, 2018). Pueyo suggests that these neoliberal principles could spell tragedy in many ways if we decide against proper regulation and supervision of such advancing technology (Pueyo, 2018). This idea offers possible answers to the questions of what could go wrong and how we could let it happen.
Example #2
The second representative example in my inquiry is an article by Utku Köse, published in 2018, titled “Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety.” From the title alone, it’s easy to get a sense of what you’re diving into. The main subject at hand is the safety of the future of artificial intelligence. The title also signals a discussion of ethics, the debate over what is right and wrong, as it applies to machines.
Köse acknowledges that the issue of artificial intelligence safety is coming to the forefront of discussions all over the world. It is seen as increasingly important by a majority of people concerned with what we will see in the future. Through fictional dystopian examples and moral dilemmas, he shows that the safe and ethical implementation of new technology is extremely important, even vital, to our sustained future. Köse notes that concern has risen over the past decade or two as science-fiction-themed texts and films have become more popular (Köse, 2018). Movies such as The Terminator and series like Black Mirror depict a future or a dystopia in which machines or artificial intelligence outsmart humanity, with costly consequences. These works spark fear, curiosity, and interest in research on machine ethics and AI safety (Köse, 2018).
His section about machine ethics is an eye-opener; the topic sits at the forefront of questions surrounding artificial intelligence, especially given the popularity of science fiction. As Köse states, the hysteria surrounding machine ethics stems from developments happening now, in addition to advancements we will most likely see in the near future (Köse, 2018). He presents four main subjects in the discussion of machine ethics, and some very interesting questions arise from this section that are hard not to think about. What rights, if any, would machines and AI get? Should they be treated like humans if they become more intelligent and efficient than us? If they cause an accident or a death, are they or their developers responsible (Köse, 2018)? Going through these questions, I find myself questioning the validity and bias of my own reasoning, as I’m already referring to machines and AI as ‘they.’ This could suggest that I prematurely believe these technologies will end up gaining rights similar to humans’, and that I assume other intelligences will surpass us as a result.
In terms of safety from AI’s consequences, Köse gives a diagram outlining how we as humans can thrive alongside intelligent systems and other living organisms. He presents a three-circle Venn diagram, with humankind, other living organisms, and other intelligent systems such as AI making up the circles. Each circle shares an equal amount of space with each of the others, and all three overlap together (Köse, 2018). This is the logic we would have to follow to ensure that machines could not gain an advantage in power over the rest of the world, though living in such perfect harmony would be a difficult task. He also touches on the technological singularity, existential risks, possible social changes brought by artificial intelligence, and solutions to these problems. Köse does a fantastic job of explaining and putting into perspective the problems that could stem from advanced AI, and he offers information useful in examining the three major questions of this inquiry.
Conclusion
This inquiry has hopefully given some insight into the overall topic of artificial intelligence, the risks and dangers involved with AI use in the future, and some ways we might be able to stop a singularity of sorts from taking place. Something as simple as someone creating a cheap, easily copied human-like AI could cause mass unemployment among ordinary humans (Blackford & Broderick, 2014).
One might find it hard to devise solutions to the dangers that AI could generate, since we don’t know all that much to begin with. However, looking to the future and taking into account the predictions made by academics and intellectuals in the field, you can start to see that mostly simple precautions could save us from these dangers. Such measures could include ensuring that no one part of the planet gains too much power over the others. It could be as simple as making sure we code each AI correctly for its specific job.
It’s very easy to raise questions about such a seemingly out-of-this-world aspect of technology, since it seems so foreign and out of reach. However, with these questions come answers, and with those answers come new questions. What started as curiosity about what AI could do to us, or how we could solve the problem, turned into more articulate questions. What will be the basis for a code of ethics and morals for AI? What mistake in coding could cause a technological singularity? In a singularity, would AI turn humans into machines, the way we turned machines into something like ourselves? Or would we simply be taken care of because there would be no need for us? Could AI advancements made under extreme neoliberalism be the reason things get out of hand?
References
Blackford, R., & Broderick, D. (Eds.). (2014). Intelligence Unbound: The Future of Uploaded and Machine Minds. West Sussex, UK: John Wiley & Sons.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press.
Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety. BRAIN, 9(2), 184–197.
Pueyo, S. (2018). Growth, degrowth, and the challenge of artificial superintelligence. Journal of Cleaner Production, 197(2), 1731–1736. https://doi.org/10.1016/j.jclepro.2016.12.138
