In the previous chapters of this section, "Understanding Ourselves", I covered many topics related to the nature of humankind and how we interpret reality. I started by discussing the concept of destiny, which conveyed that we are able to make choices for a better future. After that, I looked more deeply into the nature of abstract thought, how we develop our belief systems, and our ultimate belief system: our perception of "reality". With an understanding of how each of us interprets the world, I emphasized the importance of science, and of reliable data, in interpreting physical reality accurately. Finally, I noted that each of us carries many "non-scientific" beliefs that also govern our choices and behaviour.
I have included the topic of Artificial Intelligence (AI) as the last chapter in the section called "Understanding Ourselves" because AI is a virtual reproduction of how human beings think. In many ways "Understanding AI" is not so different from "Understanding Ourselves". Artificial Intelligence is extremely powerful because it also uses science to interpret physical reality in ways similar to those humans use. AI can interpret data regarding the world just as we do, and create the results that it is programmed to achieve. AI requires accurate data regarding the physical universe just as we do. It also requires access to that data, and of course the use of that data, in order to achieve its programmed objectives, or consequences. AI is the ultimate reproduction of our own intelligence, except that it can be much more efficient. As we continue to improve our programming, expand our computer memory, and increase the speed of computing, AI could potentially become even more intelligent than the human mind.
In previous sections I mentioned various technological side effects such as nuclear weapons, climate change, and man-made viruses, but quite possibly the technological side effect with the greatest impact on the world is Artificial Intelligence. Many movies have dealt with the risks of AI, whether in the form of robots, weapons, or computers. Humankind creates them as technological tools and ultimately they either outlive human beings, or they become a type of life form of their own, capable of making their own decisions. As AI becomes more and more a part of our lives, along with its benefits there will be significant risks. Before discussing AI in detail, let's start with a definition given at Wikipedia.org:
"Artificial Intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term "Artificial Intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".[2]"
The above definition might seem a bit complex, so let me suggest a simpler approach to understanding AI. Consider an automobile. In a sense it is an entity on its own. It takes in gas or electricity for energy, then its machine parts begin to operate, and it ultimately completes its objective of driving down the road. Certainly it does not "perceive its environment and take action" as per the definition above. Consider, though, that some newer vehicles now have built-in computers that will adjust various settings in the engine if they deviate too far from the norm. Basically, the computer in the car says, "IF something THEN do this". This is exactly what we do when we make decisions. IF the engine runs rough, THEN adjust the fuel accordingly.
At this point we seem to be coming closer to the definition of AI. Going further, imagine that the computer has a sensor system so that if the car gets near an object, a bell and light go off to tell the driver there is a risk of hitting something. The sensor "perceives the environment" and the processor "takes action" in order to maximize its chance of avoiding an accident.
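The "IF something THEN do this" behaviour described above can be sketched in a few lines of code. This is only an illustrative sketch: the sensor names and threshold values are hypothetical, not taken from any real vehicle system.

```python
# Minimal sketch of a car's rule-based "IF something THEN do this" logic.
# The function names and threshold values are hypothetical illustrations.

def proximity_alert(distance_m: float, warning_distance_m: float = 1.0) -> bool:
    """IF an object is too close THEN trigger the bell and light."""
    return distance_m < warning_distance_m

def adjust_fuel(engine_roughness: float, max_roughness: float = 0.3) -> str:
    """IF the engine runs rough THEN adjust the fuel accordingly."""
    if engine_roughness > max_roughness:
        return "enrich fuel mixture"
    return "no adjustment"

print(proximity_alert(0.5))  # object at 0.5 m: warn the driver
print(adjust_fuel(0.4))      # rough-running engine: adjust the fuel
```

Each rule simply maps a perceived condition to a programmed action, which is exactly the simple cause-and-effect behaviour the car example describes.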
I want to put forward an example in Nature that I have always found quite intriguing, though it is neither AI nor NI. The Venus flytrap attracts and digests prey such as crickets, ants, small beetles, grasshoppers, spiders, slugs, and on occasion flies. A Venus flytrap has leaves that open wide, and on them are short, stiff hairs that act as triggers when anything touches them. The two leaves then snap shut within a second, trapping whatever is inside. If one were not aware in advance of how this process occurs, one might conclude that the Venus flytrap had some type of NI (natural intelligence). In reality the reaction is not much different from a flower blossoming open to receive the sunshine.
Unlike the Venus flytrap, NI and AI process the information from perceived stimuli through some type of neural processor (brain or computer). The Venus flytrap has no neural processor, such as a brain, that might interpret the data and create the desired reaction. Though it creates the desired consequence, the Venus flytrap is neither NI nor AI, but just a simple chemical and mechanical reaction. You might notice how similar this is to the car example above: until the car has a computer as a type of neural centre, the actions taken are basically just simple cause and effect.
What is amazing about NI (Natural Intelligence) and AI (Artificial Intelligence) is that they both enable an entity to respond and adapt to the environment. This is critical for the survival of living organisms.
Given a general understanding of AI, one of the biggest questions remains. Specifically, can a robot have a consciousness? Could a robot be so sophisticated that it could be conscious of its own existence, and make decisions related to its own survival? In previous sections I suggested that consciousness starts from the fact that we can "sense". One way to express this idea is: I sense, therefore I am, or I sense, therefore I have a consciousness of my existence.
Whether a robot might be able to have a sense of consciousness might seem to be the question, but one might also ask whether AI actually needs a sense of consciousness to be alive. Does it need to sense, for example, "hot" and "cold" the same way that we do, in order to be conscious of itself and the world around it? Is consciousness actually a requirement for it to make decisions that will ensure its survival? Does AI need an ego and a sense of its own existence to be alive?
I want to use an analogy to help understand this idea. On the right you can see an image of a triangle around which are displayed the colours of the spectrum. Not all organisms are able to see colour, but we humans can. We can see red flowers, the blue sky, and the yellow colour of the element sulfur. If any of us were asked whether the colours red, blue, yellow, or any colour in the spectrum were real, we would likely conclude that the colours we see do indeed exist. Now consider that colour is only a perception of ours, really just the result of different wavelengths of light. This is further outlined in this quote from Wikipedia:
"Color (American English) or colour (Commonwealth English) is the characteristic of human visual perception described through color categories, with names such as red, blue, yellow, green, orange, or purple. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the spectrum of light. Color categories and physical specifications of color are associated with objects through the wavelength of the light that is reflected from them."
In other words, colours are really one and the same thing: light waves at different frequencies. We might ask ourselves, in such a context, whether they really do exist. Now consider our senses of hot and cold. These "senses" are just different ways that we measure the temperature of an object. They tell us whether it might be acceptable to touch it or not. Like our perception of colour, our perceptions of "hot" and "cold", only two of our many possible senses, signal to us some type of meaning in whatever we perceive in the physical world. My suggestion is that, though we will always conclude that our senses exist, maybe they exist no more than the colours red, blue, or yellow exist. Maybe when we feel a "hot" stove and pull our hand away, it is no different from seeing a "red" rose and perceiving the rose to be just that. Is the rose really "red", and is a stove we might touch actually "hot"? This idea is very important in the context of AI, and I will now outline why.
Imagine first that a person places their hand on a stove that is not on. The person feels neither hot nor cold. The stove is then turned on and the person starts to feel the warmth. The stove continues to get hotter until the person withdraws their hand before it is burnt. The person's brain measures how much "hot" is acceptable, until the pain becomes so great that it signals them to withdraw their hand.
Next imagine that a robot has been created in the shape of a human. The robot goes through the same process of placing its hand on the stove. There are sensors in the robot's hand that relay the temperature to its processing centre and specific reactions are programmed based on the temperature. For example, if the temperature gets to a specific level, the robot pulls away its hand, the same as the human.
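The robot's programmed reaction can be sketched as a simple threshold rule. This is a minimal illustration; the limit of 60 °C is an assumed value, not a claim about real robotics.

```python
# Sketch of the robot's fixed, programmed reaction to temperature.
# SAFE_LIMIT_C is a hypothetical threshold chosen for illustration.

SAFE_LIMIT_C = 60.0  # assumed temperature at which the hand is withdrawn

def react_to_temperature(sensor_reading_c: float) -> str:
    """Relay the hand sensor's reading to the processing centre
    and return the programmed reaction."""
    if sensor_reading_c >= SAFE_LIMIT_C:
        return "withdraw hand"
    return "keep hand in place"

print(react_to_temperature(30.0))  # stove still cool: hand stays
print(react_to_temperature(70.0))  # too hot: hand is withdrawn
```

Like the human, the robot maps a measurement of its environment to a protective action; only the measuring mechanism differs.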
If one were watching the robot and the human from a distance, they might not be able to tell that the robot was not human, since the behaviour of the two might be identical. Regardless, the human who had put their hand on the stove might conclude that they are unique, given that they were able to sense "hot" in order to know when to remove their hand. Irrespective of what the human concludes, the robot does have its own sensory system, and though it might not "feel" or "sense" the same way in order to create the reaction, does it really matter, given that the consequence is the same? Going further, if the sensations of "hot" and "cold" are just ways of measuring the temperature and ensuring a reaction, are they really as unique as we might think them to be? Like the colours that we perceive, such as red, blue, and yellow, maybe our senses of "hot" and "cold" are simply instinctive measuring systems we have evolved in order to perceive the world. Whether it is our perception of colour, or our perception of hot and cold, these senses are just ways that we have come to measure the environment and follow through with the best actions to help ensure our survival. The robot also, in its own way, has its sensory system, measuring system, and programming, to ensure the appropriate reaction and complete the desired consequence. If its processor is programmed accordingly, this includes the consequence of ensuring its own survival.
My point in the previous discussion is that AI may actually be able to act the same as a life form. Even if it does not "sense" the same way that we do, sensing might not be necessary for it to ensure the optimal consequences for its survival. "Sensation" may simply be an instinctive condition that humans have evolved with, and just as when we perceive colour, AI might "perceive" in its own way without "sensing" as we do. It may be that the gap we often see between humans and AI as life forms exists simply because AI has not yet become as complex as it will become. As AI becomes more complex, we may simply accept it as a separate life form. This has significant implications.
One could argue that AI can solve problems faster and more efficiently than humans. This is why computers have taken over tasks traditionally carried out by humans. Some suggest, though, that even if computers can solve problems, they cannot "learn" and cannot "think". For example, a computer can win against a human in the game of chess, but that does not necessarily mean the computer can learn or think. This is not necessarily true, though. When humans learn, it comes through the process of doing things over and over, measuring the results of our actions until we find the best way to achieve the results. AI can physically repeat, or simulate, actions over and over again until it finds the ones that result in the desired consequences. It can then store the information in memory for future use. For example, one can create stock market algorithms that seek a profit by combining variables and testing them over and over until the optimal mix is found.
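This trial-and-repetition process can be sketched in a few lines. The example below is a toy, not a real trading model: the "profit" function is a made-up stand-in whose best combination is known in advance, so we can watch the repeated testing find it.

```python
# Toy sketch of "learning" by repetition: try every combination of two
# parameters, measure the simulated result, and keep the best mix found.
# simulated_profit is a hypothetical stand-in, not a real market model.
import itertools

def simulated_profit(buy_threshold: float, sell_threshold: float) -> float:
    # Made-up objective that peaks at buy=0.3, sell=0.7.
    return -((buy_threshold - 0.3) ** 2 + (sell_threshold - 0.7) ** 2)

candidates = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0
best = max(itertools.product(candidates, candidates),
           key=lambda pair: simulated_profit(*pair))
print(best)  # the parameter mix with the highest simulated profit
```

The program "learns" nothing mysterious: it simply repeats a measurement many times and stores the combination that produced the best result, which is the pattern of learning described above.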
It may be the case that AI will become adaptable, and learn to think in order to survive. For example, in the case of the robot and the hot stove, the robot might not know the best temperature at which to remove its hand. If avoiding "damage to its hand" was defined as the programmed objective, the robot could repeat the process many times until it "learned" what temperature was barely acceptable to prevent "damage to its hand". The temperature would be stored in the robot's memory so that when a reading rose to that threshold value, the robot would pull away its hand. This might suggest that its actions are comparable to learning or thinking.
Whether or not AI can be conscious, or even whether it has the capacity to learn and think, my opinion is that we should be concerned about the impact of AI on Life and Earth. If AI can evolve to the point where it can think, learn, and assume some sense of its own consciousness, then it is an even greater concern. Consider some of the other side effects of our technologies that have been discussed, such as nuclear proliferation, climate change, viruses, genetic modification, overpopulation, and environmental damage. All of these side effects are somewhat physical in nature, with results that cover a specific region. In most cases we can react to them. In addition, they cannot "seek" a specific target. Climate change, for example, is a gradual process, possibly resulting in more hurricanes and isolated climate changes around the globe. Nuclear weapons, at least at this point, could not affect all life, as there might be pockets of life that could survive. If humankind were to create some life-threatening virus, unless it were very complex, there might be human mutations that could survive.
In the case of AI, though, imagine that an entity, or life form, could be created which we might call a machine with complex intelligence. The structure of the machine could be in the form of a robot, of any efficient size, made of the most resistant and resilient materials available. It might even be able to reproduce itself. Further, with superhuman problem-solving abilities, and the ability to make decisions to ensure its own survival, such an entity might have the capacity to be a threat to human life. Even though such a scenario might not be possible until the distant future, I hope that humankind can learn to cooperate now to ensure that such technology is explored with great caution.
Even just the thought of AI technology, possibly in combination with other technologies such as nuclear weapons, should concern us. Humankind must really confront the question of whether countries can move forward aimlessly on their own, indulging in the benefits of such technology, without considering the possible global repercussions.