Artificial Intelligence (Understanding Ourselves)

In previous sections I mentioned various technological side effects such as nuclear weapons, climate change, and man-made viruses. Quite possibly the most far-reaching technological side effect of all is artificial intelligence, commonly known as AI.

There have been many movies that have dealt with the risks of AI, whether in the form of robots, weapons, or computers. Humankind creates them as technological tools, and ultimately they either outlive human beings or become a type of life form of their own, capable of making their own decisions. As AI becomes more and more a part of our lives, it will bring significant risks along with its benefits. Before discussing AI in detail, let's start with a definition given at Wikipedia.org:

"Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".[2]"

The above definition might seem a bit complex, so let me suggest a simpler approach to understanding AI. Consider an automobile. In a sense it is an entity on its own. It takes in gas for energy, its mechanical parts begin to operate, and it ultimately completes its objective of driving down the road. Certainly it does not "perceive its environment and take action" as per the definition above. Consider, though, that some newer vehicles have computers built in that will adjust various settings in the engine if they deviate too far from the norm. Basically, the computer in the car says, "IF something, THEN do this". This is exactly what we do when we make decisions: IF the engine runs rough, THEN adjust the fuel accordingly.

At this point we come closer to the definition of AI. Going further, imagine that the computer has a sensor system so that if the car gets near an object, a bell and light go off to warn the driver of the risk of hitting something. The sensor "perceives the environment" and the processor "takes action" to maximize the chance of avoiding an accident.
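To make the IF/THEN idea concrete, here is a minimal sketch in Python of the kind of rules described above. The sensor readings, thresholds, and function names are all invented for illustration; no real automotive system is being described.

```python
# Illustrative sketch only: thresholds and sensor values are made up,
# not taken from any real automotive system.

ROUGH_IDLE_RPM_SWING = 150      # assumed limit on engine speed fluctuation
PROXIMITY_WARNING_METERS = 1.0  # assumed distance at which to warn the driver

def adjust_fuel(rpm_swing, fuel_rate):
    """IF the engine runs rough, THEN adjust the fuel accordingly."""
    if rpm_swing > ROUGH_IDLE_RPM_SWING:
        return fuel_rate * 1.05   # enrich the mixture slightly
    return fuel_rate              # otherwise leave it alone

def proximity_alert(distance_to_object):
    """IF an object is too close, THEN sound the bell and light."""
    return distance_to_object < PROXIMITY_WARNING_METERS

# The "perceive" step supplies readings; the "act" step follows the rules.
print(adjust_fuel(rpm_swing=200, fuel_rate=10.0))   # -> 10.5
print(proximity_alert(distance_to_object=0.6))      # -> True
```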

I want to put forward an example from nature that I have always found quite intriguing, though it is neither AI nor NI. The Venus flytrap attracts and digests prey such as crickets, ants, small beetles, grasshoppers, spiders, slugs, and on occasion flies. A Venus flytrap has leaves that open wide, and on them are short, stiff hairs that trigger when anything touches them. The two leaves then snap shut within about a second, trapping whatever is inside. If one were not aware in advance of how this process occurs, one might conclude that the Venus flytrap had some type of NI (natural intelligence). In reality the reaction is not much different from when a flower blossoms open to receive the sunshine. It is just a chemical and mechanical reaction.

NI and AI process the information from perceived stimuli through some type of neural processor (a brain or a computer). The Venus flytrap has no such processor to interpret the data and produce the desired reaction. Though it achieves the desired consequence, the Venus flytrap exhibits neither NI nor AI; its response is just a simple chemical and mechanical reaction. You may notice how similar this is to the car example above: until the car has a computer as a type of neural center, the actions taken are basically just simple cause and effect.

What is amazing about NI (Natural Intelligence) and AI (Artificial Intelligence) is that they both enable an entity to respond and adapt to the environment, which is critical for survival in living organisms.

Can AI have consciousness and does it matter?

Given a general understanding of AI, probably one of the biggest questions remains: can a robot have consciousness? Could a robot be so sophisticated that it is conscious of its own existence and makes decisions related to its own survival? In previous sections I suggested that consciousness starts from the fact that we can "sense". One way to express this idea is that "I sense, therefore I am", or "I sense, therefore I have a consciousness of my existence".

Whether a robot might be able to have a sense of consciousness might seem to be the question, but one might also ask whether AI actually needs consciousness at all to be alive. Does it need to sense, for example, "hot" and "cold" the same way that we do in order to be conscious of itself and the world around it? Is consciousness actually a requirement for it to make decisions that will ensure its survival?

I want to use an analogy to help understand this idea. Picture an image of a triangle around which the colors of the spectrum are displayed. Not all organisms are able to see color, but we humans can. We can see red flowers, the blue sky, and the yellow color of the element sulfur. If any of us were asked whether red, blue, yellow, or any other color in the spectrum were real, we would likely conclude that the colors we see do indeed exist. Now consider that color is only a perception of ours and is really a result of different wavelengths of light, as outlined in this quote from Wikipedia:

"Color (American English) or colour (Commonwealth English) is the characteristic of human visual perception described through color categories, with names such as red, blue, yellow, green, orange, or purple. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the spectrum of light. Color categories and physical specifications of color are associated with objects through the wavelength of the light that is reflected from them."

In other words, colors are really one and the same thing, light waves, just at different frequencies. In that context we might ask ourselves whether they really exist at all. Now consider our senses of hot and cold. These "senses" are just different ways that we measure, for example, the temperature of an object, telling us whether it might be acceptable to touch it or not. Like our perception of color, our perceptions of "hot" and "cold", only two of our many senses, signal to us some type of meaning about whatever we perceive in the physical world around us. My suggestion is that, though we will always conclude that our senses exist, maybe they exist no more than the colors red, blue, or yellow exist. Maybe when we feel a "hot" stove and pull our hand away, it is no different from seeing a "red" rose and perceiving the rose to be just that. Is the rose really "red", and is a stove we might touch actually "hot"? The preceding idea is very important in the context of AI, and I will now outline why.

Imagine first that a person places their hand on a stove that is not on. The person feels neither hot nor cold. The stove is then turned on and the person starts to feel the warmth. The stove continues to get hotter until the person withdraws their hand before it is burned. The person's brain measures how much "hot" is acceptable until the pain becomes so great that it signals them to withdraw their hand.

Next imagine that a robot has been created in the shape of a human. The robot goes through the same process of placing its hand on the stove. Sensors in the robot's hand relay the temperature to its processing center, and specific reactions are programmed based on the temperature. For example, if the temperature reaches a specific level, the robot will pull its hand away just as the human does.
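A minimal Python sketch of this kind of pre-programmed reaction might look like the following; the threshold value, readings, and function name are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical sketch of a fixed, pre-programmed reaction.
# The threshold and readings are invented for illustration.
WITHDRAW_TEMPERATURE_C = 50.0  # assumed temperature at which the hand is pulled away

def hand_reaction(sensor_temperature_c):
    """Return the action taken for a given hand-sensor reading."""
    if sensor_temperature_c >= WITHDRAW_TEMPERATURE_C:
        return "withdraw hand"
    return "keep hand in place"

print(hand_reaction(22.0))  # stove off -> keep hand in place
print(hand_reaction(65.0))  # stove hot -> withdraw hand
```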

If one were watching the robot and the human from a distance, one probably could not tell that the robot was not human, since the behavior of the two might be identical. Still, the human who had put their hand on the stove might conclude that they are unique, given that they were able to sense "hot" in order to know when to remove their hand. Regardless of what the human concludes, the robot has its own sensory system, and though it might not "feel" or "sense" the same way in order to create the reaction, does it really matter, given that the consequence is the same? Going further, if the sensations of "hot" and "cold" are just ways of measuring the temperature and ensuring a reaction, are they really as unique as we might think them to be? Like the colors that we perceive, such as red, blue, and yellow, maybe our senses of "hot" and "cold" are simply instinctive measuring systems we have evolved in order to perceive the world. Whether it is our perception of color or our perception of hot and cold, they are just ways that we have come to measure the environment and follow through with the best actions to help ensure our survival. The robot, in its own way, also has its sensory system, measuring system, and programming to ensure the appropriate reaction and produce the desired consequence. If its processor is programmed accordingly, this includes the consequence of ensuring its own survival.

My point in the previous discussion is that AI may actually be able to act the same as a life form. Even if it did not "sense" the same way we do, that might not really be necessary in order for it to ensure the optimal consequences for its survival. Given the questionable nature of what it is to "sense", and given that "sensation" may simply be an instinctive condition humans have evolved, just as with our perception of color, the concept that AI might be a life form is not really that far-fetched. In fact, it may be that the gap we often see between "human" as a life form and AI as a life form exists simply because AI has not yet become sufficiently complex. As AI becomes more complex we may simply accept it as a separate life form, which has significant, and concerning, implications.

One might suggest that AI could even be more efficient at problem solving than humans. This is why computers have taken over tasks traditionally carried out by humans: we simply cannot do them as quickly or efficiently. Though computers have their place, some might suggest that computers cannot "learn" and cannot "think". For example, a computer can win against a human at chess, but that does not necessarily mean the computer can learn or think. This objection is not necessarily valid, though. When humans learn, it comes through the process of doing things over and over and measuring the results of our actions until we find the best way to achieve the consequences we desire. AI can likewise repeat or simulate actions over and over again until it finds the actions that result in the desired consequences, then store that information in memory for future use. For example, one can create stock market algorithms that combine variables and test them over and over, keeping the mix that produces the best result, as sketched below.
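The following is a toy Python illustration of that "combine variables and test them repeatedly" idea: a brute-force search for the pair of moving-average lengths that would have performed best on a short run of made-up prices. The price data, the trading rule, and the parameter ranges are all invented for illustration; this is not a real or recommended trading strategy.

```python
# Toy illustration of "combine variables and test them over and over":
# exhaustively try parameter combinations and keep the best one found.
import itertools

prices = [100, 101, 103, 102, 105, 107, 106, 109, 111, 110, 113, 115]

def backtest(short_window, long_window):
    """Return a naive 'profit' for one parameter combination."""
    profit, holding, entry = 0.0, False, 0.0
    for i in range(long_window, len(prices)):
        short_avg = sum(prices[i - short_window:i]) / short_window
        long_avg = sum(prices[i - long_window:i]) / long_window
        if short_avg > long_avg and not holding:      # buy signal
            holding, entry = True, prices[i]
        elif short_avg < long_avg and holding:        # sell signal
            holding, profit = False, profit + prices[i] - entry
    if holding:                                       # close any open position
        profit += prices[-1] - entry
    return profit

# Try every combination and keep the one with the best simulated result.
best = max(
    ((s, l) for s, l in itertools.product(range(2, 5), range(5, 9)) if s < l),
    key=lambda pair: backtest(*pair),
)
print("best (short, long) windows:", best)
```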

It may be the case that AI can become adaptable and acquire the ability to learn and think in order to survive. For example, in the case of the robot and the hot stove, the robot might not know the best temperature at which to remove its hand from the stove. If the consequence of "damage to its hand" were defined as an objective to avoid, the robot could repeat the process many times until it "learned" what temperature was barely acceptable in order to prevent that damage. In my opinion, the results of its actions are much the same as if it were learning or thinking; a simple sketch of this trial-and-error process follows.
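Here is a hypothetical Python sketch of that trial-and-error learning, assuming a simulated damage temperature (known only to the environment, not to the robot's rule) and a fixed step size; all numbers are invented for illustration.

```python
# Hypothetical sketch: the robot raises its withdrawal temperature a little
# on each trial and keeps the last setting that caused no damage.
# All numbers are invented for illustration.
DAMAGE_TEMPERATURE_C = 80.0   # property of the (simulated) environment
STEP_C = 5.0                  # how much bolder each trial gets

def trial_causes_damage(withdraw_at_c):
    """Simulate one trial: damage occurs if the hand stays past the damage point."""
    return withdraw_at_c >= DAMAGE_TEMPERATURE_C

def learn_withdrawal_threshold(max_trials=50):
    threshold = 30.0                      # start very cautious
    for _ in range(max_trials):
        candidate = threshold + STEP_C    # try leaving the hand a bit longer
        if trial_causes_damage(candidate):
            break                         # damage: keep the last safe setting
        threshold = candidate             # no damage: remember the bolder setting
    return threshold

print(learn_withdrawal_threshold())       # -> 75.0, just below the damage point
```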

Should we be concerned that AI could threaten Life and Earth?

Whether or not AI can be conscious, or even whether it has the capacity to learn and think, my opinion is clearly that we should be concerned about the impact of AI on Life and Earth. If AI can evolve to the point where it can think, learn, and assume some sense of its own consciousness, the concern is even greater. Consider some of the other side effects of our technologies that have been discussed, such as nuclear proliferation, climate change, viruses and genetic modification, overpopulation, and environmental damage. All of these side effects are somewhat physical in nature: the result covers a specific region, and in most cases we can react to them. In addition, they cannot "seek" a specific target. Climate change, for example, is a gradual process, possibly resulting in more hurricanes and localized climate shifts around the globe. Nuclear weapons, at least at this point, could not affect all life, as there might be pockets of life that could survive the changes to the surface of the planet that might ensue. If humankind were to create some life-threatening virus, then unless it were very sophisticated, there might be mutations that allow some life to survive it.

In the case of AI, though, imagine that an entity, or life form, could be created which we might call a machine but which is also capable of complex intelligence. The machine could take the form of a robot, of any efficient size, made of the most resistant and resilient materials available, and perhaps even able to reproduce itself. Further, with virtually superhuman problem-solving abilities and the ability to make decisions to ensure its own survival, how might this compare to the other side effects of our technologies? Though such an entity might be quite distant in the future, I hope that humankind can learn to cooperate now to ensure that countries explore such technology with great caution. Once again, even the thought of such technology, possibly in combination with other technologies such as nuclear weapons, should lead humankind to really ponder whether countries can move forward aimlessly indulging in the benefits of technology without considering the possible repercussions.

Stove Image courtesy of graur codrin at FreeDigitalPhotos.net
Color image courtesy of dan at FreeDigitalPhotos.net
Robots image courtesy of Geerati at FreeDigitalPhotos.net
Venus Flytrap Image courtesy of lobster20 at FreeDigitalPhotos.net
Brain Image courtesy of cooldesign at FreeDigitalPhotos.net

© 2015 lifeandearth.com