Artificial Intelligence – Its Place in Our World.

Artificial Intelligence. Machine Learning. Deep Learning.

In our ever-changing society, laden with the latest and greatest technology, it is almost inevitable that you have heard these phrases. Although they are usually used interchangeably, I’d like to take this opportunity to highlight the stark differences between them, differences that are often hidden by careless media coverage.

Artificial Intelligence, AI for short, is any machine or algorithm built to replicate some aspect of human behavior or reasoning. A subset of this is Machine Learning, in which a computer looks for patterns in a set of data so that it can carry out a task without explicit instructions. Deep Learning is in turn derived from Machine Learning: it builds AI loosely modeled on the human neural structure, using layers that progressively abstract the data and build up patterns from it.
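
To make the “layers” idea concrete, here is a minimal sketch in Python with NumPy: a tiny two-layer network that learns the XOR pattern from four examples. The layer sizes, learning rate, and number of training steps are illustrative choices for this toy example, not values prescribed by any particular system.

    # A minimal sketch (not production code) of the "layers" idea behind Deep
    # Learning: a tiny two-layer neural network, written with NumPy, that
    # learns the XOR pattern from four examples.
    import numpy as np

    rng = np.random.default_rng(0)

    # Four input examples and the pattern (XOR) the network should find.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Two layers of weights: the hidden layer abstracts the raw inputs,
    # and the output layer turns that abstraction into a prediction.
    W1 = rng.normal(size=(2, 8))
    W2 = rng.normal(size=(8, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    learning_rate = 0.5
    for step in range(20000):
        # Forward pass: each layer transforms the previous layer's output.
        hidden = sigmoid(X @ W1)
        output = sigmoid(hidden @ W2)

        # Backward pass: nudge the weights to reduce the prediction error.
        error = output - y
        grad_output = error * output * (1 - output)
        grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
        W2 -= learning_rate * hidden.T @ grad_output
        W1 -= learning_rate * X.T @ grad_hidden

    # After training, predictions should sit close to the target [0, 1, 1, 0].
    print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))

No single layer of this network could capture XOR on its own; it is the stacking of layers, each re-describing the previous layer’s output, that lets the pattern emerge, which is the essence of the Deep Learning approach described above.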

Even though AI may be made to seem like a relatively new epiphany, it is in fact an idea that sprang up in the mid-20th century. Since 1995, researchers have been engaged in attempting to reach The Singularity, the point at which the intellect of AI surpasses that of humans, leading to exponential growth and an explosion of technological developments that will change the face of the Earth. With the so-called Technological Revolution now under way, the rise of AI is a matter we all have to consider, starting with its place in our world.

Why do we humans rule over all other species in the world? I fail to see any reason other than our indubitably superior intellect. Now, if The Singularity is reached, we would need to reconsider our existence. A being with a superior intellect, albeit an artificial one, may not only take our place but also make use of us, much as the humble farmer makes use of the sheep in his pen.

At this point, this may be ringing some bells for any Terminator fans. However much this sounds like a sci-fi horror movie, let me present the very real prospect of it occurring. Behavioral tests conducted by the leader in the field, Google’s DeepMind, show how cautious we must be while developing such technologies. The Google team ran an experiment in which the AI had to compete against clones of itself to gather fruit. All was well as long as the supply of fruit was plentiful and there was enough to go round for all of the clones; the problem arose when those numbers slowly diminished… At that point, the clones resorted to using their weapons, laser beams, for the first time and attempted to kill each other in order to collect the most fruit.

What’s interesting about this behavior is that the AI is simply programmed to achieve the highest number of points possible, and the only way it can do so is by collecting fruit. Killing its opponents is of no direct benefit to the AI, other than rendering them immobile for a set period of time.
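
For a rough feel of that incentive structure, here is a small, purely illustrative sketch in Python. It is not DeepMind’s environment or code; the freeze duration, episode length, and respawn probabilities are all assumptions made for the example. Zapping earns no points whatsoever, yet once fruit becomes scarce it is the only way to claim whatever regrows.

    # A toy model of the reward setup described above (not DeepMind's code):
    # fruit is worth one point, zapping an opponent is worth nothing and only
    # freezes them for a few turns.
    import random

    FREEZE_TURNS = 5          # how long a zapped agent stays immobile (assumed)
    TURNS = 200               # length of one simulated episode (assumed)

    def run_episode(fruit_respawn_chance, zap_when_scarce):
        """Return both agents' scores for one episode."""
        scores = [0, 0]
        frozen = [0, 0]        # remaining frozen turns for each agent
        fruit_available = True

        for _ in range(TURNS):
            for agent in random.sample((0, 1), 2):   # random turn order
                rival = 1 - agent
                if frozen[agent] > 0:
                    frozen[agent] -= 1
                    continue
                if fruit_available:
                    scores[agent] += 1               # the only source of points
                    fruit_available = False
                elif zap_when_scarce and frozen[rival] == 0:
                    frozen[rival] = FREEZE_TURNS     # zapping earns nothing directly
            # Fruit regrows slowly; scarcity is what creates the conflict.
            if not fruit_available and random.random() < fruit_respawn_chance:
                fruit_available = True
        return scores

    random.seed(1)
    print("plentiful fruit, no zapping:", run_episode(0.9, zap_when_scarce=False))
    print("scarce fruit, zapping on:", run_episode(0.1, zap_when_scarce=True))

Even in this toy version, aggression pays off only indirectly: freezing the rival buys the aggressor first access to the next piece of fruit, which is the kind of emergent incentive the DeepMind experiment illustrated.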

Through various other tests and experiments, Google’s DeepMind team concluded that in small environments with small numbers of AI clones, peaceful mutualism between the clones was not only possible but highly probable. As soon as the AI ecosystem grew larger, the clones began to show greed, sabotage, and high levels of aggression.

Hopefully, from this you can see the imminent threat posed to the human race if this growth is not regulated, and why, before we reach that point, we may need to make use of Asimov’s three laws of robotics, below…

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

…surely, if artificial intelligence becomes clever enough that its intellect is greater than ours, it is clever enough not only to understand the three laws, but also to know that we will be monitoring whether it abides by them. Will it not try to take advantage of us by merely appearing to follow the rules…?

Computer scientists are implementing what is known as “containerism” in order to always have the physical or metaphorical “plug” to the system in hand. In general, AI was created and designed to spot patterns and classifications that could not be found by even the cleverest of humans. The potential these systems hold is immense, but so is their potential to destroy our trust and our world.
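
As a rough illustration of that “plug in hand” idea, here is a toy sketch in Python, assuming a simple step-based agent interface. The Supervisor class, the step budget, and the CountingAgent stand-in are invented for this example and do not represent any real containment framework.

    # A toy sketch of containment: the untrusted agent only ever runs inside a
    # supervisor that can cut it off at any moment.
    import signal

    class Supervisor:
        def __init__(self, agent, max_steps=1000):
            self.agent = agent
            self.max_steps = max_steps
            self.stop_requested = False          # the metaphorical plug

        def pull_the_plug(self, *_):
            self.stop_requested = True

        def run(self):
            # Ctrl+C acts as the physical plug; the step budget as the metaphorical one.
            signal.signal(signal.SIGINT, self.pull_the_plug)
            for _ in range(self.max_steps):
                if self.stop_requested:
                    print("Supervisor: plug pulled, halting agent.")
                    return
                self.agent.step()
            print("Supervisor: step budget exhausted, halting agent.")

    class CountingAgent:
        """A harmless stand-in for whatever system we would want to contain."""
        def __init__(self):
            self.counter = 0

        def step(self):
            self.counter += 1

    if __name__ == "__main__":
        Supervisor(CountingAgent(), max_steps=100).run()

The design point is simply that the agent never controls its own loop; whether it keeps running is always decided one level up, by us.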


References

  1. Originally featured in the Mensa 2020 Vision Magazine
