Is Artificial Intelligence (AI) a Threat to Us?

Artificial Intelligence (AI) is used in a wide range of industrial applications today: farming, manufacturing, trading, communication, and more. But this technology has also stirred a lot of fear because of its astonishing outputs. Nowadays, there is a great deal of confusion, argument, and fear about whether this rising technology will be the reason for the downfall of mankind.

Technically speaking, not everyone knows what AI actually is. In this article, I have compiled the facts that explain why it is not possible for AI to become a threat to us in the future.



What is AI?

"Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some activities that computers with artificial intelligence are designed for include speech recognition, learning, planning, problem-solving, etc."

The confusion usually begins with the phrase "intelligent machines that work and react like humans" in that definition. It sounds a bit scary. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot understand. While the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. So, let's break this down into further detail.

There are two types of AI:
1. ANI (Artificial Narrow Intelligence)
2. AGI (Artificial General Intelligence)




These two types mean very different, almost opposite, things.

ANI refers to the type of intelligence in which an AI model or machine is developed and designed to perform one specific task; in other words, it is capable of performing only a single task, e.g. a self-driving car or language detection.
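As a toy illustration of such a single-purpose system, here is a minimal language-detection sketch. The stopword lists and the counting rule are illustrative assumptions, not how production detectors actually work; the point is only that the program can do this one narrow task and nothing else:

```python
# A minimal sketch of a "narrow" AI task: guessing a text's language.
# The stopword lists and scoring rule are illustrative assumptions only.

STOPWORDS = {
    "english": {"the", "and", "is", "of", "to", "in", "it"},
    "spanish": {"el", "la", "y", "es", "de", "en", "que"},
}

def detect_language(text: str) -> str:
    """Guess the language by counting known stopwords in the text."""
    words = text.lower().split()
    scores = {
        lang: sum(word in stopwords for word in words)
        for lang, stopwords in STOPWORDS.items()
    }
    # Pick the language whose stopwords appear most often.
    return max(scores, key=scores.get)

print(detect_language("the cat is in the house"))   # english
print(detect_language("el gato es de la casa"))     # spanish
```

However clever the scoring gets, this program will never drive a car or plan anything; that narrowness is exactly what "ANI" means.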

AGI refers to the type of intelligence in which an AI algorithm or machine is designed to perform many different tasks; in other words, it could work, think, and act like a human being, or may even be "super-intelligent".


Researchers have made huge progress in ANI; there are plenty of examples, such as Alexa, Google DeepMind, and Sophia. All of these systems are designed to perform a single task, and far from threatening human beings, they ease our lives. They focus on that specific task and, through carefully designed neural networks and various ML algorithms, they are capable of improving on their own.
On the other hand, the field of AGI has shown essentially no progress to date. Even if we combined a large number of ANI algorithms, we could not achieve the outputs that an AGI system would provide.
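To make the "single task" point concrete, here is a minimal sketch of narrow learning: a model that fits one straight line to one dataset using gradient descent, and can do nothing else. The dataset, parameter names, and learning rate are illustrative assumptions:

```python
# A minimal sketch of narrow, single-task learning: fitting y = 2x + 1
# with gradient descent. Data and hyperparameters are illustrative only.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # generated by y = 2x + 1

w, b = 0.0, 0.0                  # model parameters, learned from the data
lr = 0.01                        # learning rate

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges near 2.0 and 1.0
```

The model "learns on its own", but only within the single task it was built for; stacking many such learners side by side still gives a collection of narrow tools, not a general intelligence.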

In every professional environment, a project is properly planned before it is built, and the risks are weighed against the chances of success. That planning includes effective contingency measures that can be implemented if any abnormality arises. So if we ever do build an AI machine capable of running AGI algorithms, there will be a backup plan in case it behaves abnormally or stops following its instructions; shutting it down or installing a self-destruct mechanism would be the obvious options. It is true that an AI system can learn on its own, but it will only dig for information about the specific task it was designed for.



Here is a well-known statement by Elon Musk:

"I think we should be very careful about AI. If I were to guess like what our biggest existential threat is, it's probably that. So, we need to be very careful with artificial intelligence. Increasingly, scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure we don't do something very foolish. With AI, we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."

Summarizing this statement, we can conclude that, in his view, AI is harmful and dangerous. The conflict of arguments arises when everyone dwells only on the negative impacts of this technology. Why don't we take a look at what AI has to offer on its positive side? "Risk" (as I mentioned before) is a crucial part of any successful project; without risk, there is no success.

So, based on all these facts, I advise everybody to stay calm. We are still far from achieving AGI.
The bottom line is: this technology poses no threat to our future. We are safe.


