3 Myths About AI You Should Stop Believing Today!

“Myths are, after all, never-ending stories.”

So let us put an end to a few of them.

Popularity is a curse; it is always accompanied by rumors, myths and hypotheses. AI is no exception, having been called everything from a threat to human existence to a force that will consume all human jobs. We have all fallen prey to at least one of these myths, so let us unravel the truth once and for all.

Myth Unus (#1): AI is new

Modern AI dates back to the 1940s. The quest began when philosophers started to describe human intelligence as a mechanical process, a line of thinking that eventually led to the invention of the first digital programmable computer. This was a very nascent stage of AI, with perhaps a pinch of algorithmic reasoning, but the device raised the possibility of building a mechanical human brain.

So now we know the idea behind Frankenstein!

The field of AI research was founded at a summer workshop back in 1956 on the campus of Dartmouth College. Many researchers there predicted that machine intelligence would be a reality in less than a century. Looks like these folks were quite the fortune tellers!

The real breakthrough in AI came in the early decades of the 21st century, with investors and researchers joining hands to build more sophisticated hardware capable of running complex algorithms. We have all seen robots and smart cars in films like the James Bond series and Terminator, among a myriad of others.

In fact, as far back as the mid-1600s, the German philosopher Gottfried Wilhelm (von) Leibniz prominently speculated that human reasoning could be reduced to mechanical means.

Having said that, AI was more of a term back in the day, but with the advent of technology, unfathomable computing power and abundant data, we can now see a number of applications of AI, from monitoring systems to the detection and recognition systems used by Facebook and the recommendation systems used by Amazon, Netflix and Flipkart, among others. AI has not only made various functions easier but has also helped grow businesses in less time. So although AI has become the buzzword today, it isn’t new.

Myth Duo (#2): Artificial Intelligence, Machine Learning and Deep Learning are basically all the same.

BuzzSumo, Google Trends, Reddit, Google Search – all bear witness to the buzz these three terms have created lately. Because they are so often used interchangeably, you may have heard that AI, ML and DL are all the same. This isn’t true!

AI refers to intelligence exhibited by machines, in contrast to natural human intelligence. It is a branch of computer science that emphasizes the creation of machines possessing human-like intelligence.

There are two types of AI: weak (or narrow) AI and strong AI.

Weak AI is designed and trained to perform a particular task; Apple’s Siri is an example. Strong AI, by contrast, is designed to possess human-like cognitive abilities, so that when it is confronted with a hostile environment it has sufficient intelligence to find a solution.

Related: Basics of ML and AI.

Machine learning is a set of algorithms that allows a software application to become more accurate in its results without being explicitly programmed. ML accomplishes a specific AI task by learning from given data, so machine learning is a strict subset of artificial intelligence.

The basic logic of ML is to build algorithms that give a machine the ability to receive inputs, analyse them and predict an output, while continually updating its predictions as more and more inputs are fed in – hence the word ‘learning’.
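To make that concrete, here is a minimal sketch of the "more data, better results" idea. The library (scikit-learn) and the synthetic dataset are my illustrative choices; the article names no specific tools.

```python
# A minimal sketch of learning from data: the same model, fed more and
# more examples, tends to become more accurate on unseen inputs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 1000):          # growing amounts of training data
    model = LogisticRegression().fit(X_train[:n], y_train[:n])
    print(n, "examples ->", accuracy_score(y_test, model.predict(X_test)))
```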

Machine learning can be further categorized into three types – Supervised, Unsupervised and Reinforcement Learning.

Supervised Learning: In the simplest words, supervised machine learning is one where we train an algorithm using labelled data and then put its decision-making skills to the test. Here we act as teachers, constantly feeding in input data, telling the algorithm the correct output and expecting it to learn the pattern. This training process continues over and over until the algorithm has reached the desired level of accuracy. Decision trees, KNN and logistic regression are examples of supervised learning, to name a few.
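As a hedged illustration of that teacher-student loop, here is one of the algorithms named above, a decision tree, trained on scikit-learn's built-in labelled iris dataset (the dataset choice is mine, not the article's):

```python
# Supervised learning: labelled data in, a tested decision-maker out.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)   # inputs plus their correct labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)          # the "teaching" phase
print("test accuracy:", tree.score(X_test, y_test))  # skills put to the test
```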

Unsupervised Learning: No teacher, no labelled data, yet machines learn – Eureka! We have unsupervised learning.

Here, we have no labelled data. This kind of machine learning is used for pattern detection and predictive modelling. Unsupervised machine learning algorithms detect patterns, create groups, summarize data points and cluster the data. K-means clustering is an example of unsupervised learning.
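A minimal sketch of that K-means example, again assuming scikit-learn, with synthetic blob data invented for illustration. Note that no labels are ever shown to the algorithm:

```python
# Unsupervised learning: K-means discovers groups without any labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # labels discarded
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_)  # the three group centres it found on its own
print(kmeans.labels_[:10])      # cluster assignments for the first ten points
```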

Reinforcement Learning: Trial and error – this is the main logic of this form of machine learning.

Here, the machine is exposed to an environment and is expected to learn from its past observations; it is trained through a continuous trial-and-error process. This technique helps determine the ideal behaviour in a specific context in order to maximize performance. Q-learning and SARSA are examples of reinforcement learning.
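To show the trial-and-error loop in code, here is a toy tabular Q-learning sketch on a hypothetical five-state corridor. The environment is invented purely for illustration; only Python's standard library is used:

```python
# Toy Q-learning: an agent in a 5-state corridor learns, by trial and
# error, that moving right (towards the reward at state 4) is ideal.
import random

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != 4:                   # state 4 is the rewarded goal
        if random.random() < epsilon:                       # explore randomly
            a = random.randrange(n_actions)
        else:                                                # exploit past observations
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s_next = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update: blend the old estimate with the new observation
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)  # the "right" column should dominate in every state
```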

So let us see how machines learn.

https://youtu.be/R9OHn5ZF4Uo

Deep Learning: I won’t be surprised if you are thinking of ‘neurons’ right now. Deep learning solves problems using algorithms based on neural networks. Again, we can say that DL is a subset of ML, which in turn is a subset of AI.

Deep learning is powered by neural networks, which mimic the neurons of the human brain. A deep network stacks multiple layers (a visible input and output, with hidden layers in between), so it is an advanced form of machine learning that ingests data, learns from it and optimises itself. Some problems are so complex that it is practically impossible for the human brain to comprehend them, let alone program a solution explicitly. Primitive forms of Apple’s Siri and Google Assistant are apt examples of programmed machine learning, effective within their programmed spectrum, whereas Google’s DeepMind is a great example of deep learning. Essentially, deep learning means a machine that learns by itself through repeated trial and error – often a few hundred million times!
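As a hedged sketch of a multi-layer network, here is scikit-learn's MLPClassifier on the built-in digits dataset. The library and dataset are my choices for continuity with the earlier examples; real deep learning work typically uses dedicated frameworks such as TensorFlow or PyTorch, and DeepMind's systems are far beyond this toy:

```python
# A tiny multi-layer ("deep") neural network: a visible input and output
# with two hidden layers in between.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32),  # the hidden layers
                    max_iter=500, random_state=0)
net.fit(X_train, y_train)    # learns by repeatedly correcting its own errors
print("test accuracy:", net.score(X_test, y_test))
```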

Related: Is Deep Learning Better Than Machine Learning?

Note: All machine learning is AI, but not all AI is machine learning.

Myth Trēs (#3): AI is going to overtake humans

Maybe we cannot compete with a computer when it comes to complex algebraic computations, and maybe we cannot paint our cars as efficiently and smoothly as robots can, but where we surpass machines is in strategic thinking, our adaptive nature, our survival instincts, emotional intelligence, creativity and, of course, the ‘human touch’.

How likely is it for your first desktop computer to be jealous of your new upgraded laptop?

Yes, we have all heard the Terminator references hundreds of times, and Stephen Hawking did warn that AI is a potential threat. But the truth is that such a takeover is unlikely. Although we have drones and robots that detect and demolish threats, they have yet to develop the ability to gang up on us and go rogue.

A few myths busted, many more left. If you have heard of any such myths, do share them in the comments and let us decode them in the near future.
