
What is Deep Learning?

We humans developed calculators and computers, then began feeding them algorithms to perform specific advanced functions. While we once marvelled at the raw computational power of machines, we are now in an era where we enable machines to learn by trial and error, from past data points, and through reinforcement methods. Such fast-evolving technological change leaves us both amazed and exhausted, as it becomes difficult to catch up with the everyday wonders of machine learning. From natural language processing to audio and video processing, speech recognition, text-to-speech, and virtual reality to multisensory interfaces, machine learning is transforming the digital economy as it continues to make a stronger impact at lower cost.

What is Deep Learning?

Any Deep Learning tutorial will start by saying that the larger umbrella of machine learning comprises several technologies, and Deep Learning is one of them. According to Gartner, “Advanced machine learning algorithms are composed of many technologies (such as deep learning and neural networks and natural language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information.” Deep Learning finds application in many fields, which is why it attracts so much attention: cutting-edge solutions built on deep learning AI (Artificial Intelligence) are pushing maturity and sophistication to unprecedented heights across an array of problems.

Replicating human intelligence in functions or decisions that were otherwise limited to human intervention is the next frontier in technology. Deep Learning simulates the human brain by learning from huge amounts of data, using algorithms inspired by the way humans draw inferences. Just as humans learn by experience, deep learning enables a machine to learn with every exposure and improve over time. Deep Learning is ‘deep’ because the neural networks it creates have several layers running deep, and it is these layers that enable learning.

Deep Learning algorithms require big data (far bigger than most conventional data sets) to learn from, so the data boom has helped the cause of deep learning. Another factor is the type of data deep learning algorithms can use: they thrive on structured, unstructured, inter-connected, and very diverse data, which makes their application possible in vastly different domains. The more data the algorithm is given, the better it learns. Self-driving cars are an excellent example: data captured every second from hundreds of sensors adds up to terabytes processed for deep learning.

Difference Between Machine Learning and Deep Learning

One of the most searched phrases on the internet is “deep learning vs machine learning.” Truth be told, deep learning is a subfield of machine learning. Machine Learning encompasses deep learning (including convolutional neural networks), natural language processing, and an array of other tools and techniques that make Artificial Intelligence a reality. Some of the key differences between machine learning and deep learning can be described as follows:

  1. Data Set Size – Deep Learning doesn’t perform well on a smaller data set. Machine Learning algorithms can process a smaller data set (still sizable, but not on the scale of a deep learning data set) without compromising performance. The accuracy of a model increases with more data, but a smaller data set may be exactly right for a very specific function in traditional machine learning. Deep Learning is enabled by neural networks constructed logically by asking a series of binary questions or by assigning weights (numerical values) to every bit of data that passes through the network. Given the complexity of these networks across their multiple layers, deep learning projects require data sets as large as a Google image library, an Amazon inventory, or Twitter’s corpus of tweets.
  2. Feature Engineering – An essential part of all machine learning algorithms, feature engineering and its complexity mark a key difference between ML and DL. In traditional machine learning, an expert defines the features to be applied in the model and then hand-codes them by data type and function. In Deep Learning, by contrast, feature engineering happens inside the network itself, from low-level to high-level features passed up through the layers. This eliminates the need for an expert to define the features: the machine learns low-level features as simple as shapes, sizes, textures, and pixel values, and high-level features such as facial data points and a depth map. (A minimal sketch of this contrast follows this list.)
  3. Hardware Dependencies – Sophisticated high-end hardware is required to carry the heavy load of matrix multiplications and other computations that are the trademark of deep learning. Machine learning algorithms, on the other hand, can run on low-end machines as well. Deep Learning algorithms typically require GPUs so that these complex computations can be parallelized efficiently.
  4. Execution Time – It is easy to assume that a deep learning algorithm will have a shorter execution time because it is more advanced than a machine learning algorithm. On the contrary, deep learning takes far longer to train, not just because of the enormous data set but also because of the complexity of the neural network. A machine learning algorithm can take anything from seconds to hours to train, whereas a deep learning algorithm can take weeks. However, once trained, the runtime of a deep learning algorithm is substantially less than that of machine learning.
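To make the feature-engineering contrast in point 2 concrete, here is a minimal sketch, assuming scikit-learn is installed; the digits dataset and the two hand-coded features are illustrative choices, not from the article. A traditional model is trained on features an "expert" picked by hand, while a small neural network is given the raw pixels and left to learn its own representation.

```python
# A minimal sketch of hand-coded vs. learned features (assumes scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
import numpy as np

X, y = load_digits(return_X_y=True)  # 8x8 digit images flattened to 64 pixels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Traditional ML: an "expert" reduces each image to two hand-coded features.
def hand_features(images):
    return np.stack([images.mean(axis=1),   # overall brightness
                     images.std(axis=1)],   # contrast
                    axis=1)

clf = LogisticRegression(max_iter=1000).fit(hand_features(X_train), y_train)
print("hand-coded features:", clf.score(hand_features(X_test), y_test))

# Deep-learning-style approach: feed raw pixels and let the hidden
# layers learn their own low- and high-level features.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)
print("learned features:", net.score(X_test, y_test))
```

On this toy task the network given raw pixels typically outperforms the model built on two hand-picked features, which is the point of the contrast, not the absolute numbers.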

Commonly-Used Deep Learning Applications

  1. Virtual Assistants – Amazon’s Alexa (on Echo devices), Google Assistant, and Apple’s Siri all exploit deep learning to build a customized user experience for you. They ‘learn’ to recognize your voice and accent and present a near-human experience through a machine, using deep neural networks that imitate not just human speech but also its tone. Virtual assistants help you shop, navigate, take notes and transcribe them to text, and even make salon appointments for you.
  2. Facial Recognition – The iPhone’s Face ID uses deep learning to identify data points from your face to unlock your phone or spot you in images. Deep learning protects the phone from unwanted unlocks and keeps your experience hassle-free even when you have changed your hairstyle, lost weight, or are in poor lighting. Every time you unlock your phone, deep learning uses thousands of data points to create a depth map of your face, and the inbuilt algorithm uses those to verify whether it is really you.
  3. Personalization – E-commerce and entertainment giants such as Amazon and Netflix are building out their deep learning capabilities to provide you with a personalized shopping or entertainment experience. Recommended items, series, and movies based on your ‘pattern’ are all driven by deep learning. Their businesses thrive on surfacing options tuned to your preferences, recently visited items, affinity to brands, actors, or artists, and overall browsing history on their platforms.
  4. Natural Language Processing – One of the most important technologies, Natural Language Processing is taking AI from good to great in terms of use, maturity, and sophistication. Deep learning is used extensively to handle the underlying complexities of NLP applications. Document summarization, question answering, language modelling, text classification, and sentiment analysis are some of the popular applications already picking up momentum. Several jobs worldwide that depend on human verbal and written language expertise may become redundant as NLP matures.
  5. Healthcare – Another sector to have seen tremendous growth and transformation is healthcare. From personal virtual assistants to fitness bands and wearables, a great deal of data about a person’s physiological and mental condition is being recorded every second. Early detection of diseases and conditions, quantitative imaging, robotic surgery, and decision-support tools for professionals are turning out to be game-changers in the life sciences, healthcare, and medicine domain.
  6. Autonomous Cars – Uber AI Labs in Pittsburgh is doing tremendous work to make autonomous cars a reality, and Deep Learning is the guiding principle behind this initiative for all automotive giants. Trials are under way with several autonomous cars that learn better with more and more exposure. Deep learning enables a driverless car to navigate by exposing it to millions of scenarios, making it a safe and comfortable ride. Data from sensors, GPS, and geo-mapping are combined in deep learning models that specialize in identifying paths, street signs, and dynamic elements such as traffic, congestion, and pedestrians.
  7. Text Generation – Soon the world will be able to create original text (even poetry), as deep learning is already being used for text generation. Everything from mammoth datasets of internet text to the works of Shakespeare is being fed to deep learning models so they can learn to emulate human creativity with perfect spelling, punctuation, grammar, style, and tone. Caption and title generation is already being done on many platforms, testimony to what lies ahead.
  8. Visual Recognition – Convolutional Neural Networks enable digital image processing that can further be segregated into facial recognition, object recognition, handwriting analysis, etc. Image recognition builds on digital image processing and uses artificial intelligence, especially machine learning, to make computers recognize the content of an image. Further applications include colorizing black-and-white images and adding sound to silent movies, which has long been an ambitious goal for data scientists and experts in the domain. (A minimal convolutional network sketch follows this list.)
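To make the visual recognition point concrete, here is a minimal convolutional network sketch, assuming TensorFlow 2.x is installed; the MNIST digits dataset and the layer sizes are illustrative choices, not a production design.

```python
# Minimal CNN sketch for image recognition (assumes TensorFlow 2.x).
import tensorflow as tf
from tensorflow.keras import layers, models

# MNIST: 28x28 grayscale digit images, used here purely for illustration.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dimension, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),                  # low-level features: edges, textures
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),                  # higher-level shape features
    layers.Flatten(),
    layers.Dense(10, activation="softmax"), # one probability per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```

The stacked convolution-and-pooling pairs mirror the low-level-to-high-level feature learning described earlier: early layers respond to edges and textures, later layers to whole shapes.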

The Deep Learning arsenal is incomplete without a mention of Python, TensorFlow, and Keras. Here is a brief description of each:

Python – Python is one of the most popular high-level programming languages used in machine learning. Any deep learning book will reveal that Deep Learning has many applications in Python, as it is a highly interactive, portable, dynamic, and object-oriented language. It has extensive support libraries that limit the amount of code to be written for specific functions. It integrates easily with C, C++, or Java, and its control capabilities, along with good support for objects, modules, and other reusability mechanisms, make it the numero uno choice for deep learning projects. Being easy to understand and implement, and open source, Python is where aspiring professionals in the domain usually start. However, questions have been raised about its speed and runtime errors, and while Python is used in many desktop and server applications, it is not used for many mobile computing applications.

TensorFlow – According to the TensorFlow website, “TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains.” Tailored for machine learning, TensorFlow finds most of its application in deep learning solutions built with Python, as deep learning networks have many more hidden layers (depth) than traditional machine learning models. Most of the data in the world is unstructured and unlabeled, which makes TensorFlow one of the best libraries to use. In a TensorFlow graph, nodes represent operations while edges represent the multidimensional data arrays (tensors) flowing between them.
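That last point, nodes as operations and edges as tensors, can be seen in a few lines. Here is a minimal sketch assuming TensorFlow 2.x; the values are illustrative.

```python
# Minimal TensorFlow sketch: tensors flowing through operations.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 tensor
b = tf.constant([[5.0], [6.0]])            # a 2x1 tensor
c = tf.matmul(a, b)                        # a matrix-multiplication operation
print(c.numpy())                           # [[17.], [39.]]

# tf.function traces the Python code into the dataflow graph that
# TensorFlow optimizes and runs on CPUs, GPUs, or TPUs.
@tf.function
def affine(x, w, bias):
    return tf.matmul(x, w) + bias
```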

Keras – Keras is a neural network library used for deep learning experiments. It is an open-source library written in Python that can run on top of TensorFlow, Microsoft Cognitive Toolkit, or Theano. Initially developed as part of the research project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System), it was authored and is maintained by Google engineer François Chollet. The default backend for Keras is TensorFlow. Keras is flexible: it supports both recurrent networks and Convolutional Neural Networks, and can also combine the two. Keras is well known for its user-friendliness, guaranteed by its simple API: user errors are reported clearly, and minimal user action is required for common use cases. Keras models are easier to debug because they are developed in Python, and the compact models are easy to extend, with new modules added directly as classes and functions in a building-blocks style.
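As an illustration of that building-blocks style, here is a minimal Keras sketch of a recurrent model for binary text classification; the vocabulary size, layer dimensions, and task are illustrative assumptions, not prescriptions.

```python
# Minimal Keras sketch: stacking layers like building blocks
# (assumes the TensorFlow backend; all sizes are illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Embedding(input_dim=10000, output_dim=32),  # word ids -> vectors
    layers.LSTM(32),                                   # recurrent layer
    layers.Dense(1, activation="sigmoid"),             # e.g. a sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()   # common use cases need minimal configuration
```

Swapping the LSTM for Conv1D layers, or stacking both, is a one-line change, which is exactly the extensibility the paragraph describes.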
