Neural Networks & Deep Learning.

Nur Younis
3 min read · Apr 7, 2021


Last week we talked about Supervised Learning. Following up on that article, we will now dive into Neural Networks and Deep Learning.

If we connect many perceptrons together, we create an Artificial Neural Network. These networks work better for certain tasks, like image recognition.
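As a toy illustration (not from the article itself), a single perceptron is just a weighted sum plus a bias, passed through a step function. The AND-gate weights below are hand-picked for the example:

```python
import numpy as np

# A single perceptron: weighted sum of inputs plus a bias,
# passed through a step activation (it "fires" 1 or stays 0).
def perceptron(inputs, weights, bias):
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

# Hand-picked weights that make this perceptron behave like an AND gate.
and_weights = np.array([1.0, 1.0])
and_bias = -1.5

print(perceptron(np.array([1, 1]), and_weights, and_bias))  # 1
print(perceptron(np.array([0, 1]), and_weights, and_bias))  # 0
```

A network is many of these units feeding their outputs into one another.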

Image recognition example.

But how does this happen? Thanks to hidden layers and the math behind them.

Since computers are good at comparing 0s and 1s, it is easy for them to compare pictures by matching pixels. The challenge, however, was to make the machine recognize dogs in any picture.

Fei-Fei Li and other Machine Learning (ML) and Computer Vision researchers wanted to improve this technology. To do so, they asked the Data Science community to create a huge open-source data set of 3.2 million images, called ImageNet. It worked with nested categories, for example: ‘student’ under ‘young person’ under ‘human’.

Image Net.

In 2010, they brought people together to compete on improving image recognition. In 2012, AlexNet, a new neural network that applied hidden layers, was used for image recognition. Ever since, the results have kept getting better.

How can Neural Networks be used for classification problems? First, let’s jump into their architecture.

They are composed of an input layer, an output layer, and hidden layers in between.

Neural Network Architecture.

Input layer: data is received as numbers, and each neuron represents a single feature — one number. Sounds can be translated into sound waves, and those into numbers. In an image, each pixel carries information; in a colored image, each pixel is represented by 3 numbers because of the RGB scale.
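A minimal sketch of this idea, using a made-up 2×2 color image: every pixel contributes 3 numbers (R, G, B), and the input layer sees them as one long vector.

```python
import numpy as np

# Hypothetical 2x2 RGB image: each pixel is 3 numbers (R, G, B) in 0..255.
image = np.array([
    [[255, 0, 0],   [0, 255, 0]],
    [[0, 0, 255],   [255, 255, 255]],
])

# The input layer sees a flat vector of numbers:
# 2 x 2 pixels x 3 channels = 12 input neurons.
flat = image.reshape(-1) / 255.0  # scale values to 0..1
print(flat.shape)  # (12,)
```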

These numbers then flow through the hidden layers: each neuron in a hidden layer makes a calculation and sends the result to the neurons in the next layer.

Output layer: where the hidden layers’ outputs are combined to give a final answer. Each output neuron represents the probability of one label, and we pick the answer with the highest probability.
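Picking the highest-probability answer can be sketched in one line; the labels and probabilities below are invented for the example:

```python
import numpy as np

# Hypothetical output-layer probabilities for three labels.
labels = ["cat", "dog", "bird"]
probs = np.array([0.1, 0.7, 0.2])

# The prediction is simply the label with the highest probability.
prediction = labels[int(np.argmax(probs))]
print(prediction)  # dog
```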

Each neuron in the hidden layer has a specific mathematical formula to look for different features in the picture — for example, curves, shapes, or spots.

A neuron that looks for bright pixels, for instance, multiplies those pixel values by a positive weight and the rest by a negative one; the weighted pixel values then represent that neuron’s guess. Other hidden neurons look for different components, like a specific texture or shape.

The values in each layer are summed up and passed on to the next layer. This repeats until the values reach the output layer, which in the simplest case has just one neuron.
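The steps above can be sketched as a forward pass. The weights here are random placeholders standing in for learned ones, and the sizes (12 inputs, 4 hidden neurons, 1 output) are made up for the example:

```python
import numpy as np

def relu(x):
    # A common activation: keep positive values, zero out the rest.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Toy network: 12 inputs -> 4 hidden neurons -> 1 output neuron.
# Random placeholder weights; a real network learns these.
W1, b1 = rng.normal(size=(4, 12)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

x = rng.normal(size=12)               # input features (e.g. pixel values)
h = relu(W1 @ x + b1)                 # each hidden neuron: weighted sum + activation
y = 1 / (1 + np.exp(-(W2 @ h + b2)))  # sigmoid squashes the sum to a probability
print(y)  # a single number between 0 and 1
```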

Since Neural Networks look at every pixel, with 3 values per pixel, they need high computing power. How is this solved, then?

When we use deeper neural networks, which have more hidden layers, we get Deep Learning. This can solve trickier problems. More hidden layers means more math, which in turn means more computing power.
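“Deeper” just means more hidden layers stacked in sequence, each adding another round of calculation. A hedged sketch, with invented layer sizes and random placeholder weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# A deeper network is more hidden layers stacked in sequence.
# Invented sizes: 12 inputs -> three hidden layers of 8 -> 3 outputs.
layer_sizes = [12, 8, 8, 8, 3]
weights = [rng.normal(size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=12)
for W in weights:
    x = np.maximum(0, W @ x)  # each extra layer is one more matrix multiply

print(x.shape)  # (3,)
```

Each added layer means another full pass of multiplications — which is why depth drives up the computing cost.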

Deep Neural Network.

In the first hidden layer, each neuron looks for a specific component of the data; in deeper layers, the components become more abstract, closer to the labels humans would give that data. But what if a neural network is used to deny a loan application? Which features make the difference then?

Most banks use these networks to detect and prevent fraud, cancer researchers use them for detection, and Alexa uses them to decide which song to play.

In the next article we will talk about backpropagation and optimization.

You can follow me on LinkedIn here: https://www.linkedin.com/in/nur-younis-aa79a9183/

You can read more stories here: https://nuryounis.medium.com/
