
Episode 1: Neural Networks


Artificial Intelligence is expanding rapidly and making huge impacts in many different fields, and Neural Networks are a key foundation of AI. The host, Nivan Gujral, explains how Neural Networks function. He also talks with Aniket Majumder about the future and ethics of AI.


Episode Transcript

(0:06) Welcome to The Nivan Gujral Podcast, where we discuss emerging technologies with amazing people! You are listening to episode 1, on Neural Networks.


(0:57) If you are listening to this episode, then you probably have a computer, smartphone, or some other digital device, which means you have probably experienced Artificial Intelligence. Artificial Intelligence (AI) is where a computer demonstrates behaviors associated with human intelligence. AI is a vast field that impacts everything from agriculture to healthcare. At the heart of AI are Neural Networks. They can easily find patterns in data that would be harder for a human to find, and they set the basis for other models to branch off and be used for even more specific analysis.


(1:37) A Neural Network has three parts: the inputs, the hidden layers, and the output. The inputs are the values given to the Neural Network for it to complete a function. The hidden layers complete the functions using the values given by the inputs. There can be multiple hidden layers in one Neural Network, depending on the objective it is made for. The output is the end result after all of the functions have taken place.
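
As a rough sketch of those three parts in Python (the layer sizes and weight values here are made up for illustration, and the hidden layer uses one of the activation functions described later in the episode):

import numpy as np

x = np.array([0.5, -1.2, 3.0])            # input: 3 values given to the network
W1 = np.array([[ 0.2, -0.5,  0.1],
               [ 0.7,  0.3, -0.2],
               [-0.4,  0.6,  0.9],
               [ 0.1, -0.8,  0.5]])       # weights from 3 inputs to 4 hidden neurons
W2 = np.array([[0.3, -0.6, 0.8, 0.2]])    # weights from 4 hidden neurons to 1 output

hidden = np.maximum(0, W1 @ x)   # hidden layer: weighted sums passed through an activation
output = W2 @ hidden             # output: the end result
print(output)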


(2:08) The lines that connect the inputs to the hidden layers are called synapses. Each synapse is assigned a weight that decides how important its signal is. Imagine that the input and the hidden layer are train stations, the synapses are train tracks, and the signal is a train. The train rides the tracks to get from one station to another. With a lot of trains going through, a backup occurs on the track. To get the trains flowing smoothly again, each one is assigned a weight that lets the important trains go through first, followed by the less important ones.


(2:47) Each circle in the hidden layer represents a single neuron. In the neuron, all the values linked to that neuron are multiplied by their weights and added together. After the weighted values are summed, an activation function takes place.
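
A minimal sketch of what one neuron computes, with made-up inputs and weights (bias terms are left out to mirror the description above):

import numpy as np

inputs = np.array([0.2, 0.7, -0.4])     # values arriving over this neuron's synapses
weights = np.array([0.9, -0.3, 0.5])    # one weight per synapse

weighted_sum = np.dot(inputs, weights)  # multiply each value by its weight, then add
# an activation function is then applied to weighted_sum (see the functions below)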


(3:08) One type of activation function is called the Threshold Function. On the X-axis is the sum of the weighted values, and the Y-axis runs from 0 to 1. The function gives an output of 0 if the sum of the weighted values is less than 0, and an output of 1 if the sum is greater than or equal to 0. This function is most commonly used for yes-or-no circumstances.
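
A sketch of the Threshold Function exactly as just described:

def threshold(weighted_sum):
    # 1 if the sum of the weighted values is 0 or more, otherwise 0: a yes/no answer
    return 1 if weighted_sum >= 0 else 0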


(3:35) Another type of activation function is called the Sigmoid Function. Just like the Threshold Function, the X-axis is the sum of the weighted values and the Y-axis runs from 0 to 1. This function has a gradual progression, unlike the Threshold Function, which jumps from one value to the other right away. As the sum of the weighted values becomes very negative, the output approaches 0, and as the sum becomes very positive, the output approaches 1. This function is most commonly used to predict probability.
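
A sketch of the Sigmoid Function, using its standard formula 1 / (1 + e^(-x)) (the formula is not stated in the episode, but this is the usual definition):

import numpy as np

def sigmoid(weighted_sum):
    # squashes any weighted sum into the range 0 to 1 along a gradual S-shaped curve
    return 1 / (1 + np.exp(-weighted_sum))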


(4:13) The third type of activation function is called the Rectifier Function. Just like the Threshold Function and the Sigmoid Function, the X-axis is the sum of the weighted values, but the Y-axis has no limit and continues to go up. The function outputs 0 for any negative sum and then increases linearly as the sum increases. It is one of the most popular activation functions used in Neural Networks.
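
A sketch of the Rectifier Function:

def rectifier(weighted_sum):
    # 0 for any negative sum; for positive sums, the output grows with no upper limit
    return max(0.0, weighted_sum)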


(4:38) The last type of activation function is called the Hyperbolic Tangent Function. Just like the previous functions, the X-axis is the sum of the weighted values, but on this function the Y-axis runs from -1 to 1. It is just like the Sigmoid Function in that it has a gradual progression, but it also goes into the negatives.
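
A sketch of the Hyperbolic Tangent Function, using NumPy's built-in tanh:

import numpy as np

def hyperbolic_tangent(weighted_sum):
    # like the sigmoid's gradual curve, but the output runs from -1 to 1
    return np.tanh(weighted_sum)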


(5:04) To train a Neural Network, there are two values to compare: the output value and the actual value. The output value is the value the Neural Network gives out based on the input values. The actual value is the value that should come out of the Neural Network. After the inputs go through the Neural Network and produce the output value, the cost function is computed.


(5:30) The cost function is one half of the squared difference between the output value and the actual value. It measures the error the Neural Network made; the lower its value, the more accurate the Neural Network is. The cost is fed back into the Neural Network and the weights are updated, and this process repeats until the cost function gets as close to 0 as possible.
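
That cost for a single training example, directly as described:

def cost(output_value, actual_value):
    # one half of the squared difference between the network's output and the actual value
    return 0.5 * (output_value - actual_value) ** 2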


(6:31) Gradient Descent is a way to lower the cost function and find the correct weight. Imagine a ball sitting at the position of the current weight on the cost curve. Gradient Descent starts at one random weight and finds the slope at that point. If the slope is negative, the curve is going downhill, so the ball rolls down. When the ball stops at a point, the slope is found again, and the process repeats until the ball reaches the bottom of the curve, the minimum. This is basically how Gradient Descent finds the weight that works.
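
A minimal sketch of Gradient Descent on a single weight, using a made-up toy cost curve whose minimum sits at w = 3 (the starting weight and learning rate are also made up for illustration):

def cost_slope(w):
    # slope of a toy cost curve C(w) = (w - 3)^2, which bottoms out at w = 3
    return 2 * (w - 3)

w = 10.0              # the "ball" starts at a random weight
learning_rate = 0.1   # how far the ball rolls on each step

for _ in range(100):
    w -= learning_rate * cost_slope(w)  # roll downhill, against the slope

print(w)  # ends up very close to 3, the weight with the lowest cost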


(7:11) Unsurprisingly, the concept of Neural Networks started as a model of how neurons act in the brain, called 'connectionism', which used linked circuits to simulate intelligent behavior. In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts modeled a simple neural network with electrical circuits. In his book The Organization of Behavior (1949), Donald Hebb took the idea further, proposing that neural pathways improve over each successive use, particularly between neurons that tend to fire at the same time, thus beginning the long journey towards quantifying the complex processes of the brain.

(7:47) We will be back after a short break, but stay tuned, as we will be talking with Aniket Majumder about the future and ethics of AI.


(7:55) Break


(8:07) Talk with Aniket! (No transcript available for this segment)


(29:16) Neural Networks take in inputs and process them through activation functions suited to the task at hand. To train, a Neural Network tries to lower its cost function as close to 0 as possible with methods like Gradient Descent. Neural Networks help complete many tasks that involve finding correlations in data, and they are the building blocks on which other, more specific models, such as RNNs and CNNs, are built: RNNs are used to find correlations in data over time, while CNNs are used to find correlations in images. Hence Neural Networks form a key foundation of Artificial Intelligence.
