[0:00] In this video, we will be implementing a neural network from scratch in Python. In the last video, we looked at gradient descent. We'll extend that code and write our own neural network class. This will be pure Python, with no TensorFlow or Keras, so it will be a lot of fun, and then we will do prediction using our own custom-built class. So please watch till the end. Let's start coding now. This is the notebook we created for our gradient descent tutorial in the last video. If you have not seen it, please watch it first, because it is a prerequisite for this session. To go over it very quickly: we imported an insurance dataset where, based on age and affordability, we decide whether a person will buy insurance or not. Then we built a simple neural network, which is nothing but logistic regression. You all know that logistic regression can be thought of as a simple neural network with just a single neuron, plus two neurons in the input layer. We built this network in Keras/TensorFlow first, did compile and fit, and then made some predictions. Then we replicated the exact same thing in plain Python by writing our own gradient descent function. That is where we stopped in the last video. In this video, we are going to write a neural network class that makes use of the method we wrote last time. So what is our end goal? Let me remove all these cells. In the end, we want our own custom class. Let's say I create a class called myNN; I want to be able to call a fit method on it, just like we call fit on a Keras model, and I want it to just work. After that, I want to create a predict method as well. So basically, I want the fit call to work, and then the predict call to work.
This should then behave the same as our Keras predict function, this one here. Okay, so let's write that class. As you all know, in Python you create a class by writing class myNN, and the constructor always takes self as an argument. So I just created a very, very simple class. Into this class I'm now going to put the gradient descent function, so let's move that function inside it. It's fairly simple: you copy-paste it, take care of the indentation, and add self as the first argument of the method. All right, this looks good now. What we want to do next is create a fit method. The first argument of every class method is self, and then here you have X, you have y, you have epochs, and the loss threshold.
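As a sketch, the class structure described so far might look like the code below. The class name myNN and the sigmoid_numpy helper follow the names used in the video; the exact method signatures are assumptions, and the method bodies are filled in over the rest of the video.

```python
import numpy as np

def sigmoid_numpy(x):
    # Elementwise sigmoid, as defined in the previous video's notebook
    return 1 / (1 + np.exp(-x))

class myNN:
    def __init__(self):
        # Weights and bias get real values once fit() has run gradient descent
        self.w1 = None
        self.w2 = None
        self.bias = None

    def fit(self, X, y, epochs, loss_threshold):
        # Will extract the feature columns and call self.gradient_descent
        ...

    def gradient_descent(self, age, affordability, y_true, epochs, loss_threshold):
        # The function from the last video, pasted in with self added
        ...
```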
[4:06] Pretty much whatever you have here. When you call the fit method, you supply X and y, which are X_train_scaled and y_train. From X, we need to get age and affordability, because the gradient descent function expects those. So let's call gradient descent: self.gradient_descent(...). What is the first argument? It is age, which will be X['age']; that is how you access any column in a DataFrame. Then X['affordability']. Then y_true is y, and the remaining arguments you can pass through as is. What you get as a result is a tuple: w1, w2, and bias. You can store those in class members, so I'll create self.w1; if you look at this awesome diagram, that's weight one. Similarly we'll store weight two, and we'll store b, the bias. Let's do all of that step by step. If you remember the last video's presentation, we initialized weight one and weight two to 1.
[5:49] And the bias we initialize to 0. I explained why we initialize this way in the previous video, hence watching the previous video is very important. Gradient descent returns those three values as a tuple, and in Python you can unpack a tuple like this: self.w1, self.w2, self.bias = self.gradient_descent(...). So now we have a fit method. You know what, I'm just going to run it, and I'll keep the epochs very small because I'm just testing this out. Here I create an object of the myNN class and call the fit method, supplying X_train_scaled, y_train, and so on. I get "myNN is not defined" because I did not run that cell, so Ctrl+Enter to run it. Then "X_train_scaled is not defined", so I need to execute the earlier cells one by one with Ctrl+Enter. Once I've executed all the cells, this is how the result looks. It is working perfectly: as expected, it ran five epochs, and at the end of every epoch we see values for weight one, weight two, bias, and so on. Now, since we want to run more epochs, this is too much printing, so I'll print this statement only at every 50th iteration. You can do that easily with a modulo check: when i is 50, 100, 150, and so on, it will print the trace line. Perfect. Let's quickly test it with 100 epochs. I need to execute that, so Ctrl+Enter, Ctrl+Enter again. Now it prints only at every 50th epoch, so there is less logging going on. Next, let's implement the predict method. We are mostly done with the most important part, which was determining the weights and bias.
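Putting the steps above into code, the fit method and the gradient descent it calls might look like this sketch. The column names 'age' and 'affordability', the learning rate of 0.5, and the log_loss helper are assumptions carried over from the previous video's notebook:

```python
import numpy as np
import pandas as pd

def sigmoid_numpy(x):
    return 1 / (1 + np.exp(-x))

def log_loss(y_true, y_predicted):
    # Clip predictions away from 0 and 1 to avoid log(0)
    eps = 1e-15
    y_predicted = np.clip(y_predicted, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_predicted)
                    + (1 - y_true) * np.log(1 - y_predicted))

class myNN:
    def fit(self, X, y, epochs, loss_threshold):
        # X is a DataFrame with 'age' and 'affordability' columns (names assumed)
        self.w1, self.w2, self.bias = self.gradient_descent(
            X['age'], X['affordability'], y, epochs, loss_threshold)

    def gradient_descent(self, age, affordability, y_true, epochs, loss_threshold):
        # Start with both weights at 1 and bias at 0, as in the previous video
        w1 = w2 = 1
        bias = 0
        rate = 0.5  # learning rate (assumed value)
        n = len(age)
        for i in range(epochs):
            weighted_sum = w1 * age + w2 * affordability + bias
            y_predicted = sigmoid_numpy(weighted_sum)
            loss = log_loss(y_true, y_predicted)

            # Gradients of log loss with respect to w1, w2 and bias
            w1d = (1 / n) * np.dot(np.transpose(age), (y_predicted - y_true))
            w2d = (1 / n) * np.dot(np.transpose(affordability), (y_predicted - y_true))
            bias_d = np.mean(y_predicted - y_true)

            w1 = w1 - rate * w1d
            w2 = w2 - rate * w2d
            bias = bias - rate * bias_d

            # Print a trace line only at every 50th epoch to cut down the logging
            if i % 50 == 0:
                print(f'Epoch:{i}, w1:{w1}, w2:{w2}, bias:{bias}, loss:{loss}')

            if loss <= loss_threshold:
                # Also log the final weights when we break out of the loop early
                print(f'Epoch:{i}, w1:{w1}, w2:{w2}, bias:{bias}, loss:{loss}')
                break
        return w1, w2, bias
```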
Okay, in the predict method, the argument will be X_test. What we are going to do follows from the fact that any neuron has two mathematical components: one is the weighted sum, the other is the sigmoid. So we'll do exactly that: the weighted sum, and then the sigmoid. How do you compute the weighted sum? Let's look at our picture: it is age multiplied by w1 plus affordability multiplied by w2, computed over the entire vector, and then of course you add a bias. So: self.w1 times X['age']. What is age? Well, X_test is a DataFrame which has both age and affordability. Then self.w2 times X_test['affordability'] (since the spelling is long, I'm just going to copy-paste it), and then self.bias. Once you have the weighted sum, you just call the sigmoid function, sigmoid_numpy, which by the way we defined here, and pass the weighted sum to it. And that's it; that's the end of our predict method. Now Ctrl+Enter, Ctrl+Enter. Maybe I'll run more epochs, say 500. And around epoch 350 it just broke out of the loop, so maybe I should print the trace here as well, so that when we break the loop I know what the weights were.
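Shown standalone, the predict method described above might look like the sketch below. The sigmoid_numpy helper and column names come from the earlier notebook, and the weight values in the usage example are just placeholders close to the ones the video reports; in the real class, fit() learns them:

```python
import numpy as np
import pandas as pd

def sigmoid_numpy(x):
    return 1 / (1 + np.exp(-x))

class myNN:
    def predict(self, X_test):
        # Weighted sum for every row: w1*age + w2*affordability + bias,
        # then squash it through the sigmoid to get a probability per person
        weighted_sum = (self.w1 * X_test['age']
                        + self.w2 * X_test['affordability']
                        + self.bias)
        return sigmoid_numpy(weighted_sum)

# Usage sketch with hand-set weights (roughly the values found in the video)
nn = myNN()
nn.w1, nn.w2, nn.bias = 5.05, 1.45, -2.91
X_test = pd.DataFrame({'age': [0.28, 0.61], 'affordability': [1, 1]})
print(nn.predict(X_test))
```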
[10:44] Okay, so it broke after 366 epochs, and these were the values of my weights. If you compare these weights with our TensorFlow model's weights, the coefficients and intercept, you will find that we got, not exactly, but close enough to the same weights as the TensorFlow model. TensorFlow gave 5.06, our own Python class gave 5.05; then 1.4 versus 1.45, and -2.95 versus -2.91. They are almost identical. Now I am going to do prediction using my custom model. Okay, this is what I get. And now the same prediction using my TensorFlow model: this model variable contains the TensorFlow model, and if you scroll up, this is that model. We call model.predict on the same X_test, and you will notice that we get almost the same values: 0.355 versus 0.355, 0.828 versus 0.829. Almost the same. So we successfully completed writing our own custom neural network to solve the insurance dataset problem. Now, if you want to write this class in a truly generic way, you would not have hard-coded column names such as age and affordability; you would have a DataFrame X and use its columns directly, and you would keep the weights w1, w2, and so on as a NumPy vector. Maybe in a future video we'll look into that generic implementation, but this video should give you a good idea of how to write your own neural network class from scratch. This is something they might ask in a data science or machine learning interview. So friends, please practice this code. I'm going to provide a link to this notebook in the video description below. Download the code, run it on your own, and if possible try to change the values in the dataset; I'm going to provide the CSV file as well. Change the values, see how it behaves, and compare the performance of your own custom neural network class with the Keras neural network class. Okay, thank you very much for watching.
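Putting all the pieces together, here is a self-contained sketch of the whole class, run end to end on a small hand-made dataset. The actual video uses the insurance CSV; the data values, column names, and learning rate below are illustrative assumptions:

```python
import numpy as np
import pandas as pd

def sigmoid_numpy(x):
    return 1 / (1 + np.exp(-x))

def log_loss(y_true, y_predicted):
    eps = 1e-15  # keep predictions away from 0 and 1 so log() is defined
    y_predicted = np.clip(y_predicted, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_predicted)
                    + (1 - y_true) * np.log(1 - y_predicted))

class myNN:
    def fit(self, X, y, epochs, loss_threshold):
        self.w1, self.w2, self.bias = self.gradient_descent(
            X['age'], X['affordability'], y, epochs, loss_threshold)

    def predict(self, X_test):
        weighted_sum = (self.w1 * X_test['age']
                        + self.w2 * X_test['affordability'] + self.bias)
        return sigmoid_numpy(weighted_sum)

    def gradient_descent(self, age, affordability, y_true, epochs, loss_threshold):
        w1 = w2 = 1      # both weights start at 1, as in the video
        bias = 0
        rate = 0.5       # assumed learning rate
        n = len(age)
        for i in range(epochs):
            y_predicted = sigmoid_numpy(w1 * age + w2 * affordability + bias)
            loss = log_loss(y_true, y_predicted)
            # Gradient step for each parameter
            w1 = w1 - rate * (1 / n) * np.dot(age, y_predicted - y_true)
            w2 = w2 - rate * (1 / n) * np.dot(affordability, y_predicted - y_true)
            bias = bias - rate * np.mean(y_predicted - y_true)
            if loss <= loss_threshold:
                break
        return w1, w2, bias

# Illustrative stand-in for the insurance CSV: age already scaled to 0-1
X = pd.DataFrame({'age': [0.22, 0.25, 0.47, 0.52, 0.46, 0.56, 0.18, 0.60],
                  'affordability': [1, 0, 1, 1, 1, 1, 0, 1]})
y = pd.Series([0, 0, 1, 1, 1, 1, 0, 1])

model = myNN()
model.fit(X, y, epochs=1000, loss_threshold=0.2)
print(model.predict(X))
```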
I will see you in the next video.
