The test data set is used in the last step, just before we're about to put our model into production: is it going to give me similar performance there? Okay. You know why we do this? To prevent what are called data leaks. Data leaks can happen in different ways. One of the ways a data leak happens is when you split your data into a training set and a test set, and then use that test data for hyperparameter tuning. So, what are hyperparameters? Can you give an example of a hyperparameter in KNN? Yes, the number of neighbors. Okay, and there are hyperparameters in a decision tree too.
The depth of the decision tree is a hyperparameter. Hyperparameters are an integral part of your model definition. When you define a model, the model is created with some values for these hyperparameters. When I go and change the hyperparameter values, whatever model I create, that model subsumes the new values of these hyperparameters. Okay, so hyperparameters are an integral part of your model object; they define the model. Now, I've got my training set and test set. I build a model on the training set and test it on the test set, and the accuracy is not good.
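The point that hyperparameters are fixed at model-creation time and become part of the model object can be sketched like this (a toy illustration using scikit-learn, not the lecture's exact notebook; the particular values 5 and 3 are my own):

```python
# Hyperparameters such as n_neighbors (KNN) or max_depth (decision tree)
# are passed in when the model object is created, and the object carries
# them from then on -- they are part of the model definition.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

knn = KNeighborsClassifier(n_neighbors=5)   # hyperparameter: number of neighbors
tree = DecisionTreeClassifier(max_depth=3)  # hyperparameter: depth of the tree

print(knn.get_params()["n_neighbors"])  # 5
print(tree.get_params()["max_depth"])   # 3
```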
I'm going to tweak the hyperparameters to bring higher accuracy. I do a grid search and I find the hyperparameter values that give me a good score. It's like building a model again using the new hyperparameter values. What is the use of testing this model on the same test data? Because your hyperparameter values were a function of the test data, your model has obviously already "experienced" the patterns in the test data based on which you tweaked the hyperparameters. Running this model on the test data again to test it: what is the use? It is like an exam question paper leak. Right?
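A minimal sketch of the fix for the leak described above: tune hyperparameters only on the training data (here via cross-validated grid search on the iris dataset, my own illustrative choice), and touch the held-out test set exactly once at the end.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grid search cross-validates within the training data only,
# so the test set plays no part in choosing n_neighbors.
grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={"n_neighbors": [1, 3, 5, 7]})
grid.fit(X_train, y_train)

print(grid.best_params_)
print(grid.score(X_test, y_test))  # one final, honest evaluation
```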
The model that gives a good performance here may not give you good performance in the production world. So when we tune hyperparameters, we always split our data into three: train the model on one part, tweak the hyperparameters on the validation set, and once I'm happy with the performance, ask: is it going to give me similar performance in production? For that, you use the test set. That piece is meant only for final testing. Testing on real production-like data is usually a luxury we rarely have access to. Usually we don't have access to that, but if you have access to that facility, please use it; that's the best thing. We generally release the model in an alpha or beta phase, which is similar to what you will see in production.
Sometimes we don't have sufficient data for us to split it into three. Yesterday I told you that by splitting your data you are modifying the distributions, and hence you're introducing biases. So when your data is not sufficiently large for you to split it into three without losing too much of it, you go for cross-validation. All right. Now, the method I'm going to show you here is good only in the class: don't do this in your capstone projects, and don't do this in production. Splitting the data this way into training and test sets is only for the classroom. All right, okay, let's move on; I'll run the same cell again.
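Cross-validation, mentioned above as the alternative when the data is too small for a three-way split, can be sketched like this (illustrative toy data via the iris dataset, not the lecture's wine dataset):

```python
# 5-fold cross-validation: each fold takes a turn as the validation set,
# so all the data is used for both training and validation without
# permanently carving off a separate validation split.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
print(scores.mean())  # average validation accuracy over the 5 folds
```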
What you do is you run a train_test_split on the X and the y. The results of that train_test_split we call the training data, X_train and y_train, and the test data, X_test and y_test; you can give any names you like. Then you run the train_test_split again, on those two: I do a train_test_split on X_train and y_train.
This time, I capture the output as X_train, X_val, y_train, and y_val, which means you run this train_test_split twice. If you do this, then it's okay; if you don't use this second train_test_split and tune on the test set instead, then you have a data leak. Okay, coming back to the code. We're going to split our data into a train set and a test set; let's go and do it. You can check what train_test_split does: it randomly splits the data into a training set and a test set. train_test_split internally uses a random function; it randomly selects data to make the training set and the test set. That's why, when you do this y_test (I'm just printing the first 10 labels in y_test), you see a combination of zeros and ones, even though we stacked the two datasets and did not mix them. The train_test_split command randomly picks data to create the training set and test set; that's why you see the labels vary. All right.
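The two-step split described above can be sketched like this (toy data and the specific split sizes are my own; the lecture's notebook may use different proportions):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)  # 50 toy samples, 2 features
y = np.array([0, 1] * 25)

# First split: carve out the held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Second split: carve a validation set out of the remaining training data.
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 30 10 10
```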
Next, we're going to standardize the data. You know what standardizing is: the raw numbers coming from your file are rescaled so that each feature has mean 0 and standard deviation 1, for which I'm using StandardScaler. What StandardScaler does is take x minus mu and divide by sigma; because it uses the standard deviation, that's where the name StandardScaler comes from. So let's fit the StandardScaler and use it to transform our X_train and X_test, both. Do not apply your StandardScaler on the output variables. A lot of people get confused here: should we scale our output variables also? No. The output variables have to be predicted based on the input variables.
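A minimal sketch of the scaling step just described (my own toy numbers): fit the scaler on the training inputs only, apply the same transform to both train and test, and never touch y.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test = np.array([[2.5], [3.5]])

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)  # learns mu and sigma from train only
X_test_s = scaler.transform(X_test)        # reuses the train set's mu and sigma

# After scaling, the training features have mean ~0 and std ~1.
print(X_train_s.mean(), X_train_s.std())
```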
What you scale is the X in y = f(X). This is the model, and what it predicts is the y part. We can convert y with one-hot encoding and all those things, that's fine, but you should not scale it. Okay? The same question comes up when people do PCA, principal component analysis. Principal component analysis is done on the inputs, not on the output; if you're doing PCA, remove the output from your analysis first, before you do PCA. Okay, coming back: we fit the StandardScaler. Look at the size now: all the records, the red and white wine data combined. Now let's get into what we want to do: build a model using a deep neural network. If I were using the earlier version of TensorFlow, then I wouldn't be saying "from tensorflow import keras"; I'd simply download keras and then import Sequential from keras.
Now, since we're using TensorFlow 2.0, Keras comes bundled with TensorFlow 2.0. That's why here I say "from tensorflow.keras", not from the standalone keras. Don't mix the two; if you mix them, down the line some of these operations will give you a problem. So from tensorflow.keras we're importing something called Sequential. What is Sequential? There are two ways of building your deep neural network models: one is called the sequential approach, which is what we're going to use; the other one is called the functional approach.
The functional approach is much more versatile, much more flexible, than the sequential one, but sequential is much simpler to learn first. Okay, and the functional methodology is also very simple: if you first learn sequential, then you'll see that between writing functional and sequential code there's hardly any difference, but the functional methodology gives you much more flexibility than sequential. So we'll start with sequential. What I'm going to do is import Sequential.
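A minimal sketch, assuming TensorFlow 2.x: import Keras via tensorflow.keras (not the standalone keras package), and note that the same small network can be written in either style. The layer sizes here, and the 11-feature input shape, are my own illustrative choices, not the lecture's model.

```python
from tensorflow.keras import Sequential, Model
from tensorflow.keras.layers import Dense, Input

# Sequential style: a plain stack of layers, one after another.
seq_model = Sequential([
    Input(shape=(11,)),           # e.g. 11 input features
    Dense(8, activation="relu"),
    Dense(1, activation="sigmoid"),
])

# Functional style: the same network, with the wiring made explicit.
inputs = Input(shape=(11,))
x = Dense(8, activation="relu")(inputs)
outputs = Dense(1, activation="sigmoid")(x)
func_model = Model(inputs=inputs, outputs=outputs)

# Both describe an identical architecture.
print(seq_model.count_params() == func_model.count_params())  # True
```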