Monday, January 03, 2022

Bias and Variance in ML

Before we embark on machine learning, it is important to understand basic concepts around bias, variance, overfitting and underfitting. 

This video from StatQuest gives an excellent overview of these concepts: https://youtu.be/EuBBz3bI-aA

Another good article explaining these concepts: https://towardsdatascience.com/understanding-the-bias-variance-tradeoff-165e6942b229

Bias is the difference (i.e., the error) between the average prediction made by the ML model and the real data in the training set. Bias shows how well the model matches the training dataset: a low-bias model will closely match the training data.

Variance refers to the amount by which the predictions would change if we fit the model to a different training data set. A low variance model will produce consistent predictions across different datasets. 
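
One way to make these two definitions concrete is to refit the same model class to many resampled training sets, then measure how far the average prediction sits from the truth (bias) and how much the predictions scatter around that average (variance). The sketch below is a minimal NumPy illustration; the sin(x) "true" function, the noise level, and the polynomial degrees are all made up for the example.

import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Hypothetical "true" function for the example; in real problems it is unknown.
    return np.sin(x)

x_grid = np.linspace(0, 2 * np.pi, 50)  # fixed points on which to compare predictions

def bias_variance(degree, n_datasets=200, n_points=30, noise=0.3):
    preds = []
    for _ in range(n_datasets):
        # Draw a fresh training set from the same process each time.
        x = rng.uniform(0, 2 * np.pi, n_points)
        y = true_fn(x) + rng.normal(0, noise, n_points)
        coeffs = np.polyfit(x, y, degree)        # fit a polynomial of the given degree
        preds.append(np.polyval(coeffs, x_grid))
    preds = np.array(preds)
    # Squared bias: how far the *average* prediction sits from the truth.
    bias_sq = np.mean((preds.mean(axis=0) - true_fn(x_grid)) ** 2)
    # Variance: how much predictions scatter across the different training sets.
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

for degree in (1, 4, 10):
    b, v = bias_variance(degree)
    print(f"degree={degree:2d}  bias^2={b:.3f}  variance={v:.3f}")

Running this, you should see the straight line (degree 1) come out with high bias and low variance, and the degree-10 polynomial with the opposite, which is exactly the trade-off discussed next.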

Ideally, we want a model with both low bias and low variance, but in practice we usually have to trade one off against the other. Hence we need to find a sweet spot between a model that is too simple and one that is too complex.
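
For squared-error loss this trade-off can be made precise. The expected test error decomposes as

    Expected test error = Bias² + Variance + Irreducible error (σ²)

where σ² is the noise inherent in the data that no model can remove. Making the model more complex shifts error out of the bias term and into the variance term; the total is minimized somewhere in between. (The Towards Data Science article linked above walks through this decomposition.)
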
Overfitting means your model has low bias but high variance: it fits the training dataset too closely, including its noise.
Underfitting means your model has high bias but low variance: it is too simple to capture the underlying pattern.
In other words, a model with very few parameters tends toward high bias and low variance (underfitting), while a model with a large number of parameters tends toward low bias and high variance (overfitting). We need to find the right balance, neither overfitting nor underfitting the data; the sketch below makes this concrete.
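
To see this on a toy problem, the sketch below fits polynomials of increasing degree to one noisy training set and compares training error with error on held-out test data. Everything here is invented for illustration (the sin(x) target, the noise level, the particular degrees); it is a sketch under those assumptions, not a recipe.

import numpy as np

rng = np.random.default_rng(1)

# One noisy training set and a held-out test set drawn from the same
# hypothetical process: y = sin(x) + Gaussian noise.
x_train = rng.uniform(0, 2 * np.pi, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.3, 30)
x_test = rng.uniform(0, 2 * np.pi, 200)
y_test = np.sin(x_test) + rng.normal(0, 0.3, 200)

for degree in (1, 4, 12):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on training data only
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

# Typical pattern: degree 1 underfits (both errors high), degree 12 overfits
# (training error very low, test error worse), and a middle degree does best.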
