# Compare overfitting and underfitting with an intuitive explanation of the bias-variance tradeoff



## Overfitting, underfitting, and the bias-variance tradeoff

May 19, 2019

Overfitting, underfitting, and the bias-variance tradeoff are foundational concepts in machine learning. A model is **overfit** if performance on the training data, used to fit the model, is substantially better than performance on a test set, held out from the model training process. For example, the prediction error of the training data may be noticeably smaller than that of the testing data. Comparing model performance metrics between these two data sets is one of the main reasons that data are split for training and testing. This way, the model’s capability for predictions with new, unseen data can be assessed.
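To make this concrete, here is a minimal sketch of comparing training and test error; the synthetic quadratic data and the intentionally over-complex degree-15 polynomial are illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quadratic data with noise, split into train and test halves
x = np.sort(rng.uniform(0, 11, 40))
y = (-x + 2) * (x - 9) + rng.normal(0, 3, 40)
x_train, x_test = x[::2], x[1::2]
y_train, y_test = y[::2], y[1::2]

# Deliberately overfit with a high-degree polynomial
coeffs = np.polyfit(x_train, y_train, 15)

def mse(xs, ys):
    """Mean squared prediction error of the fitted polynomial."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

train_error = mse(x_train, y_train)
test_error = mse(x_test, y_test)
# A large gap between test_error and train_error signals overfitting
```

The held-out points sit between the training points, so the model's learned wiggles, which track training noise, show up directly as test error.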

When a model overfits the training data, it is said to have **high variance**. One way to think about this is that whatever variability exists in the training data, the model has “learned” this very well. In fact, too well. A model with high variance is likely to have learned the noise in the training set. Noise consists of the random fluctuations, or offsets from true values, in the features (independent variables) and response (dependent variable) of the data. Noise can obscure the true relationship between features and the response variable. Virtually all real-world data are noisy.

If there is random noise in the training set, then there is probably also random noise in the testing set. However, the specific values of the random fluctuations will be different from those of the training set because, after all, the noise is random. The model cannot anticipate the fluctuations in the new, unseen data of the testing set. This is why the testing performance of an overfit model is lower than its training performance.

Overfitting is more likely in the following circumstances:

- There are a large number of features available, relative to the number of samples (observations). The more features there are, the greater the chance of discovering a spurious relationship between the features and the response.
- A complex model is used, such as a deep decision tree or a neural network. Models like these effectively engineer their own features and have the opportunity to develop more complex hypotheses about the relationship between features and the response, making overfitting more likely.

At the opposite end of the spectrum, if a model is not fitting the training data very well, this is known as **underfitting**, and the model is said to have **high bias**. In this case, the model may not be complex enough, in terms of the features or the type of model being used.

Let’s examine concrete examples of underfitting, overfitting, and the ideal that sits in between, by fitting polynomial models to synthetic data in Python.

```python
# Import packages
import numpy as np               # numerical computation
import matplotlib.pyplot as plt  # plotting package

# Next line helps with rendering plots
%matplotlib inline

import matplotlib as mpl         # additional plotting functionality
```

## Underfitting and overfitting with polynomial models

First, we create the synthetic data. We:

- Choose 20 points randomly on the interval [0, 11). This includes 0 but not 11, technically speaking.
- Sort them so that they're in order.
- Put them through a quadratic transformation and add some noise: y = (−x + 2)(x − 9) + ε, where ε is normally distributed noise with mean 0 and standard deviation 3.

Then we make a scatter plot of the data.

```python
np.random.seed(seed=9)
n_points = 20
x = np.random.uniform(0, 11, n_points)
x = np.sort(x)
y = (-x + 2) * (x - 9) + np.random.normal(0, 3, n_points)

mpl.rcParams['figure.dpi'] = 400
plt.scatter(x, y)
plt.xticks([])
plt.yticks([])
plt.ylim([-20, 20])
```

This looks like the shape of a parabola, as expected for a quadratic transformation of x. However, we can see the noise in the fact that not all the points appear to lie perfectly on the parabola.

In our synthetic example, we know the **data generating process**: the response variable y is a quadratic transformation of the feature x. In general, when building machine learning models, the data generating process is not known. Instead, several candidate features are proposed, a model is proposed, and an exploration is made of how well these features and this model can explain the data.

In this case, we would likely plot the data, observe the apparent quadratic relationship, and use a quadratic model. However, for the sake of illustration of an **underfit model**, what does a linear model for these data look like?

We can fit a polynomial model of degree 1, in other words a linear model, with NumPy's `polyfit`:

```python
lin_fit = np.polyfit(x, y, 1)
lin_fit
```

```
array([ 0.44464616, -0.61869372])
```

This has produced the slope and intercept of the line of best fit for these data. Let's plot it and see what it looks like. We can calculate the linear transformation of the feature x using these fitted coefficients.
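Picking up that thread, here is a minimal, plotting-free sketch of evaluating the fitted line with `np.polyval` (it recreates the data and seed from the snippets above so it runs on its own; the overlay itself would use `plt.plot`):

```python
import numpy as np

# Recreate the data and the degree-1 fit from the snippets above
np.random.seed(seed=9)
n_points = 20
x = np.sort(np.random.uniform(0, 11, n_points))
y = (-x + 2) * (x - 9) + np.random.normal(0, 3, n_points)
lin_fit = np.polyfit(x, y, 1)

# np.polyval evaluates the fitted polynomial: slope * x + intercept
y_lin = np.polyval(lin_fit, x)

# To overlay on the scatter plot: plt.scatter(x, y); plt.plot(x, y_lin)
```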

Source: **www.steveklosterman.com**


## Elucidating Bias, Variance, Under-fitting, and Over-fitting.

**Overfitting, underfitting, and the bias-variance tradeoff** are foundational concepts in machine learning. They are important because they explain the state of a model based on its performance. The best way to understand these terms is to see them as a tradeoff between the **bias** and the **variance** of the model. Let's understand the phenomena of overfitting and underfitting.

**Overfitting** occurs when a statistical model or machine learning algorithm **captures the noise** of the data. Intuitively, overfitting occurs when the model or the algorithm fits the data too well. Specifically, overfitting occurs if the model or algorithm shows **low bias** but **high variance**. Overfitting is often a result of an excessively complicated model, and it can be prevented by fitting multiple models and using validation or cross-validation to compare their predictive accuracies on test data.
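The cross-validation idea mentioned above can be sketched with plain NumPy; the k-fold scheme, the synthetic quadratic data, and the candidate polynomial degrees are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy quadratic data, like the examples in this article
x = np.sort(rng.uniform(0, 11, 40))
y = (-x + 2) * (x - 9) + rng.normal(0, 3, 40)

def cv_error(degree, k=5):
    """Mean validation MSE across k folds for a polynomial of the given degree."""
    idx = rng.permutation(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)           # everything outside the fold
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[fold])
        errs.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errs))

cv_scores = {d: cv_error(d) for d in (1, 2, 10)}
# Cross-validation tends to favor degree 2, the true data generating process
```

Comparing `cv_scores` across candidate model complexities is exactly the "fit multiple models and compare on held-out data" recipe in the paragraph above.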

**Underfitting** occurs when a statistical model or machine learning algorithm **cannot capture the underlying trend** of the data. Intuitively, underfitting occurs when the model or the algorithm does not fit the data well enough. Specifically, underfitting occurs if the model or algorithm shows **low variance** but **high bias**. Underfitting is often a result of an excessively simple model.

Both overfitting and underfitting lead to **poor predictions** on new data sets.
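Both failure modes can be seen in one experiment; in this sketch (synthetic quadratic data and illustrative degree choices, not the author's figures), degree 1 underfits and degree 15 overfits, and both typically predict worse on new data than the correctly specified degree 2:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n=20):
    """One noisy sample from the quadratic data generating process."""
    x = np.sort(rng.uniform(0, 11, n))
    return x, (-x + 2) * (x - 9) + rng.normal(0, 3, n)

x_train, y_train = make_data()   # used to fit each model
x_test, y_test = make_data()     # new, unseen data

test_errors = {}
for degree in (1, 2, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    test_errors[degree] = float(np.mean((pred - y_test) ** 2))
# Typically: test_errors[2] is the smallest of the three
```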

Well, let's understand bias and variance in simpler terms (**very simple terms!**).

**What is Bias?**

Bias is the difference between the average prediction of our model and the correct value which we are trying to predict. A model with high bias pays very little attention to the training data and oversimplifies the model.

**Simple definition:** “the error that shows up on the training data!”

**What is Variance?**

Variance is the variability of model prediction for a given data point or a value that tells us the spread of our data. A model with high variance pays a lot of attention to training data and does not generalize on the data which it hasn’t seen before.

**Simple definition:** “the error that shows up on the test data!”
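Variance can also be seen directly: refit the same model class on many independently drawn training sets and measure how much its prediction at one fixed point spreads. This sketch assumes a synthetic quadratic truth and compares an illustrative simple model (degree 1) with a complex one (degree 15):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_and_predict(degree, x0=5.5, n=20):
    """Fit a polynomial to one fresh noisy training set; predict at x0."""
    x = np.sort(rng.uniform(0, 11, n))
    y = (-x + 2) * (x - 9) + rng.normal(0, 3, n)
    return np.polyval(np.polyfit(x, y, degree), x0)

# Refit each model on 200 independent training sets
preds_simple = np.array([fit_and_predict(1) for _ in range(200)])
preds_complex = np.array([fit_and_predict(15) for _ in range(200)])

# Spread of predictions across training sets = the model's variance
var_simple = float(preds_simple.var())
var_complex = float(preds_complex.var())
```

The complex model chases the noise of whichever training set it sees, so its predictions swing much more from one training set to the next.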

Well, to make the concepts clearer, I have divided them into two parts: bias and variance in the case of **Regression** models as well as **Classification** models.

Considering **Regression** models:

Figure 1: Bias and Variance for Regression Model

We can see clearly that Model-1 and Model-3 are **underfitting** and **overfitting**, respectively.

Model-1 has not captured the trends properly; the model is too simple, hence it's obvious that both the training and test accuracy will be hampered!

As we discussed earlier, **“bias shows up as error on the training set, while variance shows up as error on the test set!”** Model-1 will have low training and low test accuracy, i.e. it has high bias (**high training error**). Note, though, that its high test error stems from that same bias rather than from variance: an underfit model actually has low variance.

Similarly, for Model-3, the model has trained too well on the training data, which is why it fails on the test data (**low test accuracy**). Since the training accuracy for Model-3 is high and the test accuracy is low, Model-3 has low bias (**low training error**) and high variance (**high test error**).

Considering Model-2: as it is in the “**just right**” condition, the model performs well on the training set as well as the test set. The reason: the model has high training accuracy (**low bias, low training error**) and high test accuracy (**low variance, low test error**).

Now, let's consider the case of **Classification** models. Please have a look at the explained image below!

Figure 2: Bias and Variance for Classification Model

Here we have 3 Models, which have the following training and testing errors.

As we can see, our Classification Model-1 has a low training error (2%) but a high test error (18%). Following the concepts explained earlier, we can conclude the model has low bias (low training error) and high variance (high test error), i.e. **Model-1 is clearly overfitting**!

Similarly, we can conclude that Classification Model-2 is clearly an underfitting model. Coming to Model-3, this model should be considered the **most generalized, or most recommended,** model to train on!

Well, this was the explanation of underfitting, overfitting, bias, and variance for regression and classification models, respectively!

We are done with the explanation part; now let's have a look at the **graphical plotting** of these concepts. Please have a look at the figure below!


## Clearly Explained: What is Bias-Variance tradeoff, Overfitting & Underfitting

CANNOT EMPHASIZE ENOUGH HOW IMPORTANT THESE CONCEPTS ARE FOR ANYONE IN DATA SCIENCE! Explained with examples, it will be worth your time, read on :)


In this post, we will understand this age-old important stepping stone in the world of modeling and statistics in a way that even newbies can grasp it without difficulty.

Whenever we build a prediction model, even after all the adjustments and treatments, our predictions will generally be imperfect: there will be some nonzero difference between the predicted and the actual values. This difference is called the **prediction error**. We can decompose prediction error into two components: **Reducible and Irreducible error.**

Let us understand it using a real-life analogy. Whether a train driver is able to successfully lead the train to the desired destination depends on two factors: HIMSELF (reducible error: if he is alert and aware, he will not make any inadvertent mistakes) and EXTERNAL FACTORS (irreducible error: what if a stray animal ventures onto the rail track, what if another train coming from the opposite direction loses control, what if a suicidal person throws himself in front of his train, etc.).

Have a look at the following equation to summarize it all:

Prediction Error = Bias² + Variance + Irreducible Error

Now, we will understand the two components (Bias error and Variance error) of this equation before discussing the trade-off situation between the two that plagues every model out there.
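This decomposition can be checked numerically. The sketch below assumes a known quadratic truth and an intentionally too-simple degree-1 model (both illustrative choices): refitting on many independent training sets and watching the prediction at one query point separates the average miss (bias) from the spread (variance), with the noise variance as the irreducible floor.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return (-x + 2) * (x - 9)

x0, noise_sd = 5.5, 3.0

# Fit an (intentionally too simple) degree-1 model on many independent
# training sets and record its prediction at a fixed query point x0
preds = []
for _ in range(2000):
    x = np.sort(rng.uniform(0, 11, 20))
    y = true_f(x) + rng.normal(0, noise_sd, 20)
    preds.append(np.polyval(np.polyfit(x, y, 1), x0))
preds = np.array(preds)

bias_sq = float((preds.mean() - true_f(x0)) ** 2)  # (average miss)^2
variance = float(preds.var())                      # spread across training sets
irreducible = noise_sd ** 2                        # noise floor: cannot be reduced

# Expected squared prediction error at x0 is approximately their sum
expected_error = bias_sq + variance + irreducible
```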

Consider Mr. X, a freshman and aspiring data scientist who is currently interviewing with various firms to get his first breakthrough in the industry. He appeared for an interview one day and came back home really stressed and gloomy because it did not go well. His mother asked him all about it, and he shared that the interviewer had asked him far too advanced questions and judged him critically at every point without mincing words, which broke his confidence. His mother then soothed him by pointing out that “this was just not your day, tomorrow will be better”.

Now, from this story we can see that the mother did not ask her son to prepare more zealously than before in order to increase his chances of success. The reason is that she will always have an emotional **BIAS** towards her son and will always try to downplay his failures. If you try predicting her response to a person who comes to her for comfort without considering if the person is her son (immediate family) or not, your prediction will have a bias error. Now, the following pointers will help you understand how Bias affects statistical models and the technical definition to fall back upon:

**Real-life example for Bias**: I will quote a stock market example here. Let's say I built a fairly simple model that predicts the share price of a stock based on only 3–4 predictors. The model predicted that on 3rd Feb 2019 the share price per unit would be $4, but the actual price turned out to be $3.40 that day. Thus, the total error is the difference between the two numbers, i.e. $0.60, and the bias component contributes the most here because we used only a few predictors.

Bias error is the difference between the predicted data points and the actual data points which was caused because our model was oversimplified.

A model with high bias is too simple and has a low number of predictors. It is missing some other important predictors due to which it is unable to capture the underlying pattern of data. It misses how the features in the training data set relate to the expected output. It pays very little attention to the training data and oversimplifies the model. This leads to high error in training and test data.

Now, let's go back to the story once again. We saw that the interviewer asked high-difficulty questions of a freshman, i.e. he complicated a seemingly easy situation, and the unnecessarily complex questions dumbfounded Mr. X. Let's add two more facts here:

Mr. X prepared only for the entry-level questions, thus he couldn’t answer advanced questions.

The interviewer had only interviewed people for positions requiring 5–6 years of experience, thus, he had never interviewed a fresher before.

In this situation, the interviewer is not comfortable outside of his comfort zone. Mr. X is an unseen input that he doesn't know how to tackle, so he flounders by asking way too complicated questions. This is the perfect case to understand **VARIANCE**. When exposed to situations in his comfort zone (training data), the interviewer performs excellently, but a new experience (test data) threw him off track. Now you are ready to understand the technical definition of variance and its effects on models, as explained below:

Any model which has a very large number of predictors will end up being a very complex model that delivers very accurate predictions for the training data it has already seen, but this complexity makes generalization to unseen data very difficult, i.e. a high-variance model. Thus, this model will perform very poorly on test data.
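This can be demonstrated in the extreme: in the sketch below (an illustrative setup, with 25 predictors that are pure noise and completely unrelated to the response), least squares still fits the training set almost perfectly, yet predicts new data no better than chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# 30 samples, 25 purely random predictors: the response is independent
# of every feature, yet least squares can still "explain" the training set
n_train, n_feat = 30, 25
X_train = rng.normal(size=(n_train, n_feat))
y_train = rng.normal(size=n_train)
X_test = rng.normal(size=(200, n_feat))
y_test = rng.normal(size=200)

def design(X):
    return np.column_stack([np.ones(len(X)), X])  # add intercept column

# Ordinary least-squares fit of the many-predictor linear model
w, *_ = np.linalg.lstsq(design(X_train), y_train, rcond=None)

train_mse = float(np.mean((design(X_train) @ w - y_train) ** 2))
test_mse = float(np.mean((design(X_test) @ w - y_test) ** 2))
# train_mse is tiny (a spurious fit); test_mse stays large on unseen data
```

With nearly as many parameters as samples, the model memorizes the training noise, which is exactly the high-variance behavior described above.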
