The bias-variance trade-off refers to the point where adding model complexity (flexibility) only fits noise. The training error keeps going down, as it must, but the test error starts to go up. Beyond this point the model overfits.
When the nature of the problem changes, the trade-off changes with it.
The ingredients of prediction error are bias and variance: together they give the prediction error, which can be decomposed as:
<math>e^2 = \operatorname{var}(\text{model}) + \operatorname{var}(\text{chance}) + \operatorname{bias}^2</math>
where var(model) is the variance of the fitted model across training sets, var(chance) is the irreducible noise variance, and bias² is the squared difference between the average fit and the true function.
As the flexibility (order of complexity) of <math>f</math> increases, its variance increases and its bias decreases, so choosing the flexibility based on average test error amounts to a bias-variance trade-off.
We want to find the model complexity that gives the smallest test error.
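A minimal sketch of that selection step (assuming, as above, noisy samples from <math>\sin(x)</math> and polynomial fits as the model family): sweep the degree, compute training and held-out test error, and pick the degree with the smallest test error.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, 200)
y = np.sin(x) + rng.normal(0, 0.3, x.size)

# Hold out half the data as a test set.
x_tr, y_tr = x[:100], y[:100]
x_te, y_te = x[100:], y[100:]

errors = {}
for degree in range(1, 11):
    coefs = np.polyfit(x_tr, y_tr, degree)
    train_mse = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
    errors[degree] = (train_mse, test_mse)

# Choose the complexity with the smallest test error,
# not the smallest training error.
best = min(errors, key=lambda d: errors[d][1])
print("chosen degree:", best)
```

Training error keeps shrinking as the degree grows, but the held-out test error bottoms out at an intermediate complexity, which is the one we keep.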
Model Complexity = Flexibility