Data Mining - Training Error

Training error is the prediction error we get when applying the model to the same data on which it was trained.

Training error is much easier to compute than test error.
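As a minimal sketch of how easy it is to compute, the snippet below fits a line to some hypothetical noisy data and measures the mean squared error on that same training data (the data and model choice here are illustrative, not from the article):

```python
import numpy as np

# Hypothetical training data: a noisy linear relationship
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

# Fit a straight line to the training data
coeffs = np.polyfit(x, y, deg=1)
y_hat = np.polyval(coeffs, x)

# Training error: the error of the model on the very data it was fit to
training_error = np.mean((y - y_hat) ** 2)
print(f"training MSE: {training_error:.3f}")
```

No held-out data is needed, which is exactly why training error is so cheap to compute compared to test error.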

Training error is often lower than test error because the model has already seen the training set: it fits the training data more closely than it will fit new data, so the error on the training set understates the error we should expect on a test set.

And the more we overfit, that is, the harder we fit the training data, the lower the training error looks. The test error, on the other hand, can be quite a bit higher.
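This gap can be seen by increasing model flexibility. The sketch below (an illustrative setup, not from the article) fits polynomials of growing degree to a small noisy sample: training error keeps shrinking as the degree grows, while test error on fresh data from the same distribution does not follow it down:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Noisy samples from a sine curve (hypothetical ground truth)."""
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)
    return x, y

x_train, y_train = make_data(20)   # small training set
x_test, y_test = make_data(200)    # independent test set

def mse(coeffs, x, y):
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

train_errs, test_errs = [], []
for degree in (1, 3, 15):
    c = np.polyfit(x_train, y_train, deg=degree)
    train_errs.append(mse(c, x_train, y_train))
    test_errs.append(mse(c, x_test, y_test))
    print(f"degree {degree:2d}: train MSE {train_errs[-1]:.4f}, "
          f"test MSE {test_errs[-1]:.4f}")
```

The degree-15 fit chases the noise in the 20 training points, so its training error is near zero while its test error stays well above it.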

