Pruning is a general technique for guarding against overfitting, and it can be applied to structures other than trees, such as decision rules.
A decision tree is pruned in the hope of obtaining a tree that generalizes better to independent test data. (The pruned tree may perform worse on the training data, but generalization is the goal.)
See Information gain and Overfitting for an example.
Sometimes simplifying a decision tree gives better results.
How to prune:
- Don’t continue splitting if the nodes get very small: minimum_number_cases_that_reach_a_leaf
- Build full tree and then work back from the leaves, applying a statistical test at each stage (Weka: confidenceFactor)
- Sometimes it’s good to prune an interior node, raising the subtree beneath it up one level (subtreeRaising, default true)
- Univariate vs. multivariate decision trees (Single vs. compound tests at the nodes)
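The second bullet (build the full tree, then work back from the leaves) can be sketched with reduced-error pruning against a held-out set. This is deliberately simpler than J48's confidence-based statistical test, but it follows the same bottom-up pattern: prune the children first, then check whether collapsing the current node into a leaf hurts held-out accuracy. The tree encoding, data, and function names below are invented for the demo.

```python
def predict(tree, x):
    """Follow a nested-dict tree down to a leaf label."""
    if 'leaf' in tree:
        return tree['leaf']
    branch = tree['lo'] if x <= tree['thr'] else tree['hi']
    return predict(branch, x)

def errors(tree, rows):
    """Count misclassified (x, label) pairs."""
    return sum(predict(tree, x) != y for x, y in rows)

def majority(rows):
    labels = [y for _, y in rows]
    return max(set(labels), key=labels.count)

def prune(tree, rows):
    """Bottom-up reduced-error pruning: prune the children first, then
    replace this subtree with a leaf if that does not increase error
    on the held-out rows."""
    if 'leaf' in tree or not rows:
        return tree
    lo_rows = [r for r in rows if r[0] <= tree['thr']]
    hi_rows = [r for r in rows if r[0] > tree['thr']]
    tree = {'thr': tree['thr'],
            'lo': prune(tree['lo'], lo_rows),
            'hi': prune(tree['hi'], hi_rows)}
    leaf = {'leaf': majority(rows)}
    if errors(leaf, rows) <= errors(tree, rows):
        return leaf  # collapsing is at least as good on held-out data
    return tree

# An overfit tree: the subtree at thr=8 memorized a noisy 'b' point.
full = {'thr': 5, 'lo': {'leaf': 'a'},
        'hi': {'thr': 8, 'lo': {'leaf': 'b'}, 'hi': {'leaf': 'a'}}}
holdout = [(3, 'a'), (4, 'a'), (7, 'a'), (9, 'a'), (10, 'a')]
pruned = prune(full, holdout)
```

On this made-up hold-out set the memorized `'b'` branch costs an error, so the subtree is collapsed; the root then collapses too, leaving a single leaf.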
Minimum number of cases that reach a leaf
One simple way of pruning a decision tree is to impose a minimum on the number of training examples that reach a leaf.
Weka: This is controlled by J48's minNumObj parameter (default value 2) with the unpruned switch set to True. (The terminology is a little confusing: if unpruned is deselected, J48 uses its other pruning mechanisms as well.)
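The effect of a minimum leaf size can be illustrated with a toy tree builder (a sketch, not Weka's algorithm): the greedy splitter simply refuses any split that would leave fewer than `min_leaf` cases on either side, analogous to J48's minNumObj. The dataset and all names are invented for the demo; three labels near the edges are deliberately "noisy".

```python
def gini(labels):
    """Gini impurity of a non-empty list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def build(rows, min_leaf):
    """Greedy tree over (x, label) rows with one numeric feature.
    Leaves are {'leaf': majority_label}; splits are {'thr', 'lo', 'hi'}."""
    labels = [y for _, y in rows]
    if len(rows) < 2 * min_leaf or gini(labels) == 0.0:
        return {'leaf': max(set(labels), key=labels.count)}
    best = None
    for x, _ in rows:  # try every observed value as a threshold
        left = [r for r in rows if r[0] <= x]
        right = [r for r in rows if r[0] > x]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue  # would create a leaf below the minimum size
        score = (len(left) * gini([y for _, y in left])
                 + len(right) * gini([y for _, y in right])) / len(rows)
        if best is None or score < best[0]:
            best = (score, x, left, right)
    if best is None:
        return {'leaf': max(set(labels), key=labels.count)}
    _, thr, left, right = best
    return {'thr': thr, 'lo': build(left, min_leaf),
            'hi': build(right, min_leaf)}

def n_leaves(t):
    return 1 if 'leaf' in t else n_leaves(t['lo']) + n_leaves(t['hi'])

# 20 points: mostly 'a' below x=10 and 'b' above, with flips at 3, 12, 15.
data = list(enumerate('aaabaaaaaabbabbabbbb'))
small = n_leaves(build(data, 1))   # grows until every leaf is pure
large = n_leaves(build(data, 5))   # at most 20 / 5 = 4 leaves possible
```

With `min_leaf=1` the tree chases every noisy point to purity; with `min_leaf=5` it cannot, so the tree is much smaller, mirroring the minNumObj behaviour described above.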
With the breast cancer data set:
| minNumObj | Number of leaves | Size of tree |
| --- | --- | --- |
The number of leaves in the tree decreases very rapidly as the minimum leaf size is allowed to grow, and the overall tree size follows the same trajectory.
| confidenceFactor (J48) | Accuracy |
| --- | --- |
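The confidenceFactor drives C4.5-style pessimistic pruning: the observed error rate f at a node is replaced by an upper confidence limit on the true error rate, and a subtree is pruned when the leaf's pessimistic estimate is no worse than the subtree's. A sketch of that estimate, following the standard formula for C4.5's pessimistic error (the default confidence 0.25 corresponds to a normal deviate z of about 0.69):

```python
import math

def pessimistic_error(f, n, z=0.69):
    """Upper confidence limit on the true error rate at a node.
    f: observed error rate, n: training instances reaching the node,
    z: standard-normal deviate for the chosen confidence
    (confidenceFactor 0.25 -> z ~ 0.69; smaller factors give larger z)."""
    return ((f + z * z / (2 * n)
             + z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n)))
            / (1 + z * z / n))
```

A smaller confidenceFactor means a larger z, hence a more pessimistic error estimate and more aggressive pruning; as n grows, the estimate shrinks back toward the observed rate f.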