The degree to which a system has no pattern is known as entropy. A high-entropy source is completely chaotic and unpredictable; such a source is said to produce true randomness.
Entropy is built on an information function, Information, that is additive over independent events:

<MATH> Information_{1,2}(p_1 p_2) = Information_1(p_1) + Information_2(p_2) </MATH>

The logarithm function (log) has exactly this property, which gives the standard definitions:

<MATH> I(x) = -\log_2(p_x) </MATH>

<MATH> Entropy = H(X) = E(I(X)) = \sum_{x}{ p_x I(x)} = - \sum_{x}{ p_x \log_2(p_x)} </MATH>
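As a minimal sketch, the definition translates directly to Python (the `entropy` helper and its interface are illustrative choices, not part of the notes):

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: H(X) = -sum(p * log2(p))."""
    # Terms with p == 0 contribute nothing (p * log2(p) -> 0 as p -> 0).
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin: two outcomes, each with probability 0.5.
print(entropy([0.5, 0.5]))  # 1.0
```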
The entropy of a distribution with finite domain is maximized when all points have equal probability.
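For n equally likely outcomes, that maximum works out to log_2(n):

<MATH> H(X) = - \sum_{x=1}^{n}{ \frac{1}{n} \log_2\left(\frac{1}{n}\right)} = -n \cdot \frac{1}{n} \log_2\left(\frac{1}{n}\right) = \log_2(n) </MATH>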
The higher the entropy, the less predictable the events being measured.
100% predictability = 0 entropy
A fifty-fifty split has an entropy of 1 bit.
Why is picking the attribute with the most information gain beneficial? Information gain measures the decrease in entropy achieved by choosing a classifier/representation; information gain is positive whenever entropy decreases. A decrease in entropy signifies a decrease in unpredictability, which also means an increase in predictability.
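A sketch of that computation, assuming a split that partitions the examples into subsets (the `information_gain` helper and its arguments are illustrative):

```python
import math

def information_gain(parent_labels, subsets):
    """Entropy of the parent minus the size-weighted entropy of the subsets."""
    def label_entropy(labels):
        total = len(labels)
        return -sum((labels.count(c) / total) * math.log2(labels.count(c) / total)
                    for c in set(labels))

    total = len(parent_labels)
    weighted = sum(len(s) / total * label_entropy(s) for s in subsets)
    return label_entropy(parent_labels) - weighted

# Splitting a mixed group into two pure groups removes all entropy: gain is 1 bit.
print(information_gain(["yes", "yes", "no", "no"],
                       [["yes", "yes"], ["no", "no"]]))  # 1.0
```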
A two class problem:

<MATH> \begin{array}{rrl} Entropy & = & - \sum_{x}{ p_x \log_2( p_x)} \\ & = & -(0.5 \log_2(0.5) + 0.5 \log_2(0.5)) \\ & = & -2 \cdot (0.5 \log_2(0.5)) \\ & = & 1 \end{array} </MATH>

A six class problem:

<MATH> \begin{array}{rrl} Entropy & = & - \sum_{x}{ p_x \log_2( p_x)} \\ & = & - 6 \cdot (\frac{1}{6} \log_2 (\frac{1}{6})) \\ & \approx & 2.58 \end{array} </MATH>
A weighted die has lower entropy than a fair die, and is therefore more predictable.
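For instance, take a die weighted to land on six half the time, the remaining faces sharing the other half equally (an illustrative distribution, not from the notes):

<MATH> \begin{array}{rrl} Entropy & = & -\left(0.5 \log_2(0.5) + 5 \cdot 0.1 \log_2(0.1)\right) \\ & \approx & 2.16 \end{array} </MATH>

That is below the fair die's 2.58 bits, matching the claim.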
Titanic training set with a two class problem: survived or died.
Case: 342 survivors out of a total of 891 passengers:

<MATH> Entropy = -\left( \frac{342}{891} \log_2\left(\frac{342}{891}\right) + \frac{549}{891} \log_2\left(\frac{549}{891}\right) \right) \approx 0.96 </MATH>

Case: 50 survivors out of a total of 891 passengers:

<MATH> Entropy = -\left( \frac{50}{891} \log_2\left(\frac{50}{891}\right) + \frac{841}{891} \log_2\left(\frac{841}{891}\right) \right) \approx 0.31 </MATH>

With the lower entropy, the second case is the more predictable data set.
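A quick check of both cases in Python (counts taken from the notes; the `binary_entropy` helper is an illustrative name):

```python
import math

def binary_entropy(positives, total):
    """Entropy of a two-class data set given the count of one class."""
    p = positives / total
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(round(binary_entropy(342, 891), 2))  # 0.96
print(round(binary_entropy(50, 891), 2))   # 0.31
```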