A classifier is a supervised machine learning function where the learned (target) attribute is categorical ("nominal").
After the learning process, it is used to classify new records (data) by assigning them the most likely target attribute (prediction).
Rows are thus classified into buckets. For instance, if a record has feature x, it goes into bucket one; if not, it goes into bucket two.
The target attribute can take one of k class values (class membership).
To summarize the results of the classifier, a Confusion matrix may be used.
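A confusion matrix can be tallied directly from the actual and predicted labels. The following is a minimal sketch in pure Python; the labels and predictions are invented for illustration:

```python
# Tally a 2x2 confusion matrix by hand (illustrative data, not from the source).
actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]
predicted = ["spam", "ham",  "ham", "spam", "spam", "ham"]

labels = ["spam", "ham"]
# matrix[a][p] = count of records whose actual class is a and predicted class is p
matrix = {a: {p: 0 for p in labels} for a in labels}
for a, p in zip(actual, predicted):
    matrix[a][p] += 1

print(matrix)
# Diagonal entries (spam->spam, ham->ham) are correct classifications;
# off-diagonal entries are the errors.
```

The diagonal of the matrix counts correct predictions; everything off the diagonal is a misclassification.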
In classification, there are three kinds of problems:
One-class classification: Data Mining - (Anomaly|outlier) Detection
A two-class (binary) problem has only two possible outcomes.
A multi-class problem has more than two possible outcomes.
Example | Prediction | Illustrates the Model |
---|---|---|
Filter Spam | Yes or No | Binary Classification |
Purchasing Product X | Yes or No | Binary Classification |
Defaulting on a loan | Yes or No | Binary Classification |
Failing in the manufacturing process | Yes or No | Binary Classification |
Producing revenue | Low, Medium, High | Multi-class Classification |
Differing from known cases | Yes or No | One-class Classification |
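The difference between binary and multi-class prediction in the table above can be sketched with two toy rule-based classifiers (the rules and thresholds are invented for illustration):

```python
# Toy classifiers illustrating binary vs multi-class outcomes
# (rules and thresholds are assumptions, not from the source).

def classify_spam(contains_suspicious_link: bool) -> str:
    # Binary classification: exactly two possible outcomes.
    return "Yes" if contains_suspicious_link else "No"

def classify_revenue(amount: float) -> str:
    # Multi-class classification: more than two possible outcomes.
    if amount < 1_000:
        return "Low"
    elif amount < 10_000:
        return "Medium"
    return "High"

print(classify_spam(True))      # Yes
print(classify_revenue(5_000))  # Medium
```

A real classifier learns such rules from data rather than having them hand-coded, but the shape of the output (two classes versus k classes) is the same.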
The classification task is to build a function C that takes as input the feature vector X and predicts its value for the outcome Y, i.e.
<math> C(X) \in \mathcal{C} </math> where:
C(X) gives a value in the set of classes <math>\mathcal{C}</math>.
Often we are more interested in estimating the probabilities (confidence) that X belongs to each category in C.
For example, it is more valuable to have an estimate of the probability that an insurance claim is fraudulent than a hard classification of fraudulent or not.
Imagine one claim with a fraud probability of 0.9 and another with 0.98. Both may be above the threshold for raising the flag that the claim is fraudulent, but if investigating a claim takes hours, you will probably look into the 0.98 case before the 0.9 one. Estimating the probabilities is therefore also key.
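This triage idea can be sketched in a few lines: flag every claim above the threshold, then investigate in order of estimated probability (the claim names and probabilities below are made up for illustration):

```python
# Rank flagged claims by estimated fraud probability
# (claims, probabilities, and threshold are illustrative assumptions).
claims = {"claim_A": 0.90, "claim_B": 0.98, "claim_C": 0.40}
threshold = 0.5

# Flag every claim whose estimated probability exceeds the threshold.
flagged = [c for c, p in claims.items() if p >= threshold]

# Investigate the most probable fraud first.
by_priority = sorted(flagged, key=lambda c: claims[c], reverse=True)
print(by_priority)  # claim_B (0.98) before claim_A (0.90)
```

A classifier that only returned Yes/No could not produce this ordering; the probability estimates are what make prioritization possible.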
Most of the algorithms are based on this data structure (knowledge representation):
MaxEnt and SVM use different mathematical models and feature-weighting schemes than Naive Bayes: features are weighted individually rather than treated as equally informative.
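The contrast can be sketched as follows; all likelihoods, weights, and features below are invented for illustration, not learned from data:

```python
# Contrast sketch (all numbers are invented): a Naive-Bayes-style score
# combines per-feature log-likelihoods with every feature contributing
# on equal footing, while a linear model such as an SVM or MaxEnt
# classifier combines features through learned weights, so informative
# features count for more.

features = {"contains_link": 1.0, "all_caps": 0.0, "word_free": 1.0}

# Naive-Bayes-style score: sum of log-likelihoods for the active features.
log_likelihood_spam = {"contains_link": -0.2, "all_caps": -1.5, "word_free": -0.4}
nb_score = sum(log_likelihood_spam[f] for f, v in features.items() if v > 0)

# Linear-model-style score: weighted sum of features plus a bias term.
weights = {"contains_link": 2.1, "all_caps": 0.3, "word_free": 1.2}
bias = -1.5
linear_score = sum(weights[f] * v for f, v in features.items()) + bias

print(nb_score, linear_score)
```

In the linear model, a highly predictive feature like `contains_link` gets a large weight and dominates the score, which is the "features are more weighted" behavior noted above.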
Example: given demographic data about a set of customers, predict each customer's response to an affinity card program.