About
Regression is a statistical analysis that predicts an outcome based on the scores of:
- one predictor variable: simple regression
- or multiple predictor variables: multiple regression (a short R sketch follows this list)
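As a minimal R sketch (the data frame `d` and its columns are hypothetical, simulated here only for illustration), the distinction shows up directly in the model formula:
```r
# Hypothetical data frame `d` with an outcome y and two predictors x1, x2
set.seed(42)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- 3 + 2 * d$x1 - d$x2 + rnorm(50)

simple   <- lm(y ~ x1, data = d)        # simple regression: one predictor
multiple <- lm(y ~ x1 + x2, data = d)   # multiple regression: several predictors
```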
Regression analysis is a statistical process (a supervised function) for:
- estimating the relationships among variables
- approximating and forecasting continuous values (through a definition of closeness)
“Regression” problems are principally aimed at problems with a continuous (numeric) outcome, but regression methods can also be applied to a nominal outcome.
Regression analysis helps one understand how the typical value of an (outcome|dependent) variable changes when any one of the (predictor|independent) variables is varied while the other (predictor|independent) variables are held fixed (i.e. which of the independent variables are related to the dependent variable).
The term “regression” was coined by Francis Galton in the nineteenth century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean). For Galton, regression had only this biological meaning, but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context.
“Regression” comes historically from this idea of regression towards the mean, a concept discussed in the early 1900s. We have to live with the term because it has become time-honoured, but you can mentally replace it with the word “model”.
Example
- Given demographic and purchasing data about a set of customers, predict customers' age
- Customer lifetime value, house value, process yield rates
Assumptions
Problem
- regression: the learned attribute is a continuous numeric value, but linear regression can also be used to perform classification (see the sketch below). See Machine Learning - Logistic regression (Classification Algorithm)
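For instance, a hedged sketch using base R's glm (the data are simulated, not from any real dataset): a logistic regression predicts a binary class from a continuous predictor.
```r
# Simulated binary outcome: probability of class 1 rises with x
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, size = 1, prob = plogis(-0.5 + 1.5 * x))

# Logistic regression: a generalized linear model with a binomial family
clf <- glm(y ~ x, family = binomial)

# Predicted probabilities, turned into class labels with a 0.5 threshold
pred_class <- as.integer(predict(clf, type = "response") > 0.5)
```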
Algorithm
Many (techniques|methods) for carrying out regression analysis have been developed.
Technique | Parametric | Description
---|---|---
Nearest Neighbors | No |
Linear regression | Yes |
Data Mining - (Global) Polynomial Regression (Degree) | Yes |
Statistics - Standard Least Squares Fit (Gaussian linear model) | |
Ordinary least squares regression | Yes | The earliest form of regression, published by Legendre in 1805 and by Gauss in 1809.
Multiple Regression (GLM) | |
Support Vector Machine (SVM) | |
Logistic regression | |
LeastMedSq | | LeastMedSq gives an accurate regression line even when there are outliers. However, it is computationally very expensive. In practical situations it is common to delete outliers manually and then use LinearRegression.
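As one concrete illustration of a technique from the table above, here is a hand-rolled nearest-neighbors regression sketch (no package assumed; the data are simulated only for illustration):
```r
# Simulated 1-D regression data
set.seed(2)
x_train <- runif(100, 0, 10)
y_train <- sin(x_train) + rnorm(100, sd = 0.2)

# k-nearest-neighbors prediction: average the y values of the k closest training points
knn_predict <- function(x_new, x_train, y_train, k = 5) {
  sapply(x_new, function(x0) {
    nearest <- order(abs(x_train - x0))[1:k]
    mean(y_train[nearest])
  })
}

y_hat <- knn_predict(c(1, 5, 9), x_train, y_train, k = 5)
```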
Model
A model predicts a specific target value for each case from among (possibly) infinitely many values.
The regression model is used to model or predict future behaviour and involves the following variables:
- Y, the outcome (dependent) variable
- X, the (predictor|independent) variables
- the unknown parameters, denoted β, which may be a scalar or a vector
- <math>\epsilon</math>, the prediction error (ie measurement errors and other discrepancies): the irreducible error
<MATH> Y = f(X) + \epsilon </MATH>
- or, from Wikipedia:
<MATH>
\begin{array}{ccc}
Y & \approx & f(X, \beta) \\
E(Y | X) & = & f(X, \beta)
\end{array}
</MATH>
The approximation is usually formalized as <math>E(Y | X) = f(X, \beta)</math>.
The true function above always generates some error; a model that fits the data with no error at all is overfitting.
A model is:
- simple: the model is the regression equation.
- or complex: the model is a set of regression equations
Example of a simple linear regression model: <math>\hat{Y} = B_0 + B_1 X_1</math>
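To make <math>B_0</math> and <math>B_1</math> concrete, here is a minimal sketch with simulated data, using the standard closed-form least-squares estimates (slope = covariance/variance):
```r
# Simulated data from Y = 2 + 0.5*X + error
set.seed(3)
x <- rnorm(200)
y <- 2 + 0.5 * x + rnorm(200, sd = 0.3)

# Closed-form least-squares estimates for the simple model Y_hat = B0 + B1*X
b1 <- cov(x, y) / var(x)      # slope
b0 <- mean(y) - b1 * mean(x)  # intercept

c(b0 = b0, b1 = b1)           # should be close to 2 and 0.5
```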
How to improve the model?
The goal is to produce better models so we can generate more accurate predictions
We can improve a model by:
- adding more predictor variables (but this increases variance and the risk of overfitting)
- selecting features: when we want to predict better, we shrink (regularize) the coefficients or select a subset of features in order to improve the prediction (see the sketch after this list)
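As a hedged illustration of shrinkage and feature selection (this assumes the third-party glmnet package; the design matrix and outcome are simulated): ridge keeps all coefficients but shrinks them, while the lasso can shrink some coefficients exactly to zero, effectively selecting features.
```r
library(glmnet)

# Simulated design matrix X (20 predictors) where only the first two matter
set.seed(4)
X <- matrix(rnorm(100 * 20), nrow = 100)
y <- X[, 1] - 2 * X[, 2] + rnorm(100)

# alpha = 0 gives ridge (shrinkage), alpha = 1 gives lasso (shrinkage + selection)
ridge <- cv.glmnet(X, y, alpha = 0)
lasso <- cv.glmnet(X, y, alpha = 1)

coef(lasso, s = "lambda.min")  # many coefficients shrunk exactly to zero
```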
Inferential statistics
When we are doing regression, we are also engaging in inferential statistics, and we look at statistics such as:
- the p-value (in order to make probability judgements)
in order to know whether the results from this sample will generalize to other samples.
We want to know whether it is possible to make an inference from this sample data to a more general population (see the sketch below).
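For example (a minimal sketch with simulated data), the p-value of each coefficient can be read from the model summary:
```r
# Simulated data: y depends on x1 but not on x2
set.seed(5)
x1 <- rnorm(100); x2 <- rnorm(100)
y  <- 1 + 2 * x1 + rnorm(100)

fit <- lm(y ~ x1 + x2)

# The "Pr(>|t|)" column holds the p-value for each coefficient
summary(fit)$coefficients
```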
Implementation
R
The lm function
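A minimal sketch of the lm workflow (the customer data frame and its columns are hypothetical, echoing the age-prediction example above):
```r
# Hypothetical customer data: predict age from purchasing behaviour
set.seed(6)
customers <- data.frame(
  n_purchases = rpois(200, 10),
  avg_basket  = rnorm(200, 50, 10)
)
customers$age <- 30 + 0.8 * customers$n_purchases +
                 0.1 * customers$avg_basket + rnorm(200, sd = 5)

# Fit a multiple linear regression with lm
fit <- lm(age ~ n_purchases + avg_basket, data = customers)

coef(fit)                                   # estimated B0, B1, B2
predict(fit, newdata = data.frame(n_purchases = 12, avg_basket = 55))
```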