# Statistics - Confidence Interval

The definition of a confidence interval says that under repeated experiments, 95% of the intervals constructed this way will contain the true value of the parameter (mean, …).

If we started the whole experiment over again from scratch 100 times, and each time we built an entirely new confidence interval from that entirely new data, then on average 95 of those 100 intervals would contain the true value.

• A 95% confidence interval is defined as a range of values such that, with 95% probability, the range will contain the true unknown value of the parameter.
• A degree of confidence of 95% means that you have 95% confidence that the true value lies in the confidence interval.
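The repeated-experiment interpretation can be illustrated with a small simulation (a sketch using numpy and scipy; the population mean and standard deviation below are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean = 50            # the "true" population mean (arbitrary for the demo)
n, n_experiments = 30, 1000

covered = 0
for _ in range(n_experiments):
    sample = rng.normal(loc=true_mean, scale=10, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)       # standard error of the mean
    t = stats.t.ppf(0.975, df=n - 1)           # critical t-value for 95%
    if m - t * se <= true_mean <= m + t * se:  # did this interval capture it?
        covered += 1

print(covered / n_experiments)  # close to 0.95
```

The printed coverage fraction hovers around 0.95, which is exactly what the definition promises: not that any single interval is 95% likely to be right, but that 95% of intervals built this way are.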

Reporting an interval acknowledges the fact that we have sampling error.

The logic of confidence intervals is to report an interval rather than a single number. The phrase confidence interval comes from the fact that researchers (or writers) will be, or should be, more confident in the accuracy of what they report when they report an interval estimate rather than a point estimate.

A confidence interval is an interval estimate of a population parameter based on one random sample.

Confidence intervals are an entirely different approach from NHST: the idea is simply to report an interval around the sample statistic, rather than to engage in inferential decision-making per se.

The width of a confidence interval is determined by and calculated from the standard error. It is therefore influenced by the sample size and the variability of the data, as well as by the chosen degree of confidence.
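The effect of sample size on the width can be seen numerically (a sketch; the population used here is arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
widths = []
for n in (10, 100, 1000):
    sample = rng.normal(loc=0, scale=10, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)   # standard error shrinks with n
    t = stats.t.ppf(0.975, df=n - 1)       # t-value also shrinks slightly with n
    widths.append(2 * t * se)              # full width of the 95% interval

print(widths)  # the width shrinks roughly as 1 / sqrt(n)
```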

A confidence interval can be computed around any statistic.

Confidence intervals are a frequentist concept: the interval, and not the true parameter, is considered random. A Bayesian would therefore not necessarily agree with a statement such as “If I perform a linear regression and get a confidence interval from 0.4 to 0.5, then there is a 95% probability that the true parameter is between 0.4 and 0.5”; whether it holds would depend on his or her prior distribution.

## Hypothesis test

Hypothesis testing is a closely related idea: a hypothesis test and a confidence interval are doing equivalent things.

If the hypothesis test:

• rejects the null hypothesis, we conclude that the slope is not 0. Correspondingly, the confidence interval constructed from that data for the parameter will not contain 0.
• fails to reject the null hypothesis, we cannot conclude that the predictor X has an effect: its slope may be 0, and the confidence interval for that parameter will contain 0.

The confidence interval therefore also performs the hypothesis test, but it additionally tells how big the effect is.

It is therefore always good to compute confidence intervals as well as to run hypothesis tests.
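The duality between the slope test and the slope's confidence interval can be checked numerically. A sketch (assuming scipy's `linregress` and a simulated dataset where the true slope is nonzero):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)   # true slope is 2, so H0: slope = 0 is false

res = stats.linregress(x, y)
t_crit = stats.t.ppf(0.975, df=len(x) - 2)
lower = res.slope - t_crit * res.stderr   # 95% CI for the slope
upper = res.slope + t_crit * res.stderr

# The test rejects H0 at the 5% level exactly when the 95% CI excludes 0
rejects = res.pvalue < 0.05
ci_excludes_zero = not (lower <= 0 <= upper)
print(rejects, ci_excludes_zero)
```

Both booleans agree, and the interval additionally reports the plausible size of the slope, which the p-value alone does not.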

## Around

### Sample Mean

The sample mean (M) is the easiest and most obvious statistic to start with when talking about confidence intervals.

$$\begin{array}{rcl} \text{Upper bound} & = & M + t \cdot SE \\ \text{Lower bound} & = & M - t \cdot SE \end{array}$$

where M is the sample mean, t is the critical t-value for the chosen degree of confidence, and SE is the standard error of the mean.

As the sample size increases, the width of the confidence interval typically decreases (see the standard error formula).
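For instance, the bounds above can be computed directly (a sketch with a made-up sample):

```python
import numpy as np
from scipy import stats

data = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0])  # made-up sample
n = len(data)
m = data.mean()                        # sample mean M
se = data.std(ddof=1) / np.sqrt(n)     # standard error of the mean
t = stats.t.ppf(0.975, df=n - 1)       # t-value for 95% confidence

lower, upper = m - t * se, m + t * se  # Lower bound, Upper bound
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")
```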
