Median absolute deviation

Statistical measure of variability

In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample.[1]

For a univariate data set $X_1, X_2, \ldots, X_n$, the MAD is defined as the median of the absolute deviations from the data's median $\tilde{X} = \operatorname{median}(X)$:

$\operatorname{MAD} = \operatorname{median}(|X_i - \tilde{X}|),$

that is, starting with the residuals (deviations) from the data's median, the MAD is the median of their absolute values.

Example

Consider the data (1, 1, 2, 2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are (1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are (0, 0, 1, 1, 2, 4, 7)). So the median absolute deviation for this data is 1.
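The computation is easy to express directly. Below is a minimal Python sketch (the helper name mad is ours, not a library function) that reproduces the worked example above:

    import numpy as np

    def mad(x):
        # Median of the absolute deviations from the data's median.
        x = np.asarray(x, dtype=float)
        return np.median(np.abs(x - np.median(x)))

    data = [1, 1, 2, 2, 4, 6, 9]
    print(mad(data))  # 1.0, matching the worked example above

Recent versions of SciPy also provide scipy.stats.median_abs_deviation, which computes the same quantity (with an optional normalizing scale factor).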

Uses

The median absolute deviation is a measure of statistical dispersion. Moreover, the MAD is a robust statistic, being more resilient to outliers in a data set than the standard deviation. In the standard deviation, the distances from the mean are squared, so large deviations are weighted more heavily, and thus outliers can heavily influence it. In the MAD, the deviations of a small number of outliers are irrelevant.
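To illustrate, contaminate the example data above with a single extreme outlier; a short sketch reusing the mad helper defined earlier:

    import numpy as np

    data = [1, 1, 2, 2, 4, 6, 9]
    contaminated = data + [1000]

    print(np.std(data), np.std(contaminated))  # ~2.8 jumps to ~330
    print(mad(data), mad(contaminated))        # 1.0 moves only to 2.0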

Because the MAD is a more robust estimator of scale than the sample variance or standard deviation, it works better with distributions without a mean or variance, such as the Cauchy distribution.

Relation to standard deviation

The MAD may be used for the median in much the same way one would use the standard deviation for the mean. In order to use the MAD as a consistent estimator for the estimation of the standard deviation $\sigma$, one takes

$\hat{\sigma} = k \cdot \operatorname{MAD},$

where $k$ is a constant scale factor, which depends on the distribution.[2]

For normally distributed data, $k$ is taken to be

$k = 1/\left(\Phi^{-1}(3/4)\right) \approx 1/0.67449 \approx 1.4826,$

i.e., the reciprocal of the quantile function $\Phi^{-1}$ (also known as the inverse of the cumulative distribution function) for the standard normal distribution $Z = (X - \mu)/\sigma$.[3][4]
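Assuming SciPy is available, $k$ can be computed from the normal quantile function (norm.ppf), and the resulting estimator recovers $\sigma$ on normal data; a brief sketch:

    import numpy as np
    from scipy.stats import norm

    k = 1 / norm.ppf(0.75)  # 1 / Phi^{-1}(3/4) ≈ 1.4826

    # Robust scale estimate on synthetic normal data with true sigma = 2.
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.0, scale=2.0, size=100_000)
    sigma_hat = k * np.median(np.abs(sample - np.median(sample)))
    print(sigma_hat)  # close to 2.0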

Derivation

The argument 3/4 is such that $\pm\operatorname{MAD}$ covers 50% (between 1/4 and 3/4) of the standard normal cumulative distribution function, i.e.

$\frac{1}{2} = P(|X - \mu| \leq \operatorname{MAD}) = P\left(\left|\frac{X - \mu}{\sigma}\right| \leq \frac{\operatorname{MAD}}{\sigma}\right) = P\left(|Z| \leq \frac{\operatorname{MAD}}{\sigma}\right).$

Therefore, we must have that

$\Phi\left(\operatorname{MAD}/\sigma\right) - \Phi\left(-\operatorname{MAD}/\sigma\right) = 1/2.$

Noticing that

$\Phi\left(-\operatorname{MAD}/\sigma\right) = 1 - \Phi\left(\operatorname{MAD}/\sigma\right),$

we have that $\operatorname{MAD}/\sigma = \Phi^{-1}(3/4) \approx 0.67449$, from which we obtain the scale factor $k = 1/\Phi^{-1}(3/4) \approx 1.4826$.

Another way of establishing the relationship is to note that the MAD equals the median of the half-normal distribution:

$\operatorname{MAD} = \sigma\sqrt{2}\,\operatorname{erf}^{-1}(1/2) \approx 0.67449\,\sigma.$

This form is used in, e.g., the probable error.
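The two expressions agree numerically; a quick check using SciPy's erfinv:

    import math
    from scipy.special import erfinv

    print(math.sqrt(2) * erfinv(0.5))  # ≈ 0.67449, i.e. Phi^{-1}(3/4)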

In the case of complex values (X+iY), the relation of MAD to the standard deviation is unchanged for normally distributed data.

MAD using geometric median

Analogously to how the median generalizes to the geometric median (gm) in multivariate data, the MAD can be generalized to the MADGM (median of distances to the gm) in n dimensions. This is done by replacing the absolute differences in one dimension with Euclidean distances of the data points to the geometric median in n dimensions.[5] In one dimension this reduces to the univariate MAD, and it generalizes to any number of dimensions. MADGM requires finding the geometric median, which is done by an iterative process (one such iteration is sketched below).
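The source does not prescribe a particular iteration; one common choice is Weiszfeld's algorithm. A minimal NumPy sketch (function names are our own):

    import numpy as np

    def geometric_median(X, tol=1e-7, max_iter=1000):
        # Weiszfeld's iterative algorithm: repeatedly take the
        # inverse-distance-weighted average of the points.
        y = X.mean(axis=0)  # start at the centroid
        for _ in range(max_iter):
            d = np.linalg.norm(X - y, axis=1)
            d = np.maximum(d, tol)  # guard against division by zero at a data point
            w = 1.0 / d
            y_new = (w[:, None] * X).sum(axis=0) / w.sum()
            if np.linalg.norm(y_new - y) < tol:
                return y_new
            y = y_new
        return y

    def madgm(X):
        # Median of Euclidean distances from each point to the geometric median.
        X = np.asarray(X, dtype=float)
        gm = geometric_median(X)
        return np.median(np.linalg.norm(X - gm, axis=1))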

The population MAD

The population MAD is defined analogously to the sample MAD, but is based on the complete distribution rather than on a sample. For a distribution that is symmetric about zero (i.e., with zero median), the population MAD is the 75th percentile of the distribution.

Unlike the variance, which may be infinite or undefined, the population MAD is always a finite number. For example, the standard Cauchy distribution has undefined variance, but its MAD is 1.
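Since the standard Cauchy distribution is symmetric about zero, its population MAD is its 75th percentile, $\tan(\pi/4) = 1$; a one-line check with SciPy:

    from scipy.stats import cauchy

    print(cauchy.ppf(0.75))  # ≈ 1.0, the population MAD of the standard Cauchy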

The earliest known mention of the concept of the MAD occurred in 1816, in a paper by Carl Friedrich Gauss on the determination of the accuracy of numerical observations.[6][7]

Notes

  1. ^ Dodge, Yadolah (2010). The concise encyclopedia of statistics. New York: Springer. ISBN 978-0-387-32833-1.
  2. ^ Rousseeuw, P. J.; Croux, C. (1993). "Alternatives to the median absolute deviation". Journal of the American Statistical Association. 88 (424): 1273–1283. doi:10.1080/01621459.1993.10476408. hdl:2027.42/142454.
  3. ^ Ruppert, D. (2010). Statistics and Data Analysis for Financial Engineering. Springer. p. 118. ISBN 9781441977878. Retrieved 2015-08-27.
  4. ^ Leys, C.; et al. (2013). "Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median" (PDF). Journal of Experimental Social Psychology. 49 (4): 764–766. doi:10.1016/j.jesp.2013.03.013.
  5. ^ Spacek, Libor. "Rstats - Rust Implementation of Statistical Measures, Vector Algebra, Geometric Median, Data Analysis and Machine Learning". crates.io. Retrieved 26 July 2022.
  6. ^ Gauss, Carl Friedrich (1816). "Bestimmung der Genauigkeit der Beobachtungen". Zeitschrift für Astronomie und Verwandte Wissenschaften. 1: 187–197.
  7. ^ Walker, Helen (1931). Studies in the History of the Statistical Method. Baltimore, MD: Williams & Wilkins Co. pp. 24–25.
