Repeated Median Estimator - Maple Help

Statistics

 RepeatedMedianEstimator
 compute the repeated median estimator

 Calling Sequence

 RepeatedMedianEstimator(X, Y, v)

 RepeatedMedianEstimator(XY, v)

Parameters

 X - values of the independent variable

 Y - values of the dependent variable

 XY - values of the independent and dependent variables

 v - algebraic expression in which to express the result

Description

 • The RepeatedMedianEstimator function computes a robust linear estimator from a collection of points in the plane. The estimator was first described by Siegel [2]; it is computed using an algorithm described by Matoušek, Mount, and Netanyahu [3].
 • The repeated median estimator is a robust estimator. This means that it will continue to perform well if some points are replaced by outliers. Least-squares linear regression, the type of regression most commonly used and implemented by LinearFit and NonlinearFit, is very susceptible to outliers.
 • Conceptually, one obtains the repeated median estimator by computing, for each data point $P$, the slopes of the lines connecting $P$ to every other point, and then taking the median of those slopes. Then, one takes the median of these median slopes as $P$ ranges over all data points. This gives the slope $s$ for the repeated median estimator.
 • One then computes, for every point $P$, the intercept of the line with slope $s$ that passes through $P$. The intercept for the repeated median estimator is the median of these intercepts.
 • The repeated median estimator has a breakdown point of $\frac{1}{2}$. This means that as long as fewer than half of the data points in a sample are replaced by arbitrary outliers, the value of the estimator can change only by a bounded amount.
 • In the first calling sequence, the parameter X is a Vector (or a k-by-1 Matrix, or a list) containing the k values of the independent variable: the ith element is the independent value for the ith data point. The parameter Y similarly contains the k values of the dependent variable. Alternatively, these values can be specified in a single k-by-2 Matrix, XY, by using the second calling sequence.
 • The returned value is an expression representing the estimator evaluated at the value v. By supplying a variable name here, you obtain the general expression for the estimator. If you supply a number, you obtain the value of the estimator at that number.
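The conceptual definition above can be sketched in a few lines of Python. This is a hypothetical illustration of the brute-force $O(n^2)$ definition, not Maple's actual implementation (Maple uses the faster randomized algorithm of [3], and its convention for the median of an even-length list may differ):

```python
from statistics import median

def repeated_median(xs, ys):
    """Repeated median line estimator: returns (slope, intercept).

    Brute-force version of the conceptual definition; assumes the
    x-values are pairwise distinct so every slope is defined.
    """
    n = len(xs)
    # For each point P, take the median slope of the lines joining P
    # to every other point; then take the median of those medians.
    slope = median(
        median((ys[j] - ys[i]) / (xs[j] - xs[i])
               for j in range(n) if j != i)
        for i in range(n)
    )
    # The intercept is the median of the intercepts obtained by
    # passing a line of that slope through each point in turn.
    intercept = median(y - slope * x for x, y in zip(xs, ys))
    return slope, intercept

# Points on y = 2x + 1 with one gross outlier; the estimator
# recovers the underlying line.
print(repeated_median([0, 1, 2, 3, 4], [1, 3, 5, 7, 100]))  # (2.0, 1.0)
```

Least squares on the same five points would be pulled far off the line by the single outlier, while the repeated median ignores it entirely.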

Notes

 • The underlying computation is done in floating-point; therefore, all data points must have type realcons and all returned solutions are floating-point, even if the problem is specified with exact values.  For more information about numeric computation in the Statistics package, see the Statistics/Computation help page.

Examples

 > $\mathrm{with}\left(\mathrm{Statistics}\right):$

Suppose we have a set of points in the plane where the X-coordinate is generated uniformly from the interval $-1..1$, and the Y-coordinate is given by $y=2.4x+0.9+\mathrm{noise}$, with the noise coming from the $\mathrm{Cauchy}\left(0,1\right)$ distribution. This typically generates serious outliers.

 > $N≔1000$
 ${N}{≔}{1000}$ (1)
 > $\mathrm{xvalues}≔\mathrm{Sample}\left(\mathrm{Uniform}\left(-1,1\right),N\right)$
 > $\mathrm{yvalues}≔\mathrm{~}\left[\mathrm{+}\right]\left(2.4\mathrm{xvalues}+\mathrm{Sample}\left(\mathrm{Cauchy}\left(0,1\right),N\right),\mathrm{ },0.9\right)$
 > $\mathrm{points}≔\mathrm{PointPlot}\left(\mathrm{yvalues},'\mathrm{xcoords}'=\mathrm{xvalues},'\mathrm{color}'='\mathrm{green}'\right):$$\mathrm{points}$

We shrink the view a little, so that we can see more of what's going on.

 > $\mathrm{points}≔\mathrm{plots}:-\mathrm{display}\left(\mathrm{points},'\mathrm{view}'=\left[-1..1,-3..5\right]\right):$$\mathrm{points}$

Now we would like to find the model $y=2.4x+0.9$ from the data. Using standard (least-squares) linear regression, we get an unsatisfactory fit:

 > $\mathrm{leastsquares}≔\mathrm{Fit}\left(ax+b,\mathrm{xvalues},\mathrm{yvalues},x\right)$
 ${\mathrm{leastsquares}}{≔}{-}{5.35146588080964}{}{x}{-}{6.16980901266835}$ (4)
 > $\mathrm{plots}:-\mathrm{display}\left(\mathrm{points},\mathrm{plot}\left(\mathrm{leastsquares},x=-1..1,'\mathrm{thickness}'=3\right)\right)$

The repeated median estimator, however, deals well with the errors.

 > $\mathrm{repeatedmedian}≔\mathrm{RepeatedMedianEstimator}\left(\mathrm{xvalues},\mathrm{yvalues},x\right)$
 ${\mathrm{repeatedmedian}}{≔}{0.958872442473601}{+}{2.42820627384362}{}{x}$ (5)
 > $\mathrm{plots}:-\mathrm{display}\left(\mathrm{points},\mathrm{plot}\left(\left[\mathrm{leastsquares},\mathrm{repeatedmedian}\right],x=-1..1,'\mathrm{thickness}'=3,'\mathrm{legend}'=\left["least squares","repeated median"\right]\right)\right)$

In the following example, we have one outlier in the data.

 > $\mathrm{xydata}≔\left[\left[0,11\right],\left[1,0\right],\left[2,8\right],\left[3,9\right],\left[4,8\right],\left[5,4\right],\left[6,4\right],\left[7,3\right],\left[8,4\right],\left[9,0\right],\left[10,-1\right]\right]$
 ${\mathrm{xydata}}{≔}\left[\left[{0}{,}{11}\right]{,}\left[{1}{,}{0}\right]{,}\left[{2}{,}{8}\right]{,}\left[{3}{,}{9}\right]{,}\left[{4}{,}{8}\right]{,}\left[{5}{,}{4}\right]{,}\left[{6}{,}{4}\right]{,}\left[{7}{,}{3}\right]{,}\left[{8}{,}{4}\right]{,}\left[{9}{,}{0}\right]{,}\left[{10}{,}{-1}\right]\right]$ (6)
 > $\mathrm{points}≔\mathrm{PointPlot}\left(\mathrm{xydata}\left[..,2\right],'\mathrm{xcoords}'=\mathrm{xydata}\left[..,1\right],'\mathrm{color}'='\mathrm{green}'\right)$

Once again, we compare the least squares and repeated median estimators.

 > $\mathrm{leastsquares}≔\mathrm{Fit}\left(ax+b,\mathrm{xydata},x\right)$
 ${\mathrm{leastsquares}}{≔}{-}{0.799999999999999}{}{x}{+}{8.54545454545454}$ (7)
 > $\mathrm{repeatedmedian}≔\mathrm{RepeatedMedianEstimator}\left(\mathrm{xydata},x\right)$
 ${\mathrm{repeatedmedian}}{≔}{10.}{-}{1.}{}{x}$ (8)
 > $\mathrm{plots}:-\mathrm{display}\left(\mathrm{points},\mathrm{plot}\left(\left[\mathrm{leastsquares},\mathrm{repeatedmedian}\right],x=0..10,'\mathrm{legend}'=\left["least squares","repeated median"\right]\right)\right)$

In this case, the difference is less dramatic than in the first example. It is clear, though, that the outlier has very little influence on the repeated median estimator, and some influence on the least squares fit.
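This second example can be cross-checked by hand with a short Python sketch of the conceptual definition (a hypothetical illustration, not Maple's implementation). Python's statistics.median averages the two middle values of an even-length list, which is why the numbers come out near, but not exactly at, Maple's $10.-1.x$:

```python
from statistics import median

# The eleven data points from xydata above.
xy = [[0, 11], [1, 0], [2, 8], [3, 9], [4, 8], [5, 4],
      [6, 4], [7, 3], [8, 4], [9, 0], [10, -1]]

# Median slope through each point, then the median of those medians.
# (Points sharing an x-coordinate with the base point are skipped.)
slope = median(
    median((y2 - y1) / (x2 - x1) for x2, y2 in xy if x2 != x1)
    for x1, y1 in xy
)
intercept = median(y - slope * x for x, y in xy)
print(slope, intercept)  # roughly -1.08 and 10.5, close to Maple's 10. - 1.*x
```

Despite the differing median convention, the single outlier at $(0, 11)$ barely moves the estimate, in line with the breakdown-point discussion above.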

References

 [1] Stuart, Alan, and Ord, Keith. Kendall's Advanced Theory of Statistics. 6th ed. London: Edward Arnold, 1998. Vol. 1: Distribution Theory.
 [2] Siegel, Andrew F. Robust Regression Using Repeated Medians. Biometrika 69 (1), 1982, pp. 242-244.
 [3] Matoušek, Jiří, Mount, David, and Netanyahu, Nathan. Efficient Randomized Algorithms for the Repeated Median Line Estimator. Algorithmica 20 (2), 1998, pp. 136-150.

Compatibility

 • The Statistics[RepeatedMedianEstimator] command was introduced in Maple 2015.