Grubbs' test for outliers.
function GrubbsTest(const Data: TVec; out hRes: THypothesisResult; out Signif: double; var ConfInt: TTwoElmReal; hType: THypothesisType = htTwoTailed; Alpha: double = 0.05): double;
Parameters | Description
Data | The dataset to be tested for outliers.
hRes | Returns the result of the null hypothesis test (the default assumption is that there are no outliers).
Signif | (Significance level.) Returns the probability of observing the given result by chance, assuming the null hypothesis is true.
ConfInt | Returns the confidence interval used to determine the outliers.
hType | Defines the type of the null hypothesis (one- or two-tailed; the default is two-tailed).
Alpha | Defines the desired significance level. If the significance probability (Signif) is below the desired significance (Alpha), the null hypothesis is rejected.
Returns Grubbs' (G) statistic.
Performs Grubbs' test for outliers. The test is used to detect outliers in a univariate dataset and is based on the assumption of normality. That is, you should first verify that your data can be reasonably approximated by a normal distribution before applying Grubbs' test.
More about the test can be found here.
Grubbs' test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should NOT be used for sample sizes of six or less since it frequently tags most of the points as outliers. Grubbs' test is also known as the maximum normed residual test.
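The maximum normed residual itself is straightforward to compute. The sketch below (in Python rather than the library's Pascal, with names of my own choosing, purely to illustrate the arithmetic) evaluates the two-tailed statistic G = max|Yi − Ȳ| / s over the sample. Note that deciding whether G is significant additionally requires a critical value derived from the t-distribution and the chosen Alpha, which this sketch does not reproduce.

```python
import statistics

def grubbs_statistic(data):
    """Two-sided Grubbs' statistic: the largest absolute deviation
    from the sample mean, measured in sample standard deviations."""
    mean = statistics.mean(data)
    s = statistics.stdev(data)  # sample (n-1) standard deviation
    return max(abs(x - mean) for x in data) / s

# A small sample with one suspicious point (hypothetical data):
data = [8.0, 8.1, 7.9, 8.2, 8.0, 7.8, 12.5]
g = grubbs_statistic(data)  # G is attained at the point 12.5
```

The point attaining the maximum deviation (here 12.5) is the candidate outlier; the library's routine compares G against the critical value implied by Alpha and hType to accept or reject the null hypothesis.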
Grubbs' test is defined for the hypotheses:
H0: There are no outliers in the data set.
Ha: There is exactly one outlier in the data set.
Copyright (c) 1999-2025 by Dew Research. All rights reserved.