guttman {psych} R Documentation

## Alternative estimates of test reliability

### Description

Eight alternative estimates of test reliability include the six discussed by Guttman (1945), four discussed by ten Berge and Zegers (1978) (μ_0 ... μ_3), as well as β (the worst split half, Revelle, 1979), the glb (greatest lower bound) discussed by Bentler and Woodward (1980), and omega_h and omega_t (McDonald, 1999; Zinbarg et al., 2005).

### Usage

```
guttman(r, key = NULL, digits = 2)
tenberge(r, digits = 2)
glb(r, key = NULL, digits = 2)
```

### Arguments

- `r`: A correlation matrix or raw data matrix.
- `key`: A vector of -1, 0, and 1 entries to select or reverse items.
- `digits`: Number of digits of accuracy in the output.

### Details

Surprisingly, 104 years after Spearman (1904) introduced the concept of reliability to psychologists, there are still multiple approaches for measuring it. Although very popular, Cronbach's α (1951) underestimates the reliability of a test and overestimates the first factor saturation. The guttman function includes the six estimates discussed by Guttman (1945), four of ten Berge and Zegers (1978), as well as Revelle's β (1979) using `ICLUST`. The companion function, `omega`, calculates omega hierarchical (omega_h) and omega total (omega_t).

The first lower bound, λ_1, removes the item variances (the trace of the covariance matrix) from the total test variance:

λ_1 = 1 - \frac{tr(V_x)}{V_x}

The second bound, λ_2, replaces the diagonal with a function of the square root of the sums of squares of the off diagonal elements. Let C_2 = \vec{1}(\vec{V} - diag(\vec{V}))^2\vec{1}', then

λ_2 = λ_1 + \frac{\sqrt{\frac{n}{n-1}C_2}}{V_x}

Effectively, this is replacing the diagonal with n * the square root of the average squared off diagonal element.
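
As a concrete check on these definitions, both bounds can be computed in a few lines of R. The correlation matrix below is purely hypothetical, chosen only for illustration:

```r
# Illustrative sketch: Guttman's lambda_1 and lambda_2 computed directly
# from the formulas above, for a hypothetical 3 x 3 correlation matrix.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
n  <- ncol(r)
Vx <- sum(r)                          # total test variance (sum of all elements)
lambda1 <- 1 - sum(diag(r)) / Vx      # 1 - tr(Vx)/Vx
C2 <- sum((r - diag(diag(r)))^2)      # sum of squared off-diagonal elements
lambda2 <- lambda1 + sqrt(n / (n - 1) * C2) / Vx
c(lambda1 = lambda1, lambda2 = lambda2)
```

Because the correction term is non-negative, λ_2 is always at least as large as λ_1.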

Guttman's 3rd lower bound, λ_3, also modifies λ_1 and estimates the true variance of each item as the average covariance between items and is, of course, the same as Cronbach's α.

λ_3 = \frac{n}{n-1}\Bigl(1 - \frac{tr(V_x)}{V_x}\Bigr) = \frac{n}{n-1}\frac{V_x - tr(V_x)}{V_x} = α

This is just replacing the diagonal elements with the average off diagonal elements. λ_2 ≥ λ_3, with λ_2 > λ_3 if the covariances are not identical.
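
A minimal R sketch of λ_3, again with a hypothetical correlation matrix; the value it returns is exactly Cronbach's α for these items:

```r
# Illustrative sketch: lambda_3 (Cronbach's alpha) for a hypothetical
# 3 x 3 correlation matrix; in effect it replaces the diagonal with the
# average off-diagonal element.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
n  <- ncol(r)
Vx <- sum(r)
lambda3 <- (n / (n - 1)) * (1 - sum(diag(r)) / Vx)   # = alpha
lambda3
```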

λ_3 and λ_2 are both corrections to λ_1, and this correction may be generalized as an infinite set of successive improvements (ten Berge and Zegers, 1978).

μ_r = \frac{1}{V_x}\Bigl(p_0 + \bigl(p_1 + (p_2 + \dots (p_{r-1} + (p_r)^{1/2})^{1/2} \dots)^{1/2}\bigr)^{1/2}\Bigr), r = 0, 1, 2, \dots

where

p_h = \sum_{i \ne j} σ_{ij}^{2^h}, h = 0, 1, 2, \dots, r-1

and

p_h = \frac{n}{n-1}\sum_{i \ne j} σ_{ij}^{2^h}, h = r

These successive bounds are due to ten Berge and Zegers (1978). Clearly μ_0 = λ_3 = α and μ_1 = λ_2. μ_r ≥ μ_{r-1} ≥ \dots ≥ μ_1 ≥ μ_0, although the series does not improve much after the first two steps.
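
The first two steps of the series can be sketched in R with a hypothetical correlation matrix; the code simply verifies that μ_0 reproduces α and μ_1 reproduces λ_2:

```r
# Illustrative sketch: the first two steps of the ten Berge and Zegers
# series.  mu_0 should equal alpha (lambda_3) and mu_1 should equal
# lambda_2.  The matrix values are hypothetical.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
n   <- ncol(r)
Vx  <- sum(r)
off <- r - diag(diag(r))              # off-diagonal elements only
# r = 0: the n/(n-1) correction applies to the single term p_0
mu0 <- (n / (n - 1)) * sum(off) / Vx
# r = 1: p_0 is uncorrected; the correction applies to the final term p_1
p0  <- sum(off)
p1  <- (n / (n - 1)) * sum(off^2)
mu1 <- (p0 + sqrt(p1)) / Vx
c(mu0 = mu0, mu1 = mu1)
```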

Guttman's fourth lower bound, λ_4, was originally proposed as any split half reliability but has been interpreted as the greatest split half reliability. If vec{X} is split into two parts, vec{X}_a and vec{X}_b, with correlation r_{ab} then

λ_4 = \frac{4r_{ab}}{V_a + V_b + 2r_{ab}V_aV_b}

which is just the normal split half reliability, but in this case, of the most similar splits.
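
For a hypothetical 3-item example, λ_4 for one arbitrary split can be written in terms of the covariance between the two halves, C_ab, an equivalent form since the total variance is V_X = V_a + V_b + 2C_ab:

```r
# Illustrative sketch: lambda_4 for one particular split of a hypothetical
# 3-item test (item 1 vs. items 2 and 3), using the covariance between
# the two halves.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
a <- 1; b <- c(2, 3)                  # an arbitrary split into two halves
Va  <- sum(r[a, a]); Vb <- sum(r[b, b])
Cab <- sum(r[a, b])                   # covariance between half a and half b
lambda4 <- 4 * Cab / (Va + Vb + 2 * Cab)
lambda4
```

Trying other splits of the same items gives different (here, smaller) values; the greatest of them is the split half bound discussed below.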

λ_5, Guttman's fifth lower bound, replaces the diagonal values with twice the square root of the maximum (across items) of the sums of squared interitem covariances:

λ_5 = λ_1 + \frac{2\sqrt{\bar{C}_2}}{V_X}.

Although superior to λ_1, λ_5 underestimates the correction to the diagonal. A better estimate would be analogous to the correction used in λ_3:

λ_{5+} = λ_1 + \frac{n}{n-1}\frac{2\sqrt{\bar{C}_2}}{V_X}.
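
Both λ_5 and the corrected λ_{5+} are straightforward to compute directly; the matrix below is hypothetical:

```r
# Illustrative sketch: lambda_5 and the lambda_5+ variant.  C2.bar is the
# maximum, across items, of the sum of squared covariances with the other
# items.  Matrix values are hypothetical.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
n  <- ncol(r)
Vx <- sum(r)
lambda1 <- 1 - sum(diag(r)) / Vx
off <- r - diag(diag(r))
C2.bar <- max(colSums(off^2))         # max over items of squared covariances
lambda5      <- lambda1 + 2 * sqrt(C2.bar) / Vx
lambda5.plus <- lambda1 + (n / (n - 1)) * 2 * sqrt(C2.bar) / Vx
c(lambda5 = lambda5, lambda5.plus = lambda5.plus)
```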

Guttman's final bound considers the amount of variance in each item that can be accounted for by the linear regression of all of the other items (the squared multiple correlation, or smc), or more precisely, the variance of the errors, e_j^2:

λ_6 = 1 - \frac{\sum e_j^2}{V_x} = 1 - \frac{\sum(1 - r_{smc}^2)}{V_x}
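
A sketch of λ_6 in R, using the standard identity that the smc of item j is one minus the reciprocal of the j-th diagonal element of the inverse correlation matrix (matrix values hypothetical):

```r
# Illustrative sketch: lambda_6 from the squared multiple correlation of
# each item with the remaining items, obtained from the inverse of a
# hypothetical correlation matrix.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
Vx  <- sum(r)
smc <- 1 - 1 / diag(solve(r))         # squared multiple correlations
lambda6 <- 1 - sum(1 - smc) / Vx
lambda6
```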

Guttman's λ_4 is the greatest split half reliability. This is found by combining the output from three different approaches, and seems to work for all test cases yet tried. λ_4 is reported as the maximum of these three algorithms.

The algorithms are

a) Do an ICLUST of the reversed correlation matrix. ICLUST normally forms the most distinct clusters. By reversing the correlations, it will tend to find the most related cluster. Truly a weird approach but tends to work.

b) Alternatively, a kmeans clustering of the correlations (with the diagonal replaced with 0 to make pseudo distances) can produce 2 similar clusters.

c) Items are assigned to two clusters based upon their order on the first principal factor (highest to cluster 1, next 2 to cluster 2, etc.).
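
For very small item sets the greatest split half can instead be found exactly by enumerating every split, which is what motivates the three heuristics above: the number of splits grows as 2^(n-1). A brute-force sketch with a hypothetical matrix:

```r
# Illustrative sketch: exhaustive search for the greatest split half of a
# hypothetical 3-item test.  Each integer m encodes one nontrivial split
# via its binary digits.  Feasible only for small n.
r <- matrix(c(1.0, 0.5, 0.4,
              0.5, 1.0, 0.3,
              0.4, 0.3, 1.0), nrow = 3)
n    <- ncol(r)
Vx   <- sum(r)
best <- -Inf
for (m in 1:(2^(n - 1) - 1)) {
  a   <- which(bitwAnd(m, 2^(0:(n - 1))) > 0)
  b   <- setdiff(1:n, a)
  Cab <- sum(r[a, b, drop = FALSE])   # covariance between the two halves
  best <- max(best, 4 * Cab / Vx)     # lambda_4 for this split
}
best                                  # the greatest split-half reliability
```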

### Value

- `beta`: The normal beta estimate of cluster similarity from ICLUST. This is an estimate of the general factor saturation.
- `tenberge$mu1`: tenBerge mu 1 is functionally alpha.
- `tenberge$mu2`: One of the sequence of estimates mu1 ... mu3.
- `beta.factor`: For experimental purposes, what is the split half based upon the two factor solution?
- `glb.IC`: Greatest split half based upon ICLUST of reversed correlations.
- `glb.Km`: Greatest split half based upon a kmeans clustering.
- `glb.Fa`: Greatest split half based upon the items assigned by factor analysis.
- `glb.max`: Max of the above estimates.
- `keys`: Scoring keys from each of the alternative methods of forming best splits.

### Author(s)

William Revelle

### References

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334.

Guttman, L. (1945). A basis for analyzing test-retest reliability. Psychometrika, 10 (4), 255-282.

Revelle, W. (1979). Hierarchical cluster-analysis and the internal structure of tests. Multivariate Behavioral Research, 14 (1), 57-74.

Revelle, W. and Zinbarg, R. E. (2009). Coefficients alpha, beta, omega and the glb: comments on Sijtsma. Psychometrika, 74 (1), 145-154.

Ten Berge, J. M. F., & Zegers, F. E. (1978). A series of lower bounds to the reliability of a test. Psychometrika, 43 (4), 575-579.

Zinbarg, R. E., Revelle, W., Yovel, I., & Li, W. (2005). Cronbach's α, Revelle's β, and McDonald's ω_h: Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70 (1), 123-133.

### See Also

`alpha`, `omega`, `ICLUST`
### Examples

```
data(attitude)
```