\documentclass[doc]{apa}
%\documentclass[10pt]{article}{}
\usepackage{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\geometry{landscape} % Activate for rotated page geometry
%\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage{siunitx}
%\usepackage{endfloat}
\usepackage{apacite}
%\usepackage[authoryear,round]{natbib}
%\bibliographystyle{apa} %this one plus author year seems to work?
%\makeindex % used for the subject index
%\usepackage{ulem}
\bibliographystyle{Documents/Active/book/kapalike}
\usepackage[authoryear,round]{natbib}
%\usepackage[authoryear,round,longnamesfirst]{natbib} %this one is more appropriate
\usepackage{tikz}
\usetikzlibrary{arrows,snakes,backgrounds,calc}
%\usepackage{hyperref}
%\let\proglang=\textsf
%\citeindextrue % works with natbib but not apacite
%\usepackage[usenames]{color}
\usepackage[colorlinks=true,citecolor=blue]{hyperref} %this makes reference links hyperlinks in pdf!
\usepackage{setspace}
%\usepackage{rotating} %allows rotating tables in apa style
%
\newcommand{\wrc}[1]{\marginpar{\textcolor{blue}{#1}}} %bill's comments
\newcommand{\wra}[1]{\textcolor{blue}{#1}} %bill's additions
\newcommand{\lec}[1]{\marginpar{\textcolor{red}{#1}}} %Lorien's comments
\newcommand{\lea}[1]{\textcolor{red}{#1}} %Lorien's additions
\newcommand{\ahc}[1]{\marginpar{\textcolor{green}{#1}}} %andrew's comments
\newcommand{\aha}[1]{\textcolor{green}{#1}} %andrew's additions
%
% To count the words in the text, convert to doc mode, select what is to be counted and then issue the command
% pbpaste | wc -w in X11 window
%
%
% To convert to Word/HTML, use the htlatex function in MacTeX to convert to HTML and then copy into Word
%
\usepackage[formats]{listings}
%\lstdefineformat{R}{~=\( \sim \)}
%\lstset{basicstyle=\ttfamily,format=R}
\lstset{language=R}
\renewcommand{\vec}[1]{\mathbf{#1}}
\usepackage{fancyvrb}
\fvset{fontfamily=courier}
%\DefineVerbatimEnvironment{Sinput}{Verbatim}
%{fontseries=b, fontsize=\scriptsize, xleftmargin=0.6cm}
\DefineVerbatimEnvironment{Routput}{Verbatim}
{fontseries=b,fontsize=\scriptsize, xleftmargin=0.5cm}
\DefineVerbatimEnvironment{Toutput}{Verbatim}
{fontseries=b,fontsize=\tiny, xleftmargin=0.5cm}
\DefineVerbatimEnvironment{Binput}{Verbatim}
{fontseries=b, fontsize=\scriptsize,frame=single, label=\fbox{lavaan model syntax}, framesep=2mm}
%\DefineShortVerb{\!} %%% generates error!
\DefineVerbatimEnvironment{Rinput}{Verbatim}
{fontseries=b, fontsize=\scriptsize, frame=single, label=\fbox{R code}, framesep=3mm}
%\DefineVerbatimEnvironment{Rinput}{listings}
%{fontseries=b, fontsize=\scriptsize, frame=single, label=\fbox{R code}, framesep=3mm}
\DefineVerbatimEnvironment{Link}{Verbatim}
{fontseries=b, fontsize=\small, formatcom=\color{darkgreen}, xleftmargin=1.0cm}
%\newcommand{\pkg}[1]{{\normalfont\fontseries{b}\selectfont #1}}
\let\proglang=\textsf
\newcommand{\R}{\proglang{R}}
%\newcommand{\pkg}[1]{{\normalfont\fontseries{b}\selectfont #1}}
\newcommand{\Rfunction}[1]{{\texttt{#1}}}
\newcommand{\fun}[1]{{\texttt{#1}\index{#1}\index{R function!#1}}}
\newcommand{\pfun}[1]{{\texttt{#1}\index{#1}\index{R function!#1}\index{R function!psych package!#1}}}\newcommand{\Rc}[1]{{\texttt{#1}}} %R command same as Robject
\newcommand{\Robject}[1]{{\texttt{#1}}}
\newcommand{\Rpkg}[1]{{\textit{#1}\index{#1}\index{R package!#1}}} %different from pkg - which is better?
\newcommand{\iemph}[1]{{\emph{#1}\index{#1}}}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
\title{Statistical analyses and computer programming in personality\\A chapter for the\\Cambridge University Press Handbook of Personality Psychology }
\author{William Revelle, Lorien Elleman and Andrew Hall}
%the following works only with apaclass
\affiliation{Department of Psychology, Northwestern University }
\acknowledgements{\\ contact: William Revelle revelle@northwestern.edu \\
Draft version of \today. We greatly appreciate the thoughtful comments from David Condon. This is the final version as submitted to CUP.}
\shorttitle{Statistical Analysis}
\rightheader{Statistical Analysis}
\leftheader{PMC Lab}
\abstract{The use of open source software for statistical analysis provides personality researchers with new tools for exploring large scale data sets and allows them to develop and test new psychological theories. We review the development of these techniques and consider some of the major data analytic strategies. We provide example code for doing these analyses in \R{}.}
\begin{document}
\maketitle
\tableofcontents
\newpage
\section{Prologue: A brief history of Open Source Statistical Software}
From the vantage point of the early 21st century, it is hard to imagine that many of the statistical techniques we think of as modern were developed in the late 19th and early 20th centuries. What we now call regression was the degree of ``reversion to mediocrity'' as introduced by Francis \cite{galton:86}. A refinement of regression was the measure of ``co-relations'' \citep{galton:88}, which was specified as the relationship of deviations expressed in units of the probable error. Galton's insight of standardizing the units became known as `Galton's coefficient' which, when elaborated by \cite{pearson:95,pearson:96,pearson:20}, became what we now know as the Pearson Product Moment Correlation Coefficient. Because the Pearson derivations were seen as too complicated for psychologists, Charles \cite{spearman:rho} explained the developments of Pearson and his associates to psychologists in a paper where ``the meaning and working of the various formulae have been explained sufficiently, it is hoped, to render them readily usable even by those whose knowledge of mathematics is elementary'' \citep[p.~73]{spearman:rho}. Later in that same paper, he developed reliability theory and the correction for attenuation. In his second amazing publication that year he developed the basic principles of factor analysis and laid out his theory of general intelligence \citep{spearman:04}.
Fundamental theorems of factor analysis \citep{thurstone:33,Eckart} and estimates of reliability \citep{brown:10,spearman:10,kuder:37,guttman:45} came soon after and were all developed when computation was done by ``computers'' who were humans operating desk calculators. If not difficult, computation was tedious in that it required repeatedly finding sums of squares and sums of cross products. When examining the correlation structure of multiple items, the number of correlations went up with the square of the number of items, and thus so did the computational load. Test theory as developed in the 1930s and 1940s led to a number of convenient shortcuts by which the sums of test and item variances could be used to estimate what the correlations of composites of items would be if certain assumptions were met. Thus, the coefficients of \cite{cronbach:51,guttman:45,kuder:37} ($\alpha$, $\lambda_3$ and KR20) were meant to be estimates based upon the structure of covariances without actually finding the covariances.
Another shortcut was to dichotomize continuous data in order to find correlations. For example, in an ambitious factor analysis of individual differences in behavior among psychiatric patients, \cite{eysenck:44} made use of this shortcut by dichotomizing continuous variables and then finding the \cite{yule:12} coefficient. Unfortunately, such a procedure produces a non-positive definite matrix, which makes reanalysis somewhat complicated.
With the need to calculate flight paths for large artillery shells and rockets, the ideas developed by Babbage for his ``Analytical Engine'' in 1838 \citep{bromley:82} and the algorithms for programming it \citep{lovelace:42} were converted from punched cards and automatic looms into the vacuum tubes and circuit boards of von Neumann \citep{isaacson}. The age of computation had arrived. It was now possible to properly analyze covariance structures.
For personality psychologists, these were exciting times, for computers allowed the calculation of larger correlation matrices and the use of factor analysis with more elegant algorithms than the centroid approach. Indeed, Raymond Cattell moved from Harvard to the University of Illinois because he was promised access to the new `ILLIAC' computer (and would not have to teach undergraduates).
\subsection{Mainframe computers and proprietary software}
At first, software for these new devices was tailor-made for the particular machine, and there was a plethora of programming languages and operating systems. In the late 1950s the programming language FORTRAN (later renamed Fortran when computers learned about lower case) was developed at IBM for the numeric computation necessary for scientific computing and was then translated for other operating systems. While some programs would run on the IBM 709 and 7090s, others would only work on the supercomputers of the time, the Control Data Corporation's 1604 and 6400. An early package of statistical programs, developed for the IBM 7090 in 1961 at UCLA, was the BioMedical Package (BMDP). At first BMDP was distributed for free to other universities, but BMDP \citep{bmdp} eventually became a commercial package, which has since disappeared. Two other statistical systems, SAS\textsuperscript{\textregistered} and SPSS, originally developed at other universities (North Carolina State and Stanford, respectively) and shared with colleagues, followed a similar path.
These three major software systems for doing statistics came from three somewhat different traditions. BMDP was developed for biomedical research, SAS\textsuperscript{\textregistered} for agricultural research, and SPSS for statistics in the social sciences \citep{SPSS}. Although all three systems were originally developed at universities and were freely distributed to colleagues, all three soon became incorporated as for-profit corporations. All were developed for mainframe computers where instructions were originally given in stacks of Hollerith cards (known to many as IBM cards), containing 80 characters per card. All of the programs made use of the FORTRAN programming language and still, many years later, have some of their mainframe genealogy embedded in their systems. Older researchers still shudder at the memory of waiting 24 hours after turning in a box of cards only to discover that one typo had negated the entire exercise.
\subsection{S and R: interactive statistics}
In contrast to the statistical analyses done on mainframes, S and subsequently \R{} were developed for interactive statistics. The S computing `environment' was developed for Unix in the 1970s at Bell Labs by John Chambers and his associates \citep{S}. It was meant to take advantage of interactive computing, where users could work with their data to better display and understand it. After several iterations it became the de facto statistical package for those using the Unix operating system. In 1992, two statisticians at the University of Auckland in New Zealand, Ross Ihaka and Robert Gentleman, started adapting S to run on their Mac computers. Their new language incorporated ideas from the list-oriented language Scheme and emphasized object-oriented programming. Most importantly, they shared the design specifications with other interested developers around the world and intentionally did not copyright the code. Rather, they, with the help of John Chambers and the rest of the R Development Core Team, deliberately licensed \R{} under the GNU General Public License of the Free Software Foundation, which allows users to copy, use, and modify the code as long as the product remains under the GPL \citep{R}.
Perhaps the real power of \R{} is that, because it is open source, it is extensible: anyone can contribute packages to the overall system. That, together with the power of the GPL and the open source software movement, has had an amazing effect. Beyond the original functions in \R{} and the ones written by the R Core Team, more than 12,600 packages have been contributed to CRAN (the Comprehensive R Archive Network), and (as of this writing) more than 34,000 packages are available on GitHub. \R{} has become the lingua franca of statistics, and many new developments in psychological statistics are released as \R{} functions or packages. When writing methodology chapters or articles, the associated \R{} code to do the operations may also be given (as we do in this chapter).
With the growing recognition of the importance of replicable research, the publication of the \R{} scripts and functions to do the analysis, as well as the release of \R{} readable data sets is an essential step in allowing others to understand and repeat what we have done. Because the source code of all of the \R{} packages is available, users can check the accuracy of the code and report bugs to the developers of the particular package. This is the ultimate peer review system, in that users use, review, and add to the entire code.
Given its open source nature and growing importance in scientific computing, much of the rest of this chapter will be devoted to discussing how particular analyses can be done in \R{}. This is not to deny that commercial packages exist, but to encourage the readers of this handbook to adopt modern statistics. The actual code used for the tables and figures is included in the Appendix.
Finally, little appreciated by many users of \R{} is that it is not just a statistical environment, it is also a very high level programming language. Although some of the packages in \R{} are written in Fortran or C++, many packages are written in \R{} itself. \R{} allows operations at the matrix level and supports object-oriented programming. Each function operates on `objects' and returns another `object'; that is, functions can be chained together to add value to previous operations. This allows users to include standard functions in their own functions, with the output available for still more functions. Actual programming in \R{} is beyond the scope of this chapter, but is worth learning for the serious quantitative researcher. Even without developing packages, the ability to write more and more complicated scripts is a real benefit.
\subsection{Getting and using R}
\R{} may be found at \url{https://www.r-project.org} and the current release is distributed through the Comprehensive R Archive Network \url{https://cran.r-project.org}. Popular interfaces to \R{} include RStudio (\url{https://www.rstudio.com}), which is particularly useful for PCs (the Mac version of \R{} comes with its own quite adequate interface). Once \R{} is downloaded and installed, it is useful to install some of the powerful packages that have been added to it. We will make use of several of these packages, particularly the \Rpkg{psych} package, which has been specifically developed for personality and psychological research \citep{psych}. See the appendix for detailed instructions.
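For example, once \R{} itself is installed, installing and loading the \Rpkg{psych} package takes just two commands (the first needs to be done only once, the second at the start of each session):
\begin{Rinput}
install.packages("psych")  #do this once to download the package from CRAN
library(psych)             #do this each session to make the functions available
\end{Rinput}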
\section{Data, Models, and Residuals}
The basic equation in statistics is
\begin{equation}
Data = Model + Residual \iff Residual = Data - Model.
\label{eq:data}
\end{equation}
That is, the study of statistics is the process of modeling our data. Our models are approximations and simplifications of the data \citep{rodgers:10}. Our challenge as researchers is to find models that are good approximations of the data but that are not overly complicated. There is a tradeoff between the two goals of providing simple descriptions of our data and providing accurate descriptions of the data. Consider the model that the sun rises in the East. This is a good model on average and as a first approximation, but is actually correct only twice a year (the equinoxes). A more precise model will consider seasonal variation and recognize that in the northern hemisphere, the sun rises progressively further north of east from the spring equinox until the summer solstice and then becomes more easterly until the fall equinox. An even more precise model will consider the latitude of the observer.
If we think of degrees of freedom in models as money, we want to be frugal but not stingy. Typically we evaluate the quality of our models in terms of some measure of \emph{goodness of fit}. Conceptually, fit is some function of the size of the residuals as contrasted to the data. Because almost all models will produce mean residuals of zero, we typically invoke a cost function such as ordinary least squares to try to find a model that minimizes our squared residuals. As an example, the algebraic mean is that model of the data that minimizes the squared deviations around it (the variance).
The following pages will consider a number of statistical models, how to estimate them, and how to evaluate their fit. % But none of this is more complicated than taking Equation~\ref{eq:data} seriously.
All of what follows can be derived from a serious consideration of Equation~\ref{eq:data}.
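As a small illustration of Equation~\ref{eq:data} in base \R{} (with made-up data), the mean serves as a one-parameter model whose residuals sum to zero, and whose squared residuals are smaller than those of any other constant, including the median:
\begin{Rinput}
Data <- c(1, 2, 3, 4, 100)      #a small made-up data set with one outlier
Model <- mean(Data)             #the mean as a one parameter model
Residual <- Data - Model        #Residual = Data - Model
sum(Residual)                   #residuals from the mean sum to 0
sum(Residual^2)                 #the quantity minimized by the mean
sum((Data - median(Data))^2)    #the median gives a larger sum of squares
\end{Rinput}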
In what follows we discuss two types of variables and three kinds of relationships. In a distinction reminiscent of the prisoners in the cave discussed by Plato in \emph{The Republic} \citep{plato}, we consider two classes of variables: those which we can observe, and those which we cannot observe but which are the latent causes of the observed variables. Our observations are of the movement of the shadows on the cave's wall; we need to infer the latent causes of these shadows. Many of our tools of data analysis (e.g., factor analysis, reliability theory, structural equation modeling, and item response theory) are just methods for estimating latent variables and their inter-relationships from the pattern of relationships between observed variables. Traditionally we make this distinction by using Greek letters for unobserved (latent) population values and Roman letters for observed values. When we portray patterns of relationships graphically (e.g., Figure~\ref{fig:overview}) we show observed variables as boxes and latent variables as circles. As may be seen in Figure~\ref{fig:overview}, there are three kinds of relationships between our variables: relations between observed variables, relations between latent and observed variables, and relations between latent variables. Theories are organizations of latent variables that account for the relationships between observed variables.
\begin{figure}[htbp]
\begin{center}
\caption{The basic set of statistical relationships may be seen as relations among and between observed variables (boxes) and latent variables (circles). }
\begin{tikzpicture}
\tikzstyle{trait}=[circle, draw, minimum size=1cm]
\tikzstyle{int}=[circle, draw, minimum size=.1cm]
\tikzstyle{obs}=[rectangle, draw, minimum size=.6cm]
\path[use as bounding box] (0,0) rectangle (10,10);
%Observed X
\draw(1.5,9.5)node {X};
\node(x1) at (1.5,8.5) [obs]{$X1$};
\node(x2) at (1.5,7.75) [obs] {$X2$};
\node(x3) at (1.5,7) [obs]{$X3$};
\node(x4) at (1.5,6.25) [obs]{$X4$};
\node(x5) at (1.5,5.5) [obs] {$X5$};
\node(x6) at (1.5,4.75) [obs] {$X6$};
\node(x7) at (1.5,4) [obs]{$X7$};
\node(x8) at (1.5,3.25) [obs]{$X8$};
\node(x9) at (1.5,2.5) [obs] {$X9$};
%error X
\draw(0,9.5)node {Error };
\node(d1) at (0,8.5) [int]{$\delta_{1}$};
\node(d2) at (0,7.75) [int] {$\delta_{2}$};
\node(d3) at (0,7) [int]{$\delta_{3}$};
\node(d4) at (0,6.25) [int]{$\delta_{4}$};
\node(d5) at (0,5.5) [int] {$\delta_{5}$};
\node(d6) at (0,4.75) [int] {$\delta_{6}$};
\node(d7) at (0,4) [int]{$\delta_{7}$};
\node(d8) at (0,3.25) [int]{$\delta_{8}$};
\node(d9) at (0,2.5) [int] {$\delta_{9}$};
%latent X
\node(c1) at( 4,7.75) [trait]{$\chi_{1}$};
\node(c2) at (4,5.5) [trait]{$\chi_{2}$};
\node(c3) at (4,3.25) [trait] {$\chi_{3}$};
%latent Y
\node(t1) at( 7,7.375) [trait]{$\eta_1$};
\node(t2) at (7,4.375) [trait]{$\eta_2$};
%\node(N) at (6,3.25) [trait] {$\chi_{3}$};
%observed Y
\draw(9,9.5)node { Y};
\node(y1) at (9,8.5) [obs]{$Y1$};
\node(y2) at (9,7.75) [obs] {$Y2$};
\node(y3) at (9,7) [obs]{$Y3$};
\node(y4) at (9,6.25) [obs]{$Y4$};
\node(y5) at (9,5.5) [obs] {$Y5$};
\node(y6) at (9,4.75) [obs] {$Y6$};
\node(y7) at (9,4) [obs]{$Y7$};
\node(y8) at (9,3.25) [obs]{$Y8$};
%\node(Y9) at (9,2.5) [obs] {$Y9$};
%error y
\draw(10.5,9.5)node {Error };
\node(e1) at (10.5,8.5) [int]{$\epsilon_{1}$};
\node(e2) at (10.5,7.75) [int] {$\epsilon_{2}$};
\node(e3) at (10.5,7) [int]{$\epsilon_{3}$};
\node(e4) at (10.5,6.25) [int]{$\epsilon_{4}$};
\node(e5) at (10.5,5.5) [int] {$\epsilon_{5}$};
\node(e6) at (10.5,4.75) [int] {$\epsilon_{6}$};
\node(e7) at (10.5,4) [int]{$\epsilon_{7}$};
\node(e8) at (10.5,3.25) [int]{$\epsilon_{8}$};
%\node(x9) at (10.5,2.5) [int] {$\epsilon_{9}$};
%errors in x
\draw[->](d1) to (x1);
\draw[->](d2) to (x2);
\draw[->](d3) to (x3);
\draw[->](d4) to (x4);
\draw[->](d5) to (x5);
\draw[->](d6) to (x6);
\draw[->](d7) to (x7);
\draw[->](d8) to (x8);
\draw[->](d9) to (x9);
%measurement model
\draw[->](c1) to (x1);
\draw[->](c1) to (x2);
\draw[->](c1) to (x3);
\draw[->](c2) to (x4);
\draw[->](c2) to (x5);
\draw[->](c2) to (x6);
\draw[->](c3) to (x7);
\draw[->](c3) to (x8);
\draw[->](c3) to (x9);
\draw[->](t1) to (y1);
\draw[->](t1) to (y2);
\draw[->](t1) to (y3);
\draw[->](t1) to (y4);
\draw[->](t2) to (y5);
\draw[->](t2) to (y6);
\draw[->](t2) to (y7);
\draw[->](t2) to (y8);
%latent model
\draw(4,9.5)node { Latent X};
\draw(7,9.5)node { Latent Y};
\draw[->](c1) to (t1);
\draw[->](c2) to (t1);
\draw[->](c2) to (t2);
\draw[->](c3) to (t2);
%errors in t
\draw[->](e1) to (y1);
\draw[->](e2) to (y2);
\draw[->](e3) to (y3);
\draw[->](e4) to (y4);
\draw[->](e5) to (y5);
\draw[->](e6) to (y6);
\draw[->](e7) to (y7);
\draw[->](e8) to (y8);
%\draw[->](d9) to (y9);
\end{tikzpicture}
\label{fig:overview}
\end{center}
\end{figure}
\section{Basic Descriptive Statistics}
Before any data analysis may be done, the data must be collected. This is more complicated than it seems, for it involves consideration of the latent variables of interest; the presumed observed markers of these latent variables; the choice of subjects (are they selected randomly or systematically, are they volunteers, are they WEIRD \citep{weird:10}); the means of data collection (self report, observer ratings, life narratives, computerized measurement, web-based measures, etc.); the number of times measures are taken (e.g., once, twice for test-retest measures or measures of change, multiple times in studies of growth or of emotions over time); the lags between repeated measures (minutes, hours, days, or years); and whether there are experimental manipulations to distinguish particular conditions \citep{rev:ea07}.
Once the data are collected, it is of course necessary to prepare them for analysis; that is, to transfer the original data into a form suitable for the computer. If hand coding must be done \citep[e.g., scoring life narratives;][]{guo:16}, the separate ratings must be entered in a way that allows a computer based analysis (e.g., a reliability calculation) to be made.
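For two raters coding the same material, one such reliability calculation is chance-corrected agreement, which may be found with the \pfun{cohen.kappa} function from the \Rpkg{psych} package. A minimal sketch with hypothetical ratings:
\begin{Rinput}
library(psych)                         #for the cohen.kappa function
rater1 <- c(1, 2, 2, 3, 1, 2, 3, 3)    #hypothetical codes from the first rater
rater2 <- c(1, 2, 3, 3, 1, 2, 3, 2)    #hypothetical codes from the second rater
cohen.kappa(cbind(rater1, rater2))     #agreement corrected for chance
\end{Rinput}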
Typically the data are organized as a two dimensional table (e.g., a spreadsheet in EXCEL or OpenOffice) with subjects as rows and variables as columns. If there are repeated measures per subject, the data might have a separate row for each occasion, but with one column identifying the subject and another the occasion. Consider the data in Table~\ref{tab:msq} which are taken from an example data set \pfun{msqR} in the \Rpkg{psych} package \citep{psych} for \R{}\footnote{In the following pages, we use typewriter text for \pfun{functions} and italics for \Rpkg{packages}.}. The \pfun{msqR} data set was collected over about 10 years as part of a long term series of studies of the interactive effect of personality and situational stressors on cognitive performance.
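Assuming the \Rpkg{psych} package has been installed and loaded as described above, the \pfun{msqR} data set is available by name, and a quick first look at its size and contents takes two function calls:
\begin{Rinput}
library(psych)    #make the functions and example data available
dim(msqR)         #how many rows (observations) and columns (variables)?
headTail(msqR)    #show the first and last few rows of the data
\end{Rinput}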
\begin{table}[htpb]
\caption{A representative sample of eight subjects with time 1 and time 2 observations on 10 emotion terms. The data are from the \pfun{msqR} data set (N=3,032) which has repeated measures for 2,086 participants. The full data set is used for many of the following examples. It is included in the \Rpkg{psych} package. The data are shown in `long' format with repeated measures `stacked' on top of each other to represent multiple time points. `Wide' format would represent the different time points as separate columns for each subject.}
\begin{center}
\begin{scriptsize}
\begin{tabular} {l r r r r r r r r r r r r }
% \multicolumn{ 12 }{l}{ A table from the psych package in R } \cr
\hline Line \# & {id} & {time} & {anxious} & {at.ease} & {calm} & {confident} & {content} & {jittery} & {nervous} & {relaxed} & {tense} & {upset}\cr
\hline
1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 1 & 0 & 2 & 0 & 0 \cr
2 & 2 & 1 & 1 & 2 & 2 & 1 & 2 & 0 & 1 & 2 & 1 & 1 \cr
3 & 3 & 1 & 2 & 2 & 2 & 2 & 2 & 0 & 1 & 2 & 1 & 0 \cr
4 & 4 & 1 & 0 & 2 & 2 & 2 & 3 & 0 & 0 & 2 & 0 & 0 \cr
5 & 5 & 1 & 0 & 3 & 3 & 2 & 2 & 1 & 0 & 2 & 0 & 0 \cr
6 & 6 & 1 & 1 & 3 & 2 & 3 & 3 & 0 & 0 & 3 & 0 & 0 \cr
7 & 7 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 2 & 0 & 0 \cr
8 & 8 & 1 & 0 & 2 & 3 & 1 & 2 & 0 & 0 & 2 & 0 & 0 \cr
69 & 1 & 2 & 1 & 2 & 2 & 2 & 2 & 1 & 0 & 1 & 0 & 0 \cr
70 & 2 & 2 & 1 & 2 & 2 & 1 & 2 & 1 & 1 & 2 & 1 & 1 \cr
71 & 3 & 2 & 1 & 2 & 1 & 2 & 2 & 0 & 1 & 2 & 1 & 0 \cr
72 & 4 & 2 & 1 & 1 & 0 & 2 & 3 & 1 & 1 & 1 & 1 & 0 \cr
73 & 5 & 2 & 0 & 2 & 3 & 2 & 1 & 0 & 0 & 2 & 0 & 0 \cr
74 & 6 & 2 & 1 & 2 & 2 & 3 & 3 & 1 & 0 & 3 & 0 & 0 \cr
75 & 7 & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \cr
76 & 8 & 2 & 0 & 2 & 2 & 1 & 2 & 0 & 0 & 2 & 0 & 0 \cr
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\label{tab:msq}
\end{table}
\if{FALSE}
msq.items <- c("anxious" , "at.ease" , "calm" , "confident", "content", "jittery",
"nervous" , "relaxed" , "tense" , "upset" ) #these overlap with the msq
df2latex(msqR[c(1:8,69:76),c(cs(id,time),msq.items)])
example <- msqR[c(1:8,69:76),c(cs(id,time),msq.items)]
\fi
An under-appreciated part of data analysis is the basic data cleaning necessary to work with real data. Mistakes are made at data entry, participants fall asleep, others drop out, some do not answer every question, and some are intentionally deceptive in their responses. It is important before doing any analysis to find basic descriptive statistics, to look for impossible responses, to examine the distribution of responses, and to attempt to detect outliers. However, as discussed by \cite{wilcox:01}, merely examining the shape of the distribution is not enough to detect outliers, and it is useful to apply \emph{robust estimators} of central tendency and relationships. The \Rpkg{WRS2} package \citep{WRS2} implements many of the robust statistics discussed by Wilcox and his colleagues \citep{wilcox:03,wilcox:05}. For example, the algebraic mean is just the sum of the observations divided by the number of observations. The trimmed mean is the same after a percentage (e.g., 10\%) of the observations are removed from the top and bottom of the distribution; it is more robust to outliers than is the algebraic mean. The median is the middle observation (the 50th percentile) and is an extreme example of a trimmed mean (with trim = .5). The minimum and maximum observations, and the resulting range, are most useful for detecting improper observations. Skew and kurtosis are functions of the third and fourth powers of the deviations from the mean \citep{mardia:70}. The \pfun{describe} function will also report various percentiles of the distribution, including the interquartile range (IQR; 25th to 75th percentiles) (Table~\ref{tab:describe}).
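The statistics in Table~\ref{tab:describe} were found by applying \pfun{describe} to the small subset of the \pfun{msqR} data shown in Table~\ref{tab:msq}:
\begin{Rinput}
library(psych)                    #for the describe function and the msqR data
small <- msqR[c(1:8, 69:76), ]    #the rows shown in the example table
describe(small, IQR=TRUE)         #means, trimmed means, medians, skew, kurtosis, IQR
\end{Rinput}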
\begin{table}[htpb]
\caption{Descriptive statistics for the data in Table~\ref{tab:msq}. An important step before doing any more advanced analysis is to search for outliers by examining the data for impossible values and comparing the algebraic mean with the trimmed mean and the median.}
\begin{center}
\begin{scriptsize}
\begin{tabular} {l r r r r r r r r r r r r r r }
% \multicolumn{ 13 }{l}{Descriptive statistics found by the \pfun{describe} function. } \cr
\hline Variable & {vars} & {n} & {mean} & {sd} & {median} & {trimmed} & {mad} & {min} & {max} & {range} & {skew} & {kurtosis} & {se} & {IQR}\cr
\hline
id & 1 & 16 & 4.50 & 2.37 & 4.5 & 4.50 & 2.97 & 1 & 8 & 7 & 0.00 & -1.45 & 0.59 & 3.50 \cr
time & 2 & 16 & 1.50 & 0.52 & 1.5 & 1.50 & 0.74 & 1 & 2 & 1 & 0.00 & -2.12 & 0.13 & 1.00 \cr
anxious & 3 & 16 & 0.62 & 0.62 & 1.0 & 0.57 & 0.74 & 0 & 2 & 2 & 0.35 & -0.96 & 0.15 & 1.00 \cr
at.ease & 4 & 16 & 1.94 & 0.57 & 2.0 & 1.93 & 0.00 & 1 & 3 & 2 & -0.02 & -0.19 & 0.14 & 0.00 \cr
calm & 5 & 16 & 1.88 & 0.81 & 2.0 & 1.93 & 0.00 & 0 & 3 & 3 & -0.51 & -0.20 & 0.20 & 0.25 \cr
confident & 6 & 16 & 1.75 & 0.68 & 2.0 & 1.71 & 0.74 & 1 & 3 & 2 & 0.29 & -1.04 & 0.17 & 1.00 \cr
content & 7 & 16 & 2.06 & 0.68 & 2.0 & 2.07 & 0.00 & 1 & 3 & 2 & -0.06 & -0.98 & 0.17 & 0.25 \cr
jittery & 8 & 16 & 0.44 & 0.51 & 0.0 & 0.43 & 0.00 & 0 & 1 & 1 & 0.23 & -2.07 & 0.13 & 1.00 \cr
nervous & 9 & 16 & 0.31 & 0.48 & 0.0 & 0.29 & 0.00 & 0 & 1 & 1 & 0.73 & -1.55 & 0.12 & 1.00 \cr
relaxed & 10 & 16 & 1.94 & 0.57 & 2.0 & 1.93 & 0.00 & 1 & 3 & 2 & -0.02 & -0.19 & 0.14 & 0.00 \cr
tense & 11 & 16 & 0.31 & 0.48 & 0.0 & 0.29 & 0.00 & 0 & 1 & 1 & 0.73 & -1.55 & 0.12 & 1.00 \cr
upset & 12 & 16 & 0.12 & 0.34 & 0.0 & 0.07 & 0.00 & 0 & 1 & 1 & 2.06 & 2.40 & 0.09 & 0.00 \cr
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\label{tab:describe}
\end{table}
\section{Tests of statistical significance: Normal theory and the bootstrap}
Those brought up in the Fisherian tradition of Null Hypothesis Significance Testing (NHST) traditionally compare fit statistics to their expected value given normal theory. Fits are converted into standardized scores (\emph{z} scores) and then probabilities are found from the normal distribution. This works well with large samples where errors are in fact random. For smaller samples, the variation of estimates of mean differences relative to the sample based standard error is larger than expected under the normal distribution. This problem led to the introduction of the \emph{t} statistic for comparing the means of smaller groups \citep{student:t} (\pfun{t.test}) and the \emph{r} to \emph{z} transformation (\pfun{r2z}) for tests of correlations \citep{fisher:21}. (Use \pfun{cor.test} for one correlation, \pfun{corr.test} for many correlations.) Most functions in \R{} will return both the statistic and the probability of that statistic. Many will also return a confidence interval for the statistic.
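In \R{}, these classic tests are single function calls. A sketch with randomly generated data:
\begin{Rinput}
set.seed(42)            #for a reproducible example
x <- rnorm(50)          #two hypothetical variables
y <- x/2 + rnorm(50)
t.test(x, y)            #Student's t, its probability, and a confidence interval
cor.test(x, y)          #the correlation, its test, and a confidence interval
\end{Rinput}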
But the probabilities (and therefore the confidence intervals) are a function of the \emph{effect size}, the sample size, and the distribution of the parameter being estimated.
Unfortunately, not all tests can be assumed to be normally distributed and it is unclear how to find the distribution of arbitrary parameters of a distribution (e.g., the median). Extending ideas such as the `Jackknife' proposed by \cite{tukey:58}, \cite{efron:79} proposed the `bootstrap' (having considered such names as the `shotgun'), a powerful use of random sampling \citep{efron:83}.
The basic concept of the bootstrap is to treat the observed sample as the entire population, and then to sample repeatedly from this `population' with replacement; find the desired estimate (e.g., the mean, the median, the regression weight) and then do this again, and again, and many times again. Each sample, although the same size as the original sample, will contain (on average) 63.2\% of the subjects in the original sample, with the remaining 36.8\% omitted and some subjects repeated more than once.\footnote{This perhaps unintuitive amount is $1-\frac{1}{e}$, the limit as the number of cases increases of the probability that a given subject is sampled at least once: $p = 1 -(1-\frac{1}{n})^n$.} The resulting distribution of the estimated value can be used to find confidence intervals without any appeal to normal theory.
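A minimal bootstrap of the median, written here in base \R{} to show the logic (the number of resamples and the data are illustrative):
\begin{Rinput}
set.seed(17)                           #for reproducibility
x <- rnorm(100)                        #treat this sample as the `population'
boot.medians <- replicate(10000,       #resample with replacement, many times
     median(sample(x, replace=TRUE)))  #the statistic of interest, each time
quantile(boot.medians, c(.025, .975))  #an empirical 95% confidence interval
\end{Rinput}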
For those who use NHST, it is important to understand that the probability that a real effect is detected (the power) is not the same as the probability of rejecting the `nil' hypothesis that an effect is 0 \citep{cohen:88,cohen:92,cohen:94,streiner:03}. A number of \R{} packages \citep[e.g. \Rpkg{pwr}, ][]{pwr} include easy to use power calculators to find the power of a design given an effect size, sample size, and desired $\alpha$ level.
One of the great advances of modern statistics is the use of the bootstrap and other randomization tests. In the space of seconds, 1,000 to 100,000 bootstrap resamples can be calculated for almost any statistic. We will use this procedure when we find the confidence intervals for correlation coefficients (Table~\ref{tab:Tal_Or}) and in particular for the effect of mediation in a regression model.
%1 - 1/e
\section{Correlation and Regression}
Originally proposed by \cite{galton:88} and refined by \cite{pearson:95} and \cite{spearman:rho}, the linear regression coefficient and its standardized version, the correlation coefficient, are the fundamental statistics of research. In terms of deviation scores ($x = X - \bar{X}$ and $y = Y - \bar{Y}$)
\begin{equation}
r_{xy} = \frac{\Sigma xy}{\sqrt{\Sigma{x^2}\Sigma{y^2}}}.
\label{eq:cor}
\end{equation}
Depending upon the characteristics of the data, the correlation as defined in Equation~\ref{eq:cor} has many different names. If both X and Y are continuous, the resulting correlation is known as the Pearson Product Moment Correlation Coefficient (or just the Pearson r); if both are converted to ranks, it is Spearman's $\rho$; and if both X and Y are dichotomous, it is the $\phi$ coefficient (Table~\ref{tab:cov.r}). The last three correlations shown (the biserial, tetrachoric, and polychoric) are estimates of what the correlation between two continuous latent variables ($\chi$ and $\psi$) would be had one or both not been artificially dichotomized (the biserial and tetrachoric) or split into multiple ordered levels (the polychoric).
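The point that several of these coefficients are the Pearson r in disguise is easy to verify numerically. This sketch (in Python rather than \R{}; the simulated data are an illustrative assumption) shows that computing the Pearson r on ranked data yields Spearman's $\rho$:

```python
import numpy as np

def pearson(x, y):
    """Pearson r from deviation scores (Equation 3)."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def spearman(x, y):
    """Spearman's rho is just the Pearson r computed on ranks."""
    rank = lambda v: v.argsort().argsort().astype(float)
    return pearson(rank(x), rank(y))

# A monotone but nonlinear relation: Spearman's rho is 1, Pearson r is not.
rng = np.random.default_rng(7)
x = rng.normal(size=200)
y = np.exp(x)
```

Because ranks are invariant under monotone transformations, $\rho$ is unaffected by the nonlinearity that attenuates the Pearson r here.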
Because the first four of these correlations are just the Pearson r applied to data in different forms, the same estimation function can be used: in core \R{} this is the \pfun{cor} function (or, to find covariances, the \pfun{cov} function). The last three require specialized functions written for polytomous (or dichotomous) data (i.e., the \Rpkg{psych} package functions \pfun{polyserial}, \pfun{tetrachoric}, and \pfun{polychoric}); all of these functions are combined in \pfun{mixed.cor}.
The tetrachoric correlations of ability data (answers are right or wrong) and the polychoric correlations of self report temperament scales (typically on a 1-4, 1-5, or 1-6 scale), being the modeled correlations of continuous latent scores, will be larger in absolute value than the Pearson correlations of the same data. In addition, these estimates of the latent correlations are not affected by differences in distributions in the way that the Pearson r on the observed variables is. An example of the difference between a tetrachoric correlation and a Pearson $\phi$ is seen in Table~\ref{tab:sdt1}, where $\phi = .32$ but the inferred correlation between the two continuous latent variables is .54.
\begin{table}[htpb]
\caption[Alternatives to the Pearson r]{A number of correlations are the Pearson r in different forms, or with particular assumptions. The first four use $r = \frac{ \sum x_iy_i}{\sqrt{\sum x_i^2\sum y_i^2}}$; the last three are based upon assumptions of normality of a latent X and Y, with an artificial dichotomization or categorization into discrete (but ordered) groups. }
\begin{tabular}{lllll} \hline
Coefficient &symbol& X & Y & Assumptions \\ \hline
Pearson &r& continuous & continuous& \\
Spearman &rho ($\rho$)& ranks & ranks & \\
Point bi-serial &$r_{pb}$ & dichotomous & continuous & \\
Phi &$\phi$ & dichotomous & dichotomous & \\
Bi-serial & $r_{bis}$ &dichotomous & continuous & normality \\
Tetrachoric &$r_{tet}$&dichotomous & dichotomous & bivariate normality \\
Polychoric &$r_{pc}$&categorical & categorical & bivariate normality \\ \hline
\end{tabular}
\label{tab:cov.r}
\end{table}
\subsection{The ubiquitous correlation coefficient}
The correlation is also a convenient measure of the size of an effect \citep{ozer:07}. It has long been known that the difference in means compared to the within-group standard deviation (the d statistic of \cite{cohen:62,cohen:88,cohen:92}) is a better way to compare two groups than Student's t statistic, for it is the size of the difference that is important, not its significance. An undue reliance on ``statistical significance" has ignored the basic observation that the
test of significance = size of effect $\times$ size of study \citep{rosenthal:94} and that the resulting p value is a non-linear function of the size of the effect. To remedy this problem, \cite{cohen:62} developed the d statistic for the comparison of two groups (use \pfun{cohen.d}). Generalizations for multiple groups or continuous variables allow the translation of many alternative indices of effect size into units of the correlation coefficient (see Table~\ref{tab:effectsize}). Robust alternatives to d (found by \pfun{d.robust}) express differences in terms of trimmed means and Winsorized variances \citep{algina:05,erceg:08}. Basic principles in reporting effect sizes are available in a recent tutorial \citep{pek:flora:18}.
\begin{table}[htpb]
\caption{Alternative estimates of effect size. Using the correlation as a scale-free estimate of effect size allows for combining experimental and correlational data in a metric that is directly interpretable: a standardized unit change in x leads to a change of r in standardized y. }
\begin{tabular}{llll}
\hline
Statistic & Estimate &r equivalent &as a function of r \cr \hline
Pearson correlation& $r_{xy} = \frac{C_{xy}}{ \sigma_x \sigma_y} $ &$r_{xy} $ &\cr
Regression & $b_{y.x} = \frac{C_{xy}}{\sigma_x^2}$ & $r = b_{y.x} \frac{\sigma_x}{\sigma_y}$ & $b_{y.x} = r \frac{\sigma_y}{\sigma_x}$ \cr
Cohen's d & $d = \frac{X_1 - X_2 }{ \sigma_x }$ & $r =\frac{ d}{\sqrt{d^2+4}}$ & $ d = \frac{2r}{\sqrt{1-r^2}}$\cr
Hedge's g &$g = \frac{X_1 - X_2}{s_x} $ & $r = \frac{g}{\sqrt{g^2 + 4(df/N)}}$ & $g = \frac{2r\sqrt{df/N}}{\sqrt{1-r^2}} $ \cr
t - test & $ t= \frac{d \sqrt{df}}{2}$ &$ r =\sqrt{ t^2 /(t^2 +df)} $ & $t = \sqrt{\frac{r^2 df}{1-r^2}} $ \cr
F-test &$ F = \frac{d^2 df}{4}$ &$ r = \sqrt{F/(F+df)} $ & $ F = \frac{r^2 df}{1-r^2} $\cr
Chi Square & &$ r = \sqrt{\chi^2/n}$ & $ \chi^2 = r^2 n $ \cr
Odds ratio&$ d= \frac{ ln(OR)}{1.81}$ &$ r= \frac{ln(OR)}{1.81\sqrt{(ln(OR)/1.81)^2+4}}$ &$ ln(OR) = \frac{3.62r}{\sqrt{1-r^2}}$ \cr
$r_{equivalent}$ &r with probability p & $r = r_{equivalent} $ & \cr \hline
\end{tabular}
\label{tab:effectsize}
\end{table}
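The conversion formulas in Table~\ref{tab:effectsize} are simple enough to check directly. Here is a sketch of the d and t conversions (in Python rather than \R{}; the function names are ours, not from any package), which also confirms that each pair of formulas is a proper inverse pair:

```python
import math

def d_to_r(d):
    """Cohen's d to r (equal group sizes assumed): r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r):
    """The inverse conversion: d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def t_to_r(t, df):
    """Student's t to an r-equivalent: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_to_t(r, df):
    """The inverse: t = sqrt(r^2 df / (1 - r^2))."""
    return math.sqrt(r ** 2 * df / (1 - r ** 2))
```

For example, a d of 0.5 (a `medium' effect) corresponds to an r of about .24.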
\subsection{Multiple Regression and the general linear model}
Just as the \emph{t}-test and the \emph{F}-test may be translated into correlation units, so they can be thought of in terms of the general linear model \citep{judd:mc}:
\begin{equation}
\vec{Y} = \vec{\mu} + \vec{\beta} \vec{X}+ \vec{\epsilon}.
\label{eq:glm}
\end{equation}
$\vec{X}$ can be an experimental \emph{design matrix} with one or more independent grouping variables, but it can also include a set of person variables. In the case of just one dichotomous grouping variable, Equation~\ref{eq:glm} is just the regression of the dependent variable on the two levels of $\vec{X}$ and is similar to the comparison of means in Student's \emph{t} \citep{student:t}. \emph{t} is typically expressed as the difference of means compared to the standard error of that difference, but is better expressed as an effect size multiplied by one half the square root of the degrees of freedom (df), or as the ratio of the correlation times the square root of the degrees of freedom to the coefficient of alienation \citep[$\sqrt{1-r^2}$, ][]{brogden:46}:
\begin{equation}
t= \frac{\bar{X_1} - \bar{X_2}}{\sqrt{\frac{\sigma^2_1}{n_1} + \frac{\sigma^2_2}{n_2}}} = \frac{d \sqrt{df}}{2} = \frac{r}{\sqrt{1-r^2}} \sqrt{df}
\end{equation}
where the degrees of freedom are $n_1 + n_2 -2$ \citep{rosnow:03}. The slope of the regression is the effect size; dividing this by the coefficient of alienation and multiplying by the square root of df converts the regression to a t. This test is found in \R{} with the \pfun{t.test} function.
If $\vec{X}$ has two categorical grouping variables (e.g., $x_1$ and $x_2$), then we have
\begin{equation}
\hat{y} = \mu + \beta_1x_1 + \beta_2 x_2 + \beta_{12}x_1 x_2 + \epsilon
\end{equation}
which for categorical values of $\vec{X}$ is just the traditional analysis of variance of two main effects and an interaction \citep{fisher:25}.
This may be found using the \pfun{aov} function, which acts on categorical variables and returns the traditional ANOVA output. With unbalanced repeated measures designs, the \pfun{lmer} function in the \Rpkg{lme4} package \citep{lme4} allows a specification of random and fixed effects.
The advantage of the general linear model for psychologists interested in individual differences is that continuous person variables can be included in the same model as experimental variables. This is a great improvement over prior approaches, which would artificially dichotomize the person variable into high and low groups in order to use an ANOVA approach. By retaining the continuous nature of the predictor, we improve the power over the ANOVA test.
As an example of using the general linear model, we use a data set from \cite{talor:10} that is discussed by \cite{hayes:13}. \cite{talor:10} measured the effect of an experimental manipulation of salience of a news article (cond) on presumed media influence (PMI), perceived importance of the issue (import), and reported willingness to change one's behavior (reaction)\footnote{With the kind permission of Nurit Tal-Or, Jonathan Cohen, Yariv Tsfati, and Albert C. Gunther, these data were added to the \Rpkg{psych} package as the \pfun{Tal\_Or} data set.}. The observed correlations are found by using the \pfun{lowerCor} function and are given in Table~\ref{tab:Tal_Or}.
\begin{table}[htpb]\caption{Correlations of the conditions with Perceived Media Influence, Importance of the message, and Reaction to the message \citep{talor:10}. As is traditional in NHST, correlations that are larger than would be expected by chance are marked with `magic asterisks'. Confidence intervals for these correlations are shown given normal theory (upper and lower normal) as well as estimated by 1,000 bootstrap resamplings of the data (lower and upper empirical).}
%\begin{center}
%\begin{scriptsize}
\begin{tabular} {l S S S S S S }
\multicolumn{ 4 }{l}{ The Tal-Or et al. correlation matrix from \pfun{lowerCor} } \cr
\hline Variable & {cond} & {pmi} & {imprt} & {rectn} & & \cr
\hline
cond & 1.00 & & & \cr
pmi & 0.18{*} & 1.00 & & \cr
import & 0.18{*} & 0.28{**} & 1.00 & \cr
reaction & 0.16 & 0.45{***} & 0.46{***} & 1.00 \cr
\hline
\multicolumn{7}{l}{\scriptsize{\emph{Note: }\textsuperscript{***}$p<.001$;
\textsuperscript{**}$p<.01$;
\textsuperscript{*}$p<.05$.}}
\end{tabular}
\begin{tabular} {l r r r r r r r r}
\multicolumn{ 7 }{l}{ Empirical and normal theory based confidence intervals from \pfun{cor.ci}. } \cr
\hline Variable & {lwr.m} & {lwr.n} & {estmt} & {uppr.n} & {uppr.m}\cr
\hline
cond-pmi & 0.01 & 0.01 & 0.18 & 0.36 & 0.36 \cr
cond-imprt & 0.02 & 0.00 & 0.18 & 0.35 & 0.35 \cr
cond-rectn & -0.01 & -0.01 & 0.16 & 0.33 & 0.33 \cr
pmi-imprt & 0.09 & 0.09 & 0.28 & 0.45 & 0.45 \cr
pmi-rectn & 0.31 & 0.31 & 0.45 & 0.57 & 0.56 \cr
imprt-rectn & 0.32 & 0.32 & 0.46 & 0.60 & 0.61 \cr
\hline
\end{tabular}
%\end{scriptsize}
%\end{center}
\label{tab:Tal_Or}
\end{table}
There are multiple ways to analyze these data. We could naively do three t-tests of the experimental manipulation, find all of the intercorrelations, or do a regression predicting reaction from the condition, perception of media influence (PMI) and perceived importance of the message (Importance). All of these alternatives are shown in the appendix. The \pfun{setCor} and \pfun{mediate} functions will also draw the regressions as path diagrams (Figure~\ref{fig:regression}).
\subsection{Mediation and Moderation}
In the \cite{talor:10} data set, the experimental manipulation affected the dependent variable of interest (reaction) but also two other variables (perceived media influence and perceived importance of the message). There is a direct effect of condition on reaction, as well as indirect effects through PMI and Importance. Conventional regression shows the direct effect of condition on reaction controlling for the indirect effects that go through PMI and import. The \emph{total effect} of condition on reaction is their covariance divided by the variance of condition and is known as the \emph{c} effect ($c= \frac{\sigma_{xy}}{\sigma^2_x}=\frac{ .125}{.25} = .5$). If we label the paths from cond to PMI ($a_1=.48$) and from cond to import ($a_2 = .63$), and the paths from the mediators to reaction ($b_1 = .40$, $b_2 = .32$), then the \emph{indirect effect} is the sum of the products through the two mediators ($a_1 b_1 + a_2 b_2 = .48 \times .40 + .63 \times .32 = .4$), and the \emph{direct effect} is the total effect less the indirect effect ($c' = c - (a_1 b_1 + a_2 b_2) = .1$). We say that the effect of the experimental manipulation is \emph{mediated} through its effect on perceived importance and perceived media influence. The error associated with the mediating term ($ab$) or the sum of product terms ($a_1 b_1 + a_2 b_2$) needs to be found by bootstrapping the model multiple times \citep{preacher:15,hayes:13,preacher:07,mackinnon:08}. By default, the \pfun{mediate} function in \Rpkg{psych} does 5,000 bootstrap iterations. See Appendix~\ref{app:mediation} for sample output. Other packages
in \R{} that are specifically designed to test mediation hypotheses include the \Rpkg{mediation} \citep{mediation} and \Rpkg{MBESS} \citep{MBESS} packages.
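The decomposition of the total effect into direct and indirect parts is simple arithmetic once the paths are estimated. A sketch of the computation (in Python; the coefficients are the values quoted in the text, and the variable names are ours):

```python
# Path coefficients as quoted in the text for the Tal-Or et al. model.
a1, b1 = 0.48, 0.40   # cond -> PMI,    PMI    -> reaction
a2, b2 = 0.63, 0.32   # cond -> import, import -> reaction

c = 0.125 / 0.25                 # total effect: cov(x, y) / var(x) = .5
indirect = a1 * b1 + a2 * b2     # sum of products through the mediators
c_prime = c - indirect           # direct effect: total minus indirect
```

What the bootstrap adds is not this point estimate but its sampling distribution, since the product of two regression coefficients is not normally distributed.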
\begin{figure}[htbp]
\begin{center}
%\includegraphics[width=7cm]{regression.pdf}
%\includegraphics[width=7cm]{mediation.pdf}
\includegraphics[width=15cm]{combined.pdf}
\caption{Regression and mediation approaches to the \cite{talor:10} data set. The curved lines represent covariances, the straight lines, regressions. Panel A shows the full regression model, Panel B shows the total effect (c=.5) and the direct effect (c' = .1) removing the indirect effect (ab) through PMI (.19) and through Import (.20). }
\label{fig:regression}
\end{center}
\end{figure}
When doing regressions, we sometimes are interested in the interactions of two of the predictor variables. For instance, when examining how women react to discriminatory treatment of a hypothetical other, \cite{garcia:10}\footnote{With the kind permission of Donna M. Garcia, Michael T. Schmitt, Nyla R. Branscombe, and Naomi Ellemers, the data are included as the \pfun{Garcia} data set in the \Rpkg{psych} package} considered the interactive effects of beliefs about inequality and type of protest (individual vs. collective vs. none) as they affected the appraisal of the other person. This example of a moderated regression is discussed by \cite{hayes:13}.
Interactions (also known as moderation, or moderated regression) are found by entering the product term of the two interacting variables. There are several questions to ask in this analysis that will change the interpretability of the results. For example, should the data be mean-centered before finding the product term, and should the path models be done using standardized or unstandardized regressions? The recommendation from \cite{aiken:west} and \cite{cohen:03} is to mean center; \cite{hayes:13}, however, rejects this advice. In both cases the interaction terms will be identical, but the main effects will differ depending upon whether or not the data are centered. The argument for mean centering is to remove the artificial correlation between the main effects and the interaction term: for positive numbers X and Y, their product XY will be highly correlated with both X and Y, which distorts the estimates of the linear effects of X and Y. The \pfun{setCor} and \pfun{mediate} functions will by default mean center the data before finding the product term, although this option can be modified. The \pfun{lm} function does not, and so we need to take an extra step to do so; the \pfun{scale} function will mean center (and, by default, standardize). The second question, whether or not to standardize, is one of interpretability. Unstandardized coefficients are in the units of the predictors and the criteria and show how much the DV changes per unit change in each IV. The standardized coefficients, on the other hand, are unit free and show how much change occurs per standard deviation change in the predictors. Standardization allows for easier comparison across studies, but at the cost of losing the direct meaning of the regression slope. In the Appendix we show the code for mean centering using \pfun{scale} and then using the \pfun{lm} function to do the regression with the interaction term. We also show how the \pfun{setCor} function combines both operations.
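The effect of mean centering on the product term is easy to demonstrate with simulated data. In this Python sketch (the uniform 1-7 `scale' scores are an illustrative assumption), the raw product correlates substantially with its components, while the centered product does not:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 7, n)   # illustrative 1-7 "scale" scores: all positive
z = rng.uniform(1, 7, n)

# The raw product term is highly correlated with its components ...
r_raw = np.corrcoef(x, x * z)[0, 1]

# ... but after mean centering, that artifactual correlation shrinks
# toward zero, leaving the interaction term itself unchanged in the model.
xc, zc = x - x.mean(), z - z.mean()
r_centered = np.corrcoef(xc, xc * zc)[0, 1]
```

Here x and z are independent, so any correlation between x and xz is purely an artifact of the positive scale values.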
\begin{figure}[htbp]
\begin{center}
%\includegraphics[width=7cm]{regression.pdf}
%\includegraphics[width=7cm]{mediation.pdf}
\includegraphics[width=15cm]{moderation.pdf}
\caption{Two ways of showing moderation effects: Panel A, as a path diagram with the product term or Panel B: as a plot of the continuous variable (sexism) showing the individual regression slopes for the three protest conditions. Data from \cite{garcia:10}. }
\label{fig:moderation}
\end{center}
\end{figure}
\subsection{Correlation, regression and decision making}
When reporting standardized regression weights ($\beta_i$), the amount of variance in the dependent variable accounted for by the regression model is $R^2 = \Sigma \beta_i r_i$.
However it is important to recognize that the slopes ($\beta_i$) are the optimal fit for the observed data and that the fit will probably not be as good in another sample. This problem of overfitting is particularly problematic in machine learning (see below) when the number of variables used in the regression is very large. Thus, regression functions will report the $R^2$ as well as shrunken or adjusted $R^2$ which estimate what the fit would be in another sample. For n subjects and k variables, the adjusted $\tilde R^2 = 1 - (1-R^2)\frac{n-1}{n-k -1} $ \citep{cohen:03}, that is, there will be more shrinkage for small sample sizes and a large number of predictors.
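The shrinkage formula is easily expressed as a function (a Python sketch of the equation above; the function name is ours):

```python
def adjusted_r2(r2, n, k):
    """Shrunken estimate of R^2 for n cases and k predictors:
    1 - (1 - R^2) (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

With $R^2 = .5$, the adjusted value is about .47 for 100 cases and 5 predictors, but only about .32 for 20 cases, illustrating the greater shrinkage for small samples.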
The $R^2$ for a particular model is maximized by using the regression weights, but because of what is known as the ``Robust beauty of improper linear models" \citep{dawes:79} or the principle that ``it don't make no nevermind" \citep{wainer:76}, as long as the predictors are moderately correlated with the criterion, using unit weights (1, 0, -1) works almost as well. Weights are said to be `fungible' \citep{waller:08,waller:10} in that an infinite set of weights will do almost as good a job as the optimal weights.
Although the variance in the criterion accounted for by the predictors is $R^2$, it is better to report the actual R, which reflects the amount of change in the criterion for unit changes in the predictors \citep{ozer:07}. Change is linear with $R$, not $R^2$. This is particularly important when discussing the correlation of a dichotomous predictor with a dichotomous outcome (e.g., applicants are selected or not selected for a job, they succeed or they fail). Consider the four outcomes shown in Table~\ref{tab:sdt} applied to a decision study by \cite{danielson:54} and elaborated on by \cite{wiggins:73}. Of 504 military inductees, 89 were later diagnosed as having psychiatric problems requiring their discharge. How well could this future diagnosis be predicted? Using a screening test given to all of the inductees, 55\% of the future psychiatric diagnoses could be predicted, with a false alarm (false positive) rate of 19\%. This leads to an accuracy of classification (Valid Positives + Valid Negatives) of .76 and a \emph{sensitivity} of .55 and a \emph{specificity} of .81 (Table~\ref{tab:sdt1}). In this kind of binary decision, the $\phi$ coefficient is a linear function of the difference between the percent of Valid Positives and the number expected due to the base rates (BR) times the selection ratio (SR):
\begin{equation}\phi = \frac{VP - BR * SR}{\sqrt{(BR)(1-BR)(SR)(1-SR)}}
\end{equation}
In the case of BR = SR = .5, 50\% accuracy means a 0 correlation, 60\% a correlation of .2, 70\% a correlation of .4, etc. That is, the number of correct predictions is a linear function of the correlation \citep{ozer:07,rosenthal:rubin:besd,wiggins:73}.
An alternative approach when considering accuracy in decision making is known as `signal detection theory', which was developed to model the detection of a signal in a background of noise \citep{green:sdt}. \emph{d'} (d-prime) reflects the sensitivity of the observer and $\beta$ the criterion the observer uses to make the decision. Similar ideas are seen in the NHST approach to significance testing, where effect size is equivalent to d' and the criterion used (.05, .01) is the decision criterion. The relationship of predicted accuracy to the selection ratio, the base rates, and the size of the correlation was discussed by \cite{taylor:russell}, who present tables for different values. The equivalence of these various procedures is seen in Figure~\ref{fig:sdt}, which presents graphically the cell entries in Table~\ref{tab:sdt1}. The \pfun{AUC} (area under the curve) function will take the two by two table of decision theory and report $d', \phi, r_{tetrachoric}$ as well as total accuracy, sensitivity, and specificity.
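All of the decision-theoretic indices can be computed directly from the cell proportions of a 2 x 2 table. This Python sketch (the variable names are ours) uses the cell values from the \cite{danielson:54} example in Table~\ref{tab:sdt1}, deriving the marginals from the cells:

```python
import math

# Cell proportions from the screening example: rows = later diagnosis
# (yes / no), columns = screening test prediction (positive / negative).
vp, fn = 0.097, 0.079   # valid positives, false negatives
fp, vn = 0.157, 0.667   # false positives, valid negatives

br = vp + fn            # base rate of later diagnosis
sr = vp + fp            # selection ratio of the screening test

accuracy    = vp + vn                  # valid positives + valid negatives
sensitivity = vp / (vp + fn)
specificity = vn / (vn + fp)
phi = (vp - br * sr) / math.sqrt(br * (1 - br) * sr * (1 - sr))
```

The same four cells are the input to the \pfun{AUC} function described in the text.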
\begin{table}[htbp]
\caption{The four outcomes of a decision. Subjects above a particular score on the decision axis are accepted, those below are rejected. Similarly, the criterion of success is such that those above a particular value are deemed to have succeeded, those below that value to have failed. All numbers are converted into percentages of the total. }
\begin{tabular}{llccc}
 & & \multicolumn{2}{c}{Decision = Predicted Outcome} & \cr
 & & Accept & Reject & \cr \cline{3-4}
Outcome & Success & Valid Positive (VP) & False Negative (FN) & Base Rate (BR) \cr
 & Failure & False Positive (FP) & Valid Negative (VN) & 1 - Base Rate (1-BR) \cr \cline{3-4}
 & & Selection Rate (SR) & 1 - Selection Rate (1-SR) & \cr
\end{tabular}
\begin{tabular}{lll}
Accuracy = & Valid Positive + Valid Negative \cr
Sensitivity = & Valid Positive /(Valid Positive + False Negative) \cr
Specificity = &Valid Negative / (Valid Negative + False Positive) \cr
Phi = &$\frac{ VP - BR * SR}{\sqrt{BR (1-BR) * SR * (1-SR) }}$
\end{tabular}
\label{tab:sdt}
\end{table}
\begin{table}[htbp]
\caption{Applying decision theory to a prediction problem: the case of predicting future psychiatric diagnoses from military inductees. (Data from \cite{danielson:54} as discussed by \cite{wiggins:73}.)}
Raw Data
\begin{tabular}{lrrr}
&Predicted Positive & Predicted Negative & Row Totals \cr
True Positive & 49 & 40 & 89 \cr
True Negative & 79 & 336 & 415 \cr
Column Totals & 128 & 376 & 504 \cr
\end{tabular}
Fraction of Total
\begin{tabular}{lrrr}
&Predicted Positive & Predicted Negative & Row Totals \cr
True Positive & .097 & .079 & .176 \cr
True Negative & .157 & .667 & .824 \cr
Column Totals & .254 & .746 & 1.00 \cr
\end{tabular}
\begin{tabular}{lrrrr}
Accuracy = &.097 + .667 = .76 \cr
Sensitivity = & .097/(.097 + .079) = .55 \cr
Specificity = &.667 / (.667+.157) = .81 \cr
Phi = &$\frac{ .097 - .176 \times .254}{\sqrt{.176 \times .824 \times .254 \times .746 }} = .32$\cr
\end{tabular}
\label{tab:sdt1}
\end{table}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{signaldetection.pdf}
\caption{Signal detection theory converts the frequencies of a 2 x 2 table into normal equivalents and shows the relative risks of false positives and false negatives. The number of valid positives will increase at a cost of increasing false positives. Figure from the \pfun{AUC} function with input from Table~\ref{tab:sdt1}. }
\label{fig:sdt}
\end{center}
\end{figure}
\section{Latent Variable Modeling: EFA, CFA and SEM}
There has long been tension in psychological research between understanding and causal explanation versus empirical prediction and control. The concept of the underlying but unobservable cause that accounts for the patterns of observed correlations was implicit in the measurement models of \cite{spearman:04} and considered explicitly by \cite{borsboom:03}. This is the logic of the reflective latent variable indicated by the paths in Figure~\ref{fig:overview} from the latent variables $\chi_1$ or $\chi_2$ to the observed variables $X_1 ... X_6$. Figure~\ref{fig:overview} represents multiple causal paths: from the latent $\chi_{1..3}$ to the observed $X_{1..9}$ and from the latent $\eta_{1..2}$ to the observed $Y_{1..6}$, as well as from the latent predictors ($\chi_{1..3}$) to the latent criteria ($\eta_{1..2}$). In this perspective, items are reflective measures of the latent trait \citep{loevinger:57,bollen:02} and can be thought to be caused by the latent trait. The contrasting approach of prediction and control was traditionally the domain of behaviorists emphasizing the power of environmental stimuli upon particular response patterns. Stimulus-response theory had no need for latent variables; outcomes were perfectly predictable from the stimulus conditions. In personality research this was the appeal of empirical keys for the MMPI \citep{mmpi:43,butcher:89} or Strong's Vocational Interest Test \citep{strong:27}, and continues now with the statistical learning procedures we will discuss later.
In this empirical approach, scales are composites formed of not necessarily related items. The items are said to be formative indicators that ``cause" the latent variable.
\subsection{Exploratory Factor Analysis}
The original concept for factor analysis was Spearman's recognition that the correlations between a number of cognitive ability tests were attenuated due to poor measurement. When correcting for measurement error (see the reliability section, where we discuss such corrections for attenuation), all of the cognitive domains were correlated almost perfectly. The underlying latent factor of these tests was thought to be a measure of general intelligence.
Although the initial calculations were done on tables of correlations, when a kindly mathematician told Thurstone in 1931 that his generalization of Spearman's procedure was just taking the square root of a matrix \citep{bock:07}, Thurstone immediately applied this new matrix algebra to his ability measures and produced his \emph{Vectors of the Mind} \citep{thurstone:33}. Correlations were no longer arranged in tables; they were now elements of ``correlation matrices". Factor analysis was seen as the approximation of a matrix by one of lesser rank. In modern terminology, factor analysis is just an \emph{eigen decomposition} problem and is a very straightforward procedure.
For any symmetric matrix $\vec{R}$ of rank n there is a set of \emph{eigenvectors} that solve the equation $\vec{x_i} \vec{R} = \lambda_i \vec{x_i}$, and the set of n eigenvectors are solutions to the equation
\begin{displaymath}
\vec{XR} = \vec{\lambda X}
\end{displaymath}
where $\vec{X}$ is a matrix of orthogonal eigenvectors and $\vec{\lambda }$ is a diagonal matrix of the \emph{eigenvalues}, $\lambda_i$. Finding the eigenvectors and eigenvalues is computationally tedious, but may be done using the \fun{eigen} function. That the vectors making up $ \vec{X}$ are orthogonal means that $\vec{XX'} = \vec{I}$
and they form the \iemph{basis space} for $\vec{R}$ that is: $\vec{R} = \vec{X \lambda X'}$.
In plain terms, it is possible to recreate the correlation matrix $\vec{R}$ in terms of an orthogonal set of vectors (the \iemph{eigenvectors}) scaled by their associated \iemph{eigenvalues}.
We can find the \emph{principal components} of $\vec{R}$ by letting $$\vec{C} = \vec{X}\sqrt{\vec{\lambda}} $$ and therefore
\begin{equation}
\vec{R} = \vec{CC}' .
\label{eq:pca}
\end{equation}
But such a decomposition is not very useful, because the size (rank) of the $\vec{X}$ matrix is the same as that of the original $\vec{R}$ matrix. However, if the components are in rank order of their eigenvalues, the first k ($k < n$) components will provide a better fit to the $\vec{R}$ matrix than any other set of k vectors. Such a principal components analysis (\emph{PCA}) is useful for optimally describing the observed variables. The components are merely weighted sums of the variables and may be used in applied prediction settings; they are the k orthogonal sums that best summarize the total variability of the correlation matrix. The \pfun{pca} function will do this analysis.
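The eigen decomposition and the resulting component loadings can be sketched in a few lines (here in Python with NumPy rather than the \R{} functions named in the text; the 3 x 3 matrix is an arbitrary illustrative example):

```python
import numpy as np

# A small positive-definite correlation matrix R (illustrative values).
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

# Eigen decomposition: R = X diag(lambda) X', with X'X = I.
lam, X = np.linalg.eigh(R)            # eigh is for symmetric matrices
order = np.argsort(lam)[::-1]         # sort by descending eigenvalue
lam, X = lam[order], X[:, order]

# Principal component loadings: C = X sqrt(diag(lambda)), so R = C C'.
C = X * np.sqrt(lam)
R_reconstructed = C @ C.T
```

Keeping only the first k columns of C gives the best rank-k approximation to R in the least-squares sense.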
An alternative model, the \emph{common factor} model, attempts to fit the variance that the n variables have in common and ignores the variance that is unique to each variable:
\begin{equation}
\vec{R} \approx \vec{FF}' + \vec{U}^2.
\label{eq:fa}
\end{equation}
where $\vec{F}$ is of rank k, and $\vec{U}^2$ is a diagonal matrix of rank n. The $\vec{U}^2$ matrix may be thought of as the residual variance when we subtract the model ($\vec{FF}'$) from the data ($\vec{R}$): $\vec{U}^2 = \vec{R} -\vec{FF}' $.
Although it would seem that these two equations (\ref{eq:pca}, \ref{eq:fa}) are quite similar, they are not. For in the first case, the components are formed from linear sums of the variables, while in the second, the variables reflect the linear sums of the factors.
Equation~\ref{eq:pca} can be solved directly for $\vec{C}$, but equation~\ref{eq:fa} has different solutions for $\vec{F}$ depending upon the values in the $\vec{U}^2$ matrix, which in turn depend upon the value of k. If we know the amount of variance each variable shares in common with all of the other variables (this is known as the \emph{communality} and is $h^2_i = 1 - \vec{U}^2_i$), then we can solve for the factors. But, unfortunately, we do not know $\vec{U}^2$ unless we know $\vec{F}$. The solution to this conundrum takes advantage of the power of computers to do \emph{iterative} solutions. Make an initial guess of $\vec{U}^2$, solve equation~\ref{eq:fa} for $\vec{F}$, and take the resulting $\vec{U}^2$ as the input for the next iteration. Repeat these steps until the change in $\vec{U}^2$ from one step to the next is very small, and then quit \citep{spearman:27,thurstone:33,thurstone:34,thurstone:35}.
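The iterative algorithm just described can be sketched directly (in Python; this is an illustrative implementation of principal-axis factoring under the stated scheme, not the code of any particular \R{} function):

```python
import numpy as np

def principal_axis(R, k, n_iter=100, tol=1e-8):
    """Iterated principal-axis factoring: put communality estimates on the
    diagonal of R, take the first k principal components of the reduced
    matrix, and repeat until the communalities stop changing."""
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # initial guesses: the SMCs
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)             # reduced correlation matrix
        lam, X = np.linalg.eigh(Rr)
        lam, X = lam[::-1][:k], X[:, ::-1][:, :k]
        F = X * np.sqrt(np.maximum(lam, 0))  # factor loadings
        h2_new = (F ** 2).sum(axis=1)        # updated communalities
        if np.max(np.abs(h2_new - h2)) < tol:
            break
        h2 = h2_new
    return F, h2_new
```

Applied to a correlation matrix that exactly fits a one-factor model, the iteration recovers the generating loadings.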
Consider the correlation matrix in Table~\ref{tab:thurstone}. A conventional initial estimate for the communalities (diag($\vec{I}$-$\vec{U}^2$)) might be the Squared Multiple Correlation (SMC) of each variable with all the others (the last line of Table~\ref{tab:thurstone} shows these values). Enter these in the diagonal of the matrix and solve for $\vec{F}$. Unfortunately, exploratory factor analysis is not quite as simple as this, for there are at least four decisions that need to be made: what kind of correlation to use, which factor extraction algorithm to use, how many factors to extract, and what rotation or transformation should be applied?
\begin{table}[htpb]\caption{The \pfun{Thurstone} correlation matrix is a classic data set discussed in detail by R. P. McDonald \citep{mcdonald:85,mcdonald:tt} and is used as an example in the \Rpkg{sem} package as well as in the PROC CALIS manual for SAS. These nine tests were grouped by \cite{thurstone:41} (based on other data) into three factors: Verbal Comprehension, Word Fluency, and Reasoning. The original data came from \cite{thurstone:41} but were reanalyzed by \cite{bechtoldt:61}, who broke the data set into two. McDonald, in turn, selected these nine variables from the larger set of 17 found in \pfun{Bechtoldt.2}. The sample size is 213. }
\begin{center}
\begin{scriptsize}
\begin{tabular} {l r r r r r r r r r }
% \multicolumn{ 9 }{l}{ A correlation table from the psych package in R. } \cr
\hline Variable & {Sntnc} & {Vcblr} & {Snt.C} & {Frs.L} & {F.L.W} & {Sffxs} & {Ltt.S} & {Pdgrs} & {Ltt.G}\cr
\hline
Sentences & 1.00 & & & & & & & & \cr
Vocabulary & 0.83 & 1.00 & & & & & & & \cr
Sent.Completion & 0.78 & 0.78 & 1.00 & & & & & & \cr
First.Letters & 0.44 & 0.49 & 0.46 & 1.00 & & & & & \cr
Four.Letter.Words & 0.43 & 0.46 & 0.42 & 0.67 & 1.00 & & & & \cr
Suffixes & 0.45 & 0.49 & 0.44 & 0.59 & 0.54 & 1.00 & & & \cr
Letter.Series & 0.45 & 0.43 & 0.40 & 0.38 & 0.40 & 0.29 & 1.00 & & \cr
Pedigrees & 0.54 & 0.54 & 0.53 & 0.35 & 0.37 & 0.32 & 0.56 & 1.00 & \cr
Letter.Group & 0.38 & 0.36 & 0.36 & 0.42 & 0.45 & 0.32 & 0.60 & 0.45 & 1.00 \cr
\hline
SMC & 0.74 & 0.75 & 0.67 & 0.55 & 0.52 & 0.43 & 0.48 & 0.45 & 0.43 \cr
\end{tabular}
\end{scriptsize}
\end{center}
\label{tab:thurstone}
\end{table}
\subsection{Which correlation?}
If the data are continuous (or have at least 8-10 response levels), then the normal Pearson r is the appropriate measure of relationship. But if the data are dichotomous (as would be the case for items scored correct/incorrect on an ability test) or polytomous (as is normally the case when scoring personality questionnaires with a 1-5 or 1-6 rating scale), then it is better to use the \pfun{tetrachoric} correlation (for dichotomous items) or its generalization to polytomous items, the \pfun{polychoric} correlation. The principal reason for doing so is that the Pearson correlation between items that differ in their mean endorsement rates cannot be large and is thus attenuated. As discussed earlier, the tetrachoric is the modeled correlation of the latent traits affecting the scores on the items, not of the observed item scores themselves.
Unfortunately, using tetrachoric correlations will frequently produce correlation matrices which are said to be non-positive-definite, which means some of the eigen values of the matrix are negative. With appropriate assumptions, such matrices can be corrected (\emph{smoothed}) by adding a small number to any negative eigen value, adjusting the positive ones to keep the same total, and then recreating the matrix from the original eigen vectors and the adjusted eigen values \citep{wothke:93}. This is done in the \pfun{cor.smooth} function.
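The smoothing idea can be sketched directly. The function below illustrates the logic; it is not the exact code of \pfun{cor.smooth}.

```r
# Smooth a non-positive-definite correlation matrix by raising negative
# eigen values, rescaling to keep the same total variance, and rebuilding R.
smooth.sketch <- function(R, eps = 1e-12) {
  e <- eigen(R)
  if (min(e$values) < eps) {
    v <- pmax(e$values, eps)          # bump negative eigen values up
    v <- v * sum(e$values) / sum(v)   # preserve the total (the trace of R)
    R <- e$vectors %*% diag(v) %*% t(e$vectors)
    R <- cov2cor(R)                   # restore a unit diagonal
  }
  R
}
```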
\subsubsection{Factor extraction}
Factors are approximate solutions to Equation~\ref{eq:fa} and have a degree of misfit. Each factoring method attempts to minimize this misfit. The basic fitting equation is
\begin{equation}
E = \frac{1}{2} tr[(\vec{R} - \vec{FF}') \vec{W}]^2
\end{equation}
where \emph{tr} means the trace (sum of the diagonal elements) of a matrix. If $\vec{W}$ is the identity matrix, minimizing E is equivalent to ordinary least squares (OLS); if $\vec{W} = \vec{R}^{-1}$, it is generalized least squares (GLS); and if $\vec{W} = (\vec{FF}')^{-1}$, it is maximum likelihood (ML) \citep{loehlin:04}. Maximum likelihood \citep{lawley:62,lawley:63} has the advantage that under normal theory it finds the model that maximizes the likelihood of the data given the model, but with the disadvantage that it requires taking the inverse of the model. GLS is a close approximation of ML, but requires that the original correlation matrix be invertible. OLS does not require taking inverses but will not produce `optimal' solutions (in the ML sense). OLS (and the variant known as minimum residual \citep{harman:1966}) has the advantage that it is more robust to violations of the model and will produce meaningful solutions even in the presence of many minor `nuisance' factors \citep{maccallum:07}. Empirically, although not minimizing the ML criterion, \emph{minres} solutions are very close to it. All of these factor extraction techniques are available in the \pfun{fa} function in the \Rpkg{psych} package, as are alpha factoring \citep{kaiser:65} and minimum rank factoring \citep{shapiro:mrfa}.
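For the OLS case ($\vec{W} = \vec{I}$), the criterion is easy to write down directly. This sketch evaluates the misfit for a given loading matrix, ignoring the diagonal as the \emph{minres} criterion does.

```r
# OLS/minres-style misfit: half the sum of squared off-diagonal residuals
# of R - FF'.  R is the observed correlation matrix, fl a loading matrix.
ols.misfit <- function(R, fl) {
  resid <- R - fl %*% t(fl)
  diag(resid) <- 0          # the diagonal is absorbed by the uniquenesses
  sum(resid^2) / 2
}
```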
\subsubsection{Number of factors}
An unsolved problem in EFA is how many factors to extract. Henry Kaiser is said to have solved the problem every day before breakfast, but the challenge is to find \emph{the} solution \citep{horn:79}. Perhaps the best known solution \citep{kaiser:70} is also the worst: extract as many factors as there are principal components with eigen values larger than 1. This procedure, although the default for many commercial packages, routinely extracts too many factors \citep{revelle:vss}. Statistical criteria (e.g., extract factors as long as the $\chi^2$ of the residual matrix is significant) suffer from the problem of being dependent upon sample size: the larger the sample, the more factors are extracted. An appealing technique is to plot the successive eigen values and look for a sharp break: where the \emph{scree} of trivial factors suddenly jumps to larger values, stop factoring \citep{cattell:scree}. Another useful technique involving the plot of the eigen values is to compare the observed values with those from random data \citep{horn:65}. When the observed eigen values are less than those from random data, too many factors have been extracted. This is a useful rule of thumb, but it seems to break down with more than about 500-1000 subjects, at which point the random eigen values are all essentially 1.0. Yet another approach is to plot the average minimum partial correlation of the residual matrix; where this achieves a minimum is an appropriate place to stop \citep{velicer:76}. For factoring items, a comparison of the goodness of fit of models which zero out all except the largest loading for each item seems to produce a reasonable estimate \citep{revelle:vss}. Finally, continuing the extraction of factors as long as they are interpretable is not unreasonable advice, although those of us unable to interpret many factors will tend to be biased towards extracting fewer. The \pfun{nfactors} function applies all of these tests, but unfortunately the typical result is that none of them agree.
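Several of these rules can be applied in one line each with functions from the \Rpkg{psych} package (function names as in recent versions of the package; the graphical output is not shown here).

```r
library(psych)
fa.parallel(Thurstone, n.obs = 213)  # observed vs. random-data eigen values
vss(Thurstone, n.obs = 213)          # Very Simple Structure and Velicer's MAP
nfactors(Thurstone, n.obs = 213)     # apply the full battery of tests
```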
\subsubsection{Rotations and Transformations}
Given a factor solution $\vec{F}$ with elements (loadings) $f_{ij}$, what is the best way to interpret it? The loadings reflect the correlation of the factors with the items and differ by item and by factor. The sum of the squared loadings for each item (row wise) is the amount of variance in that item accounted for by all of the factors. This is known as the communality ($h^2_i = \Sigma{f_{ij}^2}$). Items with high communality are well explained by the factors; those with low communality are badly explained. For the same value of communality, a variable is said to be more complex if several factors are needed to explain its variance (several high loadings) and less complex if just one factor has a high loading. An index of item complexity is $c_i = \frac{(\Sigma f_{ij}^2)^2}{\Sigma f_{ij}^4}$, which achieves its minimum of 1 if all of the explained variance in an item is due to one factor \citep{hofmann:78}. A similar measure of factor complexity performs the same operation column wise. Multiplying the $\vec{F}$ matrix by an orthogonal transformation matrix ($\vec{T}$) will not change the communalities but can change the item and factor complexities. In the orthogonal case, this is known as rotation; if the resulting solution has correlated factors, we should refer to it as an oblique transformation. We want to choose a transformation that provides a more `simple structure' \citep{thurstone:47} than the original $\vec{F}$ matrix. A number of different solutions to this problem take advantage of the \Rpkg{GPArotation} package \citep{GPA} and are included in the \pfun{fa} function. \cite{browne:01} discusses how many of these are part of the \cite{crawford:70} family of rotations. 
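The communality and complexity indices just defined are one-liners given a loading matrix. The helper functions below are illustrative, not part of \Rpkg{psych} (which reports both in its \pfun{fa} output).

```r
# Row-wise communality and Hofmann (1978) complexity for a loading matrix fl.
communality <- function(fl) rowSums(fl^2)
complexity  <- function(fl) rowSums(fl^2)^2 / rowSums(fl^4)  # 1 if a single
                                            # factor explains all of an item
```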
Some of the most frequently used include \emph{Varimax} \citep{kaiser:58,kaiser:70} and \emph{Quartimax} \citep{neuhaus} for orthogonal rotations, and \emph{oblimin} \citep{harman:1976,jennrich:79}, \emph{Promax} \citep{promax}, \emph{Bifactor} \citep{holzinger:37,reise:12}, and \emph{Geomin} \citep{yates:88} for oblique solutions. Unfortunately, some of these rotation procedures can converge to local minima of their fitting functions, so multiple random restarts are recommended to confirm a solution.
The net result of an oblique transformation is the factor \emph{pattern} matrix ($\vec{F}$) and the factor \emph{structure} matrix ($\vec{S} = \vec{F \phi}$), where $\vec{\phi}$ is the matrix of correlations between the factors. When reporting an oblique transformation, it is important to show both the pattern ($\vec{F}$) and the factor correlations ($\vec{\phi}$).
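Given an oblique solution from \pfun{fa}, the structure matrix follows directly from the pattern matrix and $\vec{\phi}$:

```r
library(psych)
f <- fa(Thurstone, nfactors = 3, n.obs = 213)  # oblimin is the default rotation
S <- f$loadings %*% f$Phi                      # structure = pattern %*% phi
```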
\subsubsection{Factor score indeterminacy}
A problem with factor analysis is that although the model is well defined at the structure level (modeling the covariances of the variables) it is indeterminate at the individual score level \citep{grice:01}. Factor scores are best estimates of an individual's score but should not be equated with the factor. Factor score estimates correlate with the latent factors, but this correlation may be far from unity. The \pfun{fa} function returns the correlation of the factor scores with the factor. If the correlation is less than .707 (an $R^2$ of .5), then two estimates of the factor score vector may actually be negatively correlated with each other. Correlations of the factors with the factor score estimates are a function of the number of variables marking the factor as well as the communality of the variables.
\if{FALSE}
\begin{table}[htpb]\caption{Comparing the Minimum Residual, Maximum Likelihood, and Alpha factor analyses.}
\begin{center}
\begin{scriptsize}
\begin{tabular} {l r r r r r r r r r }
% \multicolumn{ 9 }{l}{ A table from the psych package in R } \cr
\hline Variable & {MR1} & {MR2} & {MR3} & {MLE1} & {MLE.2} & {MLE.3} & {alp.1} & {alp.2} & {alp.3}\cr
\hline
Sentences & 0.90 & -0.03 & 0.04 & 0.91 & -0.04 & 0.04 & 0.90 & -0.03 & 0.04 \cr
Vocabulary & 0.89 & 0.06 & -0.03 & 0.89 & 0.06 & -0.03 & 0.89 & 0.07 & -0.03 \cr
Sent.Completion & 0.84 & 0.03 & 0.00 & 0.83 & 0.04 & 0.00 & 0.84 & 0.03 & 0.01 \cr
First.Letters & 0.00 & 0.85 & 0.00 & 0.00 & 0.86 & 0.01 & 0.00 & 0.85 & 0.00 \cr
Four.Letter.Words & -0.02 & 0.75 & 0.10 & -0.01 & 0.74 & 0.10 & -0.02 & 0.75 & 0.11 \cr
Suffixes & 0.18 & 0.63 & -0.08 & 0.18 & 0.63 & -0.08 & 0.18 & 0.62 & -0.08 \cr
Letter.Series & 0.03 & -0.01 & 0.84 & 0.03 & -0.01 & 0.84 & 0.03 & -0.01 & 0.84 \cr
Pedigrees & 0.38 & -0.05 & 0.46 & 0.37 & -0.05 & 0.47 & 0.38 & -0.05 & 0.46 \cr
Letter.Group & -0.06 & 0.21 & 0.63 & -0.06 & 0.21 & 0.64 & -0.06 & 0.21 & 0.63 \cr
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\label{default}
\end{table}
\fi
\begin{table}[htpb]\caption{The \emph{minres} solution using the \pfun{fa} function of the \pfun{Thurstone} data set. The factor solution was then transformed to simple structure using the \pfun{oblimin} transformation. $h^2$ is the communality estimate, $u^2$ is the unique variance associated with the variable, and \emph{com} is the degree of item complexity. The \emph{pattern} coefficients are shown, as well as the correlations ($\phi$) between the factors. Because this is an oblique solution, the correlation matrix is reproduced by $\vec{F \phi F}' + \vec{U}^2$. The sums of squares for an oblique solution are the diagonal elements of $\vec{\phi F}' \vec{F} $.}
\begin{center}
\begin{scriptsize}
\begin{tabular} {l r r r r r r }
%\multicolumn{ 6 }{l}{ A factor analysis table from the psych package in R } \cr
\hline Variable & MR1 & MR2 & MR3 & h2 & u2 & com \cr
\hline
Sentences & \bf{ 0.90} & -0.03 & 0.04 & 0.82 & 0.18 & 1.01 \cr
Vocabulary & \bf{ 0.89} & 0.06 & -0.03 & 0.84 & 0.16 & 1.01 \cr
Sent.Completion & \bf{ 0.84} & 0.03 & 0.00 & 0.74 & 0.26 & 1.00 \cr
First.Letters & 0.00 & \bf{ 0.85} & 0.00 & 0.73 & 0.27 & 1.00 \cr
Four.Letter.Words & -0.02 & \bf{ 0.75} & 0.10 & 0.63 & 0.37 & 1.04 \cr
Suffixes & 0.18 & \bf{ 0.63} & -0.08 & 0.50 & 0.50 & 1.20 \cr
Letter.Series & 0.03 & -0.01 & \bf{ 0.84} & 0.73 & 0.27 & 1.00 \cr
Pedigrees & \bf{ 0.38} & -0.05 & \bf{ 0.46} & 0.51 & 0.49 & 1.96 \cr
Letter.Group & -0.06 & 0.21 & \bf{ 0.63} & 0.52 & 0.48 & 1.25 \cr
\hline \cr SS loadings & 2.65 & 1.87 & 1.49 \cr
\cr
\hline \cr
MR1 & 1.00 & 0.59 & 0.53 \cr
MR2 & 0.59 & 1.00 & 0.52 \cr
MR3 & 0.53 & 0.52 & 1.00 \cr
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\label{default}
\end{table}
\begin{table}[htpb]\caption{Comparing the Varimax orthogonally rotated PCA ($RC_{1..3}$) Minimum Residual ($MR_{1..3}$), obliquely transformed PCA ($TC_{1..3}$) and oblique Minimum Residual ($MR_{1..3}$) solutions. In order to show the structure more clearly, loadings $> .30$ are boldfaced.}
\begin{center}
\begin{scriptsize}
\begin{tabular} {l r r r r r r r r r r r r }
% \multicolumn{ 12 }{l}{ A table from the psych package in R } \cr
\hline Variable & {RC1} & {RC2} & {RC3} & {MR1} & {MR2} & {MR3} & {TC1} & {TC2} & {TC3} & {MR1} & {MR2} & {MR3}\cr
\hline
Sentences & \bf{0.86} & 0.24 & 0.23 & \bf{ 0.90} & 0.01 & 0.03 & \bf{0.83} & 0.25 & 0.26 & \bf{ 0.90} & -0.03 & 0.04 \cr
Vocabulary & \bf{0.85} & \bf{0.31} & 0.19 & \bf{ 0.88} & 0.10 & -0.02 & \bf{0.83} & \bf{0.32} & 0.22 & \bf{ 0.89} & 0.06 & -0.03 \cr
Sent.Completion & \bf{0.85} & 0.26 & 0.19 & \bf{ 0.89} & 0.04 & -0.01 & \bf{0.78} & 0.28 & 0.23 & \bf{ 0.84} & 0.03 & 0.00 \cr
First.Letters & 0.23 & \bf{0.82} & 0.23 & 0.03 & \bf{ 0.84} & 0.07 & 0.23 & \bf{0.79} & 0.23 & 0.00 & \bf{ 0.85} & 0.00 \cr
Four.Letter.Words & 0.18 & \bf{0.79} & \bf{0.30} & -0.03 & \bf{ 0.81} & 0.16 & 0.21 & \bf{0.71} & 0.29 & -0.02 & \bf{ 0.75} & 0.10 \cr
Suffixes & \bf{0.31} & \bf{0.77} & 0.06 & 0.17 & \bf{ 0.79} & -0.14 & \bf{0.31} & \bf{0.62} & 0.13 & 0.18 & \bf{ 0.63} & -0.08 \cr
Letter.Series & 0.25 & 0.16 & \bf{0.83} & 0.10 & -0.01 & \bf{ 0.84} & 0.23 & 0.18 & \bf{0.80} & 0.03 & -0.01 & \bf{ 0.84} \cr
Pedigrees & \bf{0.53} & 0.08 & \bf{0.61} & \bf{ 0.49} & -0.14 & \bf{ 0.55} & \bf{0.45} & 0.17 & \bf{0.52} & \bf{ 0.38} & -0.05 & \bf{ 0.46} \cr
Letter.Group & 0.10 & \bf{0.31} & \bf{0.80} & -0.11 & 0.21 & \bf{ 0.82} & 0.16 & \bf{0.31} & \bf{0.63} & -0.06 & 0.21 & \bf{ 0.63} \cr
\hline
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\label{default}
\end{table}
\subsection{Confirmatory Factor Analysis}
With the introduction of statistical measures of fit, it is now possible to fit and then test particular factor models \citep{joreskog:78,rindskopf:88}. Because most models do not fit in an absolute sense, model comparison is recommended. Fit statistics include $\chi^2$; the Root Mean Square Error of Approximation (RMSEA), which adjusts the $\chi^2$ for the degrees of freedom and sample size (N) (RMSEA = $\sqrt{\frac{\chi^2 -df}{df (N-1)}}$); the standard deviation of the residuals (RMSR); the Akaike Information Criterion ($AIC = \chi^2 + k (k-1) -2df$), which also considers the number of variables in the model (k); and the Bayesian Information Criterion ($BIC = \chi^2 + ln(N)(k(k+1)/2 + df)$). These fit statistics are actually estimates of misfit: the larger the $\chi^2$, the less well the model fits the data. The question then becomes how bad is bad? \cite{barrett:07} gives a very strict interpretation of what makes a good model; \cite{marsh:04} suggest there is no golden rule of fit; and \cite{loehlin:17} give a very useful discussion of how to report fit statistics. The most important thing to remember is that this is a model comparison procedure where we compare multiple models to see which is better, not which is correct.
EFA is seen as a hypothesis generation procedure and CFA as a hypothesis confirmation procedure: the initial model might be derived from an EFA and then tested using CFA on a different data set. A powerful but easy to use package to do this is \Rpkg{lavaan} \citep{lavaan}. Other CFA packages include \Rpkg{sem} \citep{sem:16} and \Rpkg{OpenMX} \citep{OpenMX}. \Rpkg{lavaan} syntax is very straightforward, and allows one to specify and test any particular model.
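As an illustration, a three-factor structure for the \pfun{Thurstone} variables can be expressed in \Rpkg{lavaan} syntax as follows. The factor names are arbitrary labels, and the variable names must match those of the correlation matrix actually being analyzed (the names below follow the table shown earlier; data sets distributed elsewhere may abbreviate them differently).

```r
library(lavaan)
# A three-factor CFA model in lavaan syntax: =~ means "is measured by".
model <- '
  verbal    =~ Sentences + Vocabulary + Sent.Completion
  wordflu   =~ First.Letters + Four.Letter.Words + Suffixes
  reasoning =~ Letter.Series + Pedigrees + Letter.Group
'
fit <- cfa(model, sample.cov = Thurstone, sample.nobs = 213)
summary(fit, fit.measures = TRUE)  # chi-square, RMSEA, AIC, BIC, etc.
```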
An important use of CFA is evaluating whether the factor structure of a set of variables is the same across groups or across time, for it is not possible to make such comparisons if the measures themselves differ. Three tests of \emph{factor invariance} are typically considered: configural, metric (also known as weak), and scalar (also known as strong) invariance. Configural invariance asks whether the structure is the same across groups, metric invariance asks whether the loadings are the same (or do not differ very much), and scalar invariance asks whether the means and intercepts of the factors are the same. Testing for invariance is thus a set of comparisons of structures across groups. The \pfun{measurementInvariance} function in the \Rpkg{semTools} package \citep{semTools} has been developed to do this in \Rpkg{lavaan}.
\subsection{Structural Equation Modeling}
Combining observations, latent variables, and regression into one structural model (see Figure~\ref{fig:overview}) seems an obvious step. How to combine the path tracing rules of \cite{wright:20,wright:21} with factor analysis and reliability theory by using modern estimation algorithms was, however, an important insight. Developed independently by \cite{keesling}, \cite{joreskog:77} and \cite{wiley:73}, fitting regression models with latent variables was soon identified with a computer algorithm for Linear Structural Relations (LISREL) \citep{joreskog:78,joreskog:93} (see \cite{tarka:18} for a thorough history). Very influential texts on SEM include those of \cite{bollen:89,bollen:02}, \cite{loehlin:17} and \cite{mulaik:09}. How to report SEM results is discussed by \cite{mcdonald:02} and others.
In addition to LISREL, the development of the proprietary programs EQS \citep{bentler:eqs} and MPLUS \citep{mplus} made SEM available to many. Now, with the introduction into \R{} of the \Rpkg{sem} \citep{sem}, \Rpkg{lavaan} \citep{lavaan} and \Rpkg{OpenMx} \citep{OpenMX} packages, SEM and CFA are part of the open source armamentarium for all. Bayesian approaches are available in the \Rpkg{blavaan} package \citep{blavaan}, which takes advantage of the \Rpkg{lavaan} package.
Combining formative causal variables with reflective indicator variables is done in MIMIC (multiple indicators, multiple causes) models, in which a latent variable is formed from a set of causal variables and in turn causes another latent variable that is indicated by a number of reflective measured variables. An early example of a MIMIC model is the causative effect of education, occupation, and income on the latent variable of social status, which in turn affects the latent variable of social participation, which is measured by church attendance, memberships, and the number of friends seen \citep{joreskog:mimic}.
Perhaps one of the most powerful uses of SEM techniques is in examining growth curves over time. Given a set of participants at time 1, what happens to them over weeks, months, or years? Growth curve models allow for the separation of trait and state effects \citep{cole:05} as well as an examination of change \citep{mcardle:lca,mcardle:09}. Tutorials for using \Rpkg{lavaan} include growth curve analysis and are included in the help pages for \Rpkg{lavaan}.
In an important generalization of the problem of fungible regression weights \citep{waller:08,waller:10}, \cite{maccallum:fungible} showed how, with a very small decrease in fit (increase in RMSEA), path coefficients in equivalent models that fit equally well can actually differ in sign. This is just one of many cautions about how to interpret SEM results. For SEM, fit statistics are merely fits of a model to the data; what is needed are comparisons of the fit of alternative or equivalent models. It is important to realize that reversing the direction of causal arrows in many SEM models does not change the fit, but drastically changes the interpretation \citep{maccallum:93}.
\section{Reliability: correlating a test with a test just like it}
A powerful use of correlation is assessing reliability. All measures are contaminated with an unknown amount of error. Reliability is just the fraction of the variance of a measure that is not error, and is the correlation of a measure with another measure that is just like it \citep{rc:reliability,rc:pa:18}. Using $V_x$ to represent the observed total test variance and $\sigma^2_e$ to represent the unobserved error variance, the reliability ($r_{xx}$) of a measure is
\begin{equation}
r_{xx} = 1 - \frac{\sigma^2_e}{V_x}.
\end{equation}
In terms of the observed and latent variables in Figure~\ref{fig:basic}, $x = \chi + \epsilon_1$, and a test just like it is $x' = \chi + \epsilon_2$ with correlation $r_{xx}.$
To infer the latent correlation between $\chi$ and $\eta$, $r_{\chi\eta}$, we can correct the observed correlation $r_{xy}$ for the reliabilities $r_{xx}$ and $r_{yy}$
\begin{equation}
r_{\chi \eta} = \frac{r_{xy}}{\sqrt{ r_{xx}r_{yy}}}.
\label{eq:attenuation}
\end{equation}
Equation~\ref{eq:attenuation} was proposed by \cite{spearman:rho}, and the problem since then has been how to estimate the reliability. This is important because if we underestimate the reliability, we will overestimate the disattenuated correlation (Equation~\ref{eq:attenuation}). %Although not there are not quite as many ways of estimating reliability as there are psychometricians, at times it seems so.
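Equation~\ref{eq:attenuation} is trivial to apply. For example, an observed correlation of .40 between tests with reliabilities of .70 and .80 implies a latent correlation of about .53:

```r
# Spearman's correction for attenuation.
correct.cor <- function(rxy, rxx, ryy) rxy / sqrt(rxx * ryy)
correct.cor(.40, .70, .80)   # about .53
```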
There are several different ways to estimate reliability. All agree on the basic principle that items reflect some unknown amount of latent score and some unknown amount of error score; the problem is how to measure the relative contributions. Before the era of modern computers, several shortcuts were proposed that -- with some very strong assumptions -- would allow reliability to be found from the total test variance and the sum of the item variances. Equivalent forms of this procedure are known as KR20 \citep{kuder:37}, $\lambda_3$ \citep{guttman:45}, and $\alpha$ \citep{cronbach:51}. Essentially these coefficients are a function of the average inter-item correlation and do not depend upon the structure of the test items.
They are all available in the \pfun{alpha} function in \Rpkg{psych}.
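Because these coefficients are a function of the number of items $k$ and the average inter-item correlation $\bar{r}$, the standardized form of $\alpha$ can be sketched directly (this is the standardized coefficient, not the raw-variance form computed by \pfun{alpha}):

```r
# Standardized alpha from k items with average inter-item correlation rbar.
alpha.from.rbar <- function(k, rbar) k * rbar / (1 + (k - 1) * rbar)
alpha.from.rbar(k = 10, rbar = .3)   # about .81
```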
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}
\tikzstyle{trait}=[circle, draw, minimum size=1cm]
\tikzstyle{int}=[circle, draw, minimum size=.5cm]
\tikzstyle{obs}=[draw=black, rectangle, inner sep=2pt, inner ysep=6pt]
\path[use as bounding box] (1,0) rectangle (10,10);
\draw(3,8)node {Construct 1};
\draw(9,8)node {Construct 2};
\draw(.5,5.2)node {Latent };
\draw(.5,4.6)node {Measures};
\draw(.5,2.2)node {Observed};
\draw(.5,1.7) node { Measures};
\node(x1) at( 3,5) [trait]{$\chi$};
\node(x2) at( 9,5) [trait]{$\eta$};
\node(X11) at( 2.5,2) [obs]{$x'$};
\node(X12) at( 3.5,2) [obs]{$x$};
\node(X21) at( 8.5,2) [obs]{$y$};
\node(X22) at( 9.5,2) [obs]{$y'$};
\node(e1) at( 2.,.5) [int]{$\delta_1$};
\node(e2) at( 4,.5) [int]{$\delta_2$};
\node(e3) at( 8.,.5) [int]{$\epsilon_1$};
\node(e4) at( 10,.5) [int]{$\epsilon_2$};
\draw[<->](x1) to [bend left,looseness=1.2] node[below] {$\rho_{\chi\eta}$} node[above] {$Validity$} (x2);
% \draw[<->](x2) to [bend left,looseness=1.05] node[below] {$\rho_{23}$} node[above] {$Stability$}(xn);
\draw[->](x1) to node[left] {$\sqrt{r_{xx'}}$} (X11);
\draw[->](x1) to node[right] {$\sqrt{r_{xx'}}$} (X12);
\draw[->](x2) to node[left] {$\sqrt{r_{yy'}}$} (X21);
\draw[->](x2) to node[right] {$\sqrt{r_{yy'}}$} (X22);
\draw[->](e1) to (X11);
\draw[->](e2) to (X12);
\draw[->](e3) to (X21);
\draw[->](e4) to (X22);
\draw[<->] (X11.south) to [bend right,looseness=1.4] node[below] {$r_{xx'}$} (X12.south);
\draw[<->] (X21.south) to [bend right,looseness=1.4] node[below] {$r_{yy'}$} (X22.south);
% \draw[<->] (X31.south) to [bend right,looseness=1.4] node[below] {$r_{x_3x_3'}$} (X32.south);
\draw[<->] (X12.south) to [bend right,looseness=1.4] node[above] {$r_{xy}$} node[below] {$Observed$} (X21.south);
% \draw[<->] (X22.south) to [bend right,looseness=1.4] node[below] {$r_{x_2'x_3}$} (X31.south);
\end{tikzpicture}
\caption{The basic concept of reliability and correcting for attenuation. All four observed variables (x, x', y, y') reflect the latent variables $\chi$ and $\eta$ but are contaminated by error ($\delta_{1..2}, \epsilon_{1..2}$). Adjusting the observed correlation ($r_{xy}$) by the reliabilities ($r_{xx'}$, $r_{yy'}$) estimates the underlying latent correlation ($\rho_{\chi\eta}$) (see Equation~\ref{eq:attenuation}). Observed variables and correlations are shown in conventional Roman fonts, latent variables and latent paths in Greek fonts. }
\label{fig:basic}
\end{center}
\end{figure}
\subsection{Model based reliability measures}
Procedures that take into account the internal structure of the test using factor analysis (so called model based procedures) include $\omega_t$ and $\omega_h$ of \cite{mcdonald:tt}, and various estimates of the \emph{greatest lower bound} of the test reliability \citep{bentler:17}. By applying factor analytic techniques, it is possible to estimate the amount of variance in a test that is attributable to a
general factor ($\omega_h$) \citep{rz:09,zinbarg:pm:05}, as well as to the general plus all group factors (total reliability, or $\omega_{t}$). With modern computing techniques, these model based estimates are easy to find (e.g., \pfun{omega} will find $\omega_h$ and $\omega_t$ as well as $\alpha$). Estimates of $\omega_h$ and $\omega_t$ are preferred over $\alpha$ because they are sensitive to the actual structure of the test. $\alpha$ is an estimate based upon the average correlation and the very strong assumption that all the items are equally good measures of the latent trait. (More formally, the items are assumed to be $\tau$ equivalent: they all have equal loadings on the latent trait.) This was a reasonable assumption to make as a shortcut before we had computers; now there is no reason to settle for the shortcut. It is not uncommon to find tests with moderate levels of $\alpha$ that actually measure two unrelated or only partially related constructs. It is only by applying model based techniques that we can identify the problem \citep[e.g.,][]{rocklin:81}.
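Finding the model based estimates is a single call, shown here for the \pfun{Thurstone} correlation matrix (the function also draws the bifactor diagram):

```r
library(psych)
omega(Thurstone, nfactors = 3, n.obs = 213)  # reports omega_h, omega_t, and alpha
```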
\subsection{Reliability of raters}
Rather than evaluating the reliability of a test, sometimes we want to know how much raters agree with each other when making judgements. If the ratings are numeric, this is done by finding one of several Intraclass Correlations (ICCs). Depending upon whether we view raters as random or fixed, whether the raters rate all subjects or just one each, and whether we pool the judgements of different raters, we end up with six different ICCs \citep{shrout:79,rc:pa:18}, all of which are found by the \pfun{ICC} function. \pfun{ICC} uses the power of \R{} to chain functions together: it performs a one- or two-way analysis of variance, extracts the mean squares or estimates of variance components, and finds the appropriate ratio of variance components.
If the ratings are categorical rather than numeric, it is possible to compare the agreement of two raters in terms of Cohen's Kappa statistic \citep{cohen:60,cohen:68} which corrects the observed proportions of agreement for the expectations given the marginal base rates. This is done by the \pfun{cohen.kappa} function. An example of such ratings is given by \cite{guo:16} who had raters evaluate the presence or absence of particular themes in a set of life narratives.
\section{Structure vs. Process}
Factor analysis and estimates of reliability typically examine the structure of personality items and tests. In terms of Cattell's data box of people by measures by occasions \citep{cattell:46a,cattell:db}, this is an example of what Cattell called ``R'' analysis (people over measures). Such analyses do not tell us how people differ over time. Repeated measures allow us to examine the process of change. With the use of multi-level modeling techniques, it is possible to examine individual differences in within person structure over time (Cattell called this ``S'' analysis).
\subsection{Statistical analysis of within subject variability}
Although initially done using daily diaries \citep{bolgeretal:03}, with the use of personal digital assistants and now cell phone apps, it is possible to collect intensive within subject data once to many times per day for several weeks or even months \citep{fisher:15,wfr:11,wbr:jrp:17,wbr:ejp:16,wr:paid:17}.
Excellent reviews of how to analyze these intensive longitudinal data include \cite{hamaker:15} and \cite{hamaker:17}. \cite{shrout:12a} and \cite{rw:paid:17} provide useful tutorials for examining multilevel reliability, and the \pfun{multilevel.reliability} function will do all of these calculations in \R{}. A basic question is whether the data are \emph{ergodic} (each individual subject can be seen as representing the entire group) or whether the patterning of each individual is a meaningful signal in its own right. Different approaches to ergodicity include those of \cite{nesselroade:mbr:15} and \cite{rw:mbr:16}.
Functions to do these within subject analyses include multilevel regression using \Rpkg{lme4} and \Rpkg{nlme}, correlational structures within and between groups using \pfun{statsBy}, and examining factor structures for invariance across subjects using \pfun{measurementInvariance} in the \Rpkg{semTools} package. The \Rpkg{multilevel} package \citep{multilevel:16} comes with very useful documentation on doing some of the more complicated forms of multilevel analyses.
\subsection{Computational modeling of process}
Another very powerful approach to studying within person processes is computational modeling. This is not a statistical approach so much as a way to develop and test the plausibility of theories. It is, however, an important use of computer analysis, for it can compare and test the fit of alternative dynamic models. Read and Miller and their colleagues \citep{read:97,read:10,yang:ingredients:14,read:18} have implemented a neural network model of the structure and dynamics of individual differences. \cite{pickering:08} has implemented several different representations of Gray's Reinforcement Sensitivity Theory \citep{gray:91,gray:mcnaughton:00}. Although written in MatLab, these models are straightforward to translate into \R{}. A model of the dynamics of action \citep[DOA,][]{atkinson:birch} was implemented as a program for mainframe computers \citep{atkinson:77} and then reparameterized as the Cues-Tendency-Action (CTA) model, implemented as the \pfun{cta} function \citep{rev:doa,rc:jrp:15}. The CTA model has been combined with the RST model in the \pfun{cta.rst} function by \cite{brown:17}, and it has provided good fits to empirical data.
\section{Other statistical techniques }
\subsection{Aggregating data by geographic location}
Most personality analysis focuses on individuals, but many interesting questions may be asked concerning differences in the aggregated personality of groups. Whether aggregating the scores of subjects by college major, socioeconomic status, or geographic location, the typical first step is the same: find the mean score for each group. The \pfun{statsBy} function provides mean values for any number of variables by a selected grouping variable, as well as a suite of statistics and output for the analysis of aggregated variables. For example, the \pfun{statsBy} function outputs a correlation matrix both between groups (the \pfun{rbg} object) and within groups (the \pfun{rwg} object). The correlation between aggregated groups is known to sociologists as the ecological correlation \citep{robinson:50}. The \pfun{rbg} object weights correlations by the number of subjects in each group, because estimates of a mean are more accurate with more subjects. An unweighted between groups correlation matrix can be obtained by applying the \pfun{cor} function to the \pfun{mean} object of the \pfun{statsBy} output.
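A minimal example, using the \pfun{sat.act} data set supplied with \Rpkg{psych} and aggregating by education level:

```r
library(psych)
sb <- statsBy(sat.act, group = "education")
sb$rbg   # weighted between-group (ecological) correlations
sb$rwg   # pooled within-group correlations
```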
When analyzing aggregated data, it is critical to keep in mind that a correlation between two variables may not be consistent between groups and within groups \citep{yule:1903,simpson:1951}. \cite{kievit:13} provide some excellent illustrations of this phenomenon, known as the Yule-Simpson paradox, in which the within group correlations are of the opposite sign to the between groups correlations; for example, although a higher dosage of a medication may be positively related to the likelihood of patient recovery across genders, dosage may be negatively correlated with patient recovery within each gender. Another striking example of the effect is the finding that although at the aggregate level the University of California seemed to discriminate against women in its admission policy, the individual departments actually discriminated in their favor \citep{bickel:75}. The \Rpkg{simpsons} package \citep{simpsons:12} allows for detailed examination of data that can produce this effect. The important point, as made by \cite{kievit:13} and \cite{robinson:50}, is not to dismiss aggregate level relationships but to realize that the level of generalization depends upon the level of analysis.
How large are the effects of aggregation? Two coefficients (intraclass correlations or ICCs) are reported when examining aggregated data. ICCs describe variance ratios for each aggregated variable in terms of within and between group variance components. %The most often-used estimates are ICC1 and ICC2.
ICC1 is an effect size that indicates the percentage of variance in subjects' scores that is explained by group membership \citep{shrout:79}. Although sometimes expressed in terms of the between-group and within-group mean squares from the traditional analysis of variance approach, a clearer definition may be expressed as the ratio of the between-group variance ($\sigma^2_{bg}$) to the total variance, where the total is the sum of the between-group ($\sigma^2_{bg}$) and within-group ($\sigma^2_{wg}$) variance. ICC2 (also known as ICC1k) takes into account the average number of observations within each group ($\bar{n}$) and is a measure of the reliability of the group mean differences. It is the Spearman-Brown reliability formula applied to ICC1:
\begin{equation}\label{eq:ICC1}
ICC1 = \frac{\sigma^2_{bg}}{\sigma^2_{bg} + \sigma^2_{wg}} ~~~~~~~~~~~~~~~ICC2 = \frac{\sigma^2_{bg}}{\sigma^2_{bg} + \frac{\sigma^2_{wg}}{\bar{n}}}.
\end{equation}
In plain English, ICC2 indicates the extent to which the aggregated scores of a variable are reliably different from one another. Assuming one's current data are a random sample, if a new random sample were collected with the same average number of participants per group, ICC2 estimates the correlation between the group scores of the first sample and the second sample \citep{james:82}. A high ICC2 indicates that aggregated scores are reliable even when only a minuscule amount of variance is explained by aggregation (i.e., a low ICC1), provided one has enough subjects per group (i.e., a large $\bar{n}$). The \pfun{statsBy} function outputs ICC1 (the \pfun{ICC1} object), ICC2 (the \pfun{ICC2} object), and the number of subjects in each group who responded to each variable (the \pfun{n} object).
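The variance components in Equation~\ref{eq:ICC1} can be recovered from the one-way ANOVA mean squares. The following base \R{} sketch mirrors what \pfun{statsBy} reports as \pfun{ICC1} and \pfun{ICC2}; the data are simulated and balanced, and the group count, group size, and seed are arbitrary:

```r
# ICC1 and ICC2 from one-way ANOVA mean squares, using the definitions in
# the text: sigma2_bg = (MSbetween - MSwithin) / n-bar, sigma2_wg = MSwithin.
set.seed(17)
ngroups <- 20; npg <- 30
group <- factor(rep(seq_len(ngroups), each = npg))
score <- rnorm(ngroups, sd = 1)[group] +        # between-group differences
         rnorm(ngroups * npg, sd = 2)           # within-group noise
ms  <- anova(aov(score ~ group))[["Mean Sq"]]
MSb <- ms[1]; MSw <- ms[2]
sigma2_bg <- (MSb - MSw) / npg                  # between-group variance component
sigma2_wg <- MSw                                # within-group variance component
ICC1 <- sigma2_bg / (sigma2_bg + sigma2_wg)
ICC2 <- sigma2_bg / (sigma2_bg + sigma2_wg / npg)
```

With these simulated values (between-group variance near 1, within-group variance near 4), ICC1 is modest while ICC2 is much higher, illustrating how a small ICC1 can still yield reliable group means when $\bar{n}$ is large.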
In the last two decades, international collaborations and online personality assessments have collected enormous data sets with samples an order of magnitude larger than what was possible a few decades ago \citep[e.g.,][]{gosling:web,rcwfbe}. \emph{Geographical psychology} is a subfield that has taken advantage of these large data sets, investigating how and why psychological phenomena vary across geographic units \citep{rentfrow:16}, both large \citep[e.g., countries;][]{mccrae:terracciano:08} and small \citep[e.g., postal codes;][]{jokela:15}. Relatively straightforward correlational analyses at an aggregated geographic level \citep[e.g.,][]{rentfrow:08} are replicable \citep[e.g.,][]{elleman:18}. Geographical psychology researchers have started to explore novel approaches, such as determining the extent to which a psychological phenomenon is spatially clustered across locations: for a given psychological variable, is a location more similar to its neighbors than to more distant locations \citep[e.g.,][]{jokela:15}? \cite{bleidorn:16} explored both individual and aggregated levels of their data to calculate ``person-city personality fit'' and found that this fit was related to the life satisfaction of individuals. The complexities of spatial analysis are beyond the scope of this chapter, but see \cite{rentfrow:16} for an overview and \cite{rentfrow:14} for details. The \Rpkg{spdep} package \citep{spdep} supplies functions pertaining to spatial autocorrelation, weights, statistics, and models.
\subsection{Statistical Learning Theory}
The study of individual differences has expanded beyond academic personality research to computer scientists who are taking advantage of the ``big data" that can be collected through web-based techniques. Algorithms popular among computer scientists for analyzing individual differences data in a prediction context are known generically as ``machine learning" or ``statistical learning theory." Although some of these techniques are new, others repackage traditional methods under new labels \citep[e.g., the $\phi$ coefficient of][has been `rediscovered' as a measure of fit under a new name: the Matthews Correlation Coefficient]{pearson:1913}. We include a brief discussion of these techniques because many personality researchers will likely interact with computer scientists, and it is worth learning the ``new" vocabulary.
Machine learning is a term without a universally agreed-upon definition. In general, it concerns the prediction of outcomes in new data from models trained on existing data, and it can refer to a broad range of techniques from logistic regression to neural networks \citep{hastie:01}. The core emphasis in machine learning is on generalization of algorithmic performance to new data, meaning that researchers must shift their focus away from explanation of underlying constructs and toward prediction when using these techniques \citep{yarkoni:17}. For the purposes of this chapter, it is useful to illustrate some methods in the domain of machine learning that are distinct from the traditional regression-based modeling more common in psychology.
One such method falls under the umbrella term of classification and regression tree (CART) methods. CART methods involve the recursive separation of observations into distinct subgroups with the goal of enhancing subgroup homogeneity. The algorithm first segments the observations on the variable and value found to produce the largest reduction in impurity, the exact measure of which depends on the type of problem at hand (e.g., the Gini impurity index, entropy). Each of the subgroups created by this split is then considered separately for further partitioning, until a desired fit to the training data has been reached. This process can be computed automatically with the \Rpkg{rpart} package \citep{rpart}. CART methods are readily amenable to the creation of attractive output in the form of decision trees, which graphically display the series of sequential steps constituting the final partitions determined by the algorithm. These figures are interpretable by non-statisticians with little training, enhancing the applicability of these methods to a variety of applied contexts. Unfortunately, there is a downside: classification and regression trees tend to overfit the training sample, extending beyond the interpretable signal in a dataset to incorporate noise. In other words, decision trees tend to have high variance. As such, while predictive accuracy may be acceptable on the training dataset, these methods tend not to perform as well as simpler models when applied to out-of-sample data. Fortunately, their predictive capabilities can be dramatically enhanced by a class of techniques called ensemble methods, which aggregate many different trees to harness their collective power.
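A minimal \Rpkg{rpart} call might look as follows; the use of the built-in \pfun{iris} data is our own illustrative choice:

```r
# A minimal CART fit with rpart (a recommended package shipped with most
# R installations): recursive splits chosen to reduce Gini impurity.
library(rpart)
fit <- rpart(Species ~ ., data = iris, method = "class")
print(fit)                                 # the splits (decision rules)
# plot(fit); text(fit)                     # draw the decision tree itself
pred <- predict(fit, iris, type = "class")
mean(pred == iris$Species)                 # training accuracy (optimistic)
```

Note that the training accuracy printed at the end illustrates exactly the overfitting concern raised above: it is evaluated on the same data used to grow the tree.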
In general, ensemble methods take advantage of the power of averages to create estimates that have lower variance than any of the constituent estimates. The logic follows from the variance of a mean: the mean of $n$ independent observations, each with variance $\sigma^2$, has variance $\sigma^2/n$. As such, if we were able to create trees from many different training datasets and aggregate the results, the resultant model would have lower variance than any individual model alone \citep{james:13}. Unfortunately, this is not typically feasible, as it would be rare to have many different training datasets at the ready. However, we are able to approximate this process through bootstrap sampling, in which we repeatedly take random samples with replacement from our training dataset, creating many different bootstrapped datasets. This method of aggregation can be applied to a variety of statistical techniques, but applied to decision trees, we are able to fit a tree to each of these bootstrapped training datasets, aggregate the results, and obtain an ultimate prediction that does not suffer as much from the high variance of the individual trees. Thus, ensemble methods form a relatively simple means by which the predictive power of classification and regression trees can be enhanced, making them compelling options for personality researchers with the goal of enhancing prediction.
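The $\sigma^2/n$ logic, and the bootstrap stand-in for multiple training sets, can be checked by a short simulation (all constants here are arbitrary illustrations):

```r
# Simulation check of the variance-of-averages logic: the mean of n
# independent observations with variance sigma^2 has variance sigma^2/n.
set.seed(7)
sigma <- 2; n <- 50
sample_means <- replicate(5000, mean(rnorm(n, sd = sigma)))
var(sample_means)                        # close to sigma^2 / n = 0.08

# With a single training sample, bootstrap resampling stands in for the
# many-training-sets ideal: each resample plays the role of a new dataset,
# and the per-resample estimates are aggregated ("bagged").
y <- rnorm(n, mean = 3, sd = sigma)
boot_estimates <- replicate(200, mean(sample(y, replace = TRUE)))
bagged <- mean(boot_estimates)           # the aggregated estimate
```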
Three of the most popular ensemble techniques used in the context of tree-based machine learning methods are bagging, random forests, and boosting. In bagging, observations are randomly sampled from the training dataset through bootstrapping to create many distinct datasets. A different tree is then grown from each of these bootstrapped datasets, resulting in many trees, each of which has a potentially different set of decision rules on which its predictions are based. The results of these trees are then aggregated to create a final prediction for each observation. Predictions are evaluated on the `out-of-bag' observations that were left out of the bootstrap samples, providing for each observation a validation set of the trees in whose creation it was not used \citep{james:13}. Random forest methods can be viewed as a special form of bagging in which variables are randomly sampled alongside the observations in the training dataset. More specifically, during the process of growing each tree, variables are randomly selected at each node to be considered for the split. Why add this additional step? While the general process of bagging reduces variance by bootstrapping multiple trees, the trees will be correlated because identical variables are available at every split. In certain cases (i.e., when one variable is much more important in the prediction of the outcome than others) this will lead to many trees that are nearly identical. Averaging trees that are very similar does not reduce variance as much as averaging uncorrelated trees. Random forest methods address this by changing the variables made available at each potential node split, effectively decorrelating the trees and theoretically decreasing the variance of a given predictive model.
In general, if $p$ is the number of predictors, $\sqrt{p}$ variables are selected at each split for classification problems and $p/3$ variables are selected at each split for regression problems, although this number should be determined through hyperparameter tuning given the dataset at hand \citep{hastie:01}. Both bagging and random forest techniques can be computed in \R{} using the \Rpkg{randomForest} package \citep{liaw:02}: setting the `mtry' argument equal to the number of variables in the model fits a bagged model; otherwise a random forest is fit.
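These defaults are simple arithmetic; as a sketch (with a hypothetical $p$ of 12 predictors):

```r
# The default mtry heuristics described above, as simple arithmetic
# (p = 12 is a hypothetical number of predictors).
p <- 12
mtry_classification <- floor(sqrt(p))        # sqrt(p) candidates per split
mtry_regression     <- max(floor(p / 3), 1)  # p/3 candidates per split
mtry_bagging        <- p                     # mtry = p reduces to bagging
```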
Boosting methods are conceptually similar to bagging and random forests, but alter the way that the trees are built. Specifically, boosting sequentially grows trees, each one fit to the residuals of the model before it. In this manner, boosting targets the areas of weakness of the previous trees and updates the model accordingly: at each step, a tree is fit to the current residuals, shrunk by a learning-rate (shrinkage) parameter, and added to the model, thereby updating the residuals. This slow, sequential growth based on the residuals of previous trees is the source of boosting's predictive power; fitting the residuals too aggressively leads to overfitting, so the algorithm must proceed slowly. Boosting tends to outperform bagging and random forests on prediction metrics, but may not be as conceptually clear as those methods due to the fitting of residuals; the preference is up to the researcher. Popular packages in \R{} for fitting boosted models include \Rpkg{xgboost} \citep{xgboost} and \Rpkg{gbm} \citep{gbm}. \emph{Python} is another language commonly used in machine learning, as it allows somewhat faster processing of very large data sets.
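The residual-fitting loop can be illustrated from scratch in base \R{} with regression `stumps' (single-split trees). This toy implementation is ours, not the \Rpkg{gbm} or \Rpkg{xgboost} algorithm, but it shows the slow, shrunken, sequential updates described above:

```r
# A from-scratch sketch of boosting with regression "stumps": each round
# fits the current residuals and adds a shrunken stump to the model.
set.seed(42)
x <- runif(200, 0, 10)
y <- sin(x) + rnorm(200, sd = 0.3)

fit_stump <- function(x, y) {            # best single split by squared error
  best <- NULL; best_sse <- Inf
  for (s in quantile(x, seq(0.05, 0.95, by = 0.05))) {
    pred <- ifelse(x <= s, mean(y[x <= s]), mean(y[x > s]))
    sse <- sum((y - pred)^2)
    if (sse < best_sse) {
      best_sse <- sse
      best <- list(split = s, left = mean(y[x <= s]), right = mean(y[x > s]))
    }
  }
  best
}
predict_stump <- function(st, x) ifelse(x <= st$split, st$left, st$right)

n_trees <- 100; shrinkage <- 0.1         # small learning rate: grow slowly
pred <- rep(0, length(y))
for (i in seq_len(n_trees)) {
  st   <- fit_stump(x, y - pred)         # fit the current residuals
  pred <- pred + shrinkage * predict_stump(st, x)  # shrunken update
}
mean((y - pred)^2) < mean((y - mean(y))^2)  # training error beats baseline
```

Each shrunken stump removes a fraction of the remaining residual variance, so the training error decreases steadily; in practice the number of trees and the shrinkage would be tuned on held-out data.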
Taken together, these ensemble methods are referred to as ``black box" methods, meaning that their exact inner workings remain opaque to the practitioner. In other words, while a researcher may be able to tell that a random forest model provides excellent predictive accuracy, they would not be able to view each decision tree used in crafting the prediction. As such, these ensemble methods are best used when the major aim of a project is prediction, and they may not be appropriate for situations in which a precise theoretical model is desired. This may be discomforting to personality psychologists, who have largely been trained to prioritize theoretical modeling, but it is simply another possible way to analyze data with a predictive focus. Researchers are not left completely in the dark about the inner workings of the models, however: they are able to influence the way that the model constructs and aggregates the individual trees through the selection of various hyperparameters. These hyperparameters include the number of trees grown in all three methods, the number of variables selected at each node in random forests, and how slowly the trees grow (the shrinkage) in boosting. Adjusting these hyperparameters to influence the performance of the ensemble methods is critical to predictive performance and can be done through trial and error or cross-validation.
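Cross-validation itself is a short loop. The sketch below tunes a toy hyperparameter (polynomial degree in a linear model, our own stand-in for the number of trees or the shrinkage) by $k$-fold validation error:

```r
# A k-fold cross-validation loop for hyperparameter tuning. The identical
# loop applies to the number of trees, mtry, or the shrinkage above.
set.seed(1)
d <- data.frame(x = runif(150, -2, 2))
d$y <- d$x^2 + rnorm(150, sd = 0.5)      # true curve is quadratic
k <- 5
folds <- sample(rep(seq_len(k), length.out = nrow(d)))

cv_mse <- function(degree) {
  mean(sapply(seq_len(k), function(f) {
    fit   <- lm(y ~ poly(x, degree, raw = TRUE), data = d[folds != f, ])
    valid <- d[folds == f, ]
    mean((valid$y - predict(fit, newdata = valid))^2)  # held-out error
  }))
}

errors <- sapply(1:4, cv_mse)            # CV error for degrees 1 through 4
which.min(errors)                        # the degree to select
```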
The aforementioned methods are intriguing in that they provide the tools for creating a more prediction-focused study of individual differences. While promising in their potential impact on the field, machine learning results should generally be viewed with caution, as many complicated methods may be outperformed by comparatively simple linear and logistic regression. A shift toward a more predictive science would be welcome, but we must be sure to select methods that suit the problem at hand and not just apply these methods with abandon.
\section{Conclusion}
Personality research has come a long way from the simple correlation of \cite{galton:88}, \cite{pearson:95}, and \cite{spearman:rho}. Advances in the past few years have brought powerful computation to the desk or lap of the individual researcher. Open source software has made complex research questions answerable by people anywhere in the world. Computational modeling techniques that used to take days on multi-million dollar computers can now be run in seconds on very affordable laptops. Data can be shared across the web, and analyses can be reproduced using published, open source computer code. Statistical testing and modeling of psychological data and theory has never been easier for those willing to learn the modern methods.
%\bibliography{all}
\newpage
%\bibliography{../../all}
\begin{thebibliography}{}
\bibitem [\protect \citeauthoryear {%
Aiken%
\ \BBA {} West%
}{%
Aiken%
\ \BBA {} West%
}{%
{\protect \APACyear {1991}}%
}]{%
aiken:west}
\APACinsertmetastar {%
aiken:west}%
\begin{APACrefauthors}%
Aiken, L\BPBI S.%
\BCBT {}\ \BBA {} West, S\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1991}.
\newblock
\APACrefbtitle {Multiple regression: Testing and interpreting interactions}
  {Multiple regression: Testing and interpreting interactions}.
\newblock
\APACaddressPublisher{}{Sage Publications, Inc.}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Algina%
, Keselman%
\BCBL {}\ \BBA {} Penfield%
}{%
Algina%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2005}}%
}]{%
algina:05}
\APACinsertmetastar {%
algina:05}%
\begin{APACrefauthors}%
Algina, J.%
, Keselman, H\BPBI J.%
\BCBL {}\ \BBA {} Penfield, R\BPBI D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2005}{}{}.
\newblock
{\BBOQ}\APACrefatitle {An Alternative to {Cohen's }Standardized Mean Difference
Effect Size: A Robust Parameter and Confidence Interval in the Two
Independent Groups Case.} {An alternative to {Cohen's }standardized mean
difference effect size: A robust parameter and confidence interval in the two
independent groups case.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{10}{3}{317 - 328}.
\newblock
\begin{APACrefDOI} \doi{10.1037/1082-989X.10.3.317} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Atkinson%
\ \BBA {} Birch%
}{%
Atkinson%
\ \BBA {} Birch%
}{%
{\protect \APACyear {1970}}%
}]{%
atkinson:birch}
\APACinsertmetastar {%
atkinson:birch}%
\begin{APACrefauthors}%
Atkinson, J\BPBI W.%
\BCBT {}\ \BBA {} Birch, D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1970}.
\newblock
\APACrefbtitle {The dynamics of action} {The dynamics of action}.
\newblock
\APACaddressPublisher{New York, N.Y.}{John Wiley}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Atkinson%
, Bongort%
\BCBL {}\ \BBA {} Price%
}{%
Atkinson%
\ \protect \BOthers {.}}{%
{\protect \APACyear {1977}}%
}]{%
atkinson:77}
\APACinsertmetastar {%
atkinson:77}%
\begin{APACrefauthors}%
Atkinson, J\BPBI W.%
, Bongort, K.%
\BCBL {}\ \BBA {} Price, L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1977}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Explorations using computer simulation to comprehend
thematic apperceptive measurement of motivation} {Explorations using computer
simulation to comprehend thematic apperceptive measurement of
motivation}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Motivation and Emotion}{1}{1}{1-27}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF00997578} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Barrett%
}{%
Barrett%
}{%
{\protect \APACyear {2007}}%
}]{%
barrett:07}
\APACinsertmetastar {%
barrett:07}%
\begin{APACrefauthors}%
Barrett, P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2007}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{Structural equation modelling: Adjudging model fit}}
{{Structural equation modelling: Adjudging model fit}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality and Individual
Differences}{42}{5}{815-824}.
\newblock
\begin{APACrefDOI} \doi{10.1016/j.paid.2006.09.018} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bates%
, Maechler%
, Bolker%
\BCBL {}\ \BBA {} Walker%
}{%
Bates%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2015}}%
}]{%
lme4}
\APACinsertmetastar {%
lme4}%
\begin{APACrefauthors}%
Bates, D.%
, Maechler, M.%
, Bolker, B.%
\BCBL {}\ \BBA {} Walker, S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Fitting Linear Mixed-Effects Models Using lme4} {Fitting
linear mixed-effects models using lme4}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Statistical Software}{67}{1}{1-48}.
\newblock
\APACrefnote{R package version 1.1-8}
\newblock
\begin{APACrefDOI} \doi{10.18637/jss.v067.i01} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bechtoldt%
}{%
Bechtoldt%
}{%
{\protect \APACyear {1961}}%
}]{%
bechtoldt:61}
\APACinsertmetastar {%
bechtoldt:61}%
\begin{APACrefauthors}%
Bechtoldt, H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1961}{}{}.
\newblock
{\BBOQ}\APACrefatitle {An empirical study of the factor analysis stability
hypothesis} {An empirical study of the factor analysis stability
hypothesis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{26}{4}{405-432}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02289771} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Becker%
, Chambers%
\BCBL {}\ \BBA {} Wilks%
}{%
Becker%
\ \protect \BOthers {.}}{%
{\protect \APACyear {1988}}%
}]{%
S}
\APACinsertmetastar {%
S}%
\begin{APACrefauthors}%
Becker, R\BPBI A.%
, Chambers, J\BPBI M.%
\BCBL {}\ \BBA {} Wilks, A\BPBI R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1988}{}{}.
\newblock
\APACrefbtitle {The new {S} language} {The new {S} language}.
\newblock
\APACaddressPublisher{Pacific Grove, CA}{Wadsworth \& Brooks}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bentler%
}{%
Bentler%
}{%
{\protect \APACyear {1995}}%
}]{%
bentler:eqs}
\APACinsertmetastar {%
bentler:eqs}%
\begin{APACrefauthors}%
Bentler, P\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1995}.
\newblock
\APACrefbtitle {{EQS} structural equations program manual} {{EQS} structural
equations program manual}.
\newblock
\APACaddressPublisher{Encino, CA.}{Multivariate Software, Inc.}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bentler%
}{%
Bentler%
}{%
{\protect \APACyear {2017}}%
}]{%
bentler:17}
\APACinsertmetastar {%
bentler:17}%
\begin{APACrefauthors}%
Bentler, P\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Specificity-enhanced reliability coefficients.}
{Specificity-enhanced reliability coefficients.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{22}{3}{527 - 540}.
\newblock
\begin{APACrefDOI} \doi{10.1037/met0000092} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bernaards%
\ \BBA {} Jennrich%
}{%
Bernaards%
\ \BBA {} Jennrich%
}{%
{\protect \APACyear {2005}}%
}]{%
GPA}
\APACinsertmetastar {%
GPA}%
\begin{APACrefauthors}%
Bernaards, C.%
\BCBT {}\ \BBA {} Jennrich, R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2005}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{Gradient projection algorithms and software for
arbitrary rotation criteria in factor analysis}} {{Gradient projection
algorithms and software for arbitrary rotation criteria in factor
analysis}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Educational and Psychological
Measurement}{65}{5}{676-696}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0013164404272507} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bickel%
, Hammel%
\BCBL {}\ \BBA {} O'Connell%
}{%
Bickel%
\ \protect \BOthers {.}}{%
{\protect \APACyear {1975}}%
}]{%
bickel:75}
\APACinsertmetastar {%
bickel:75}%
\begin{APACrefauthors}%
Bickel, P\BPBI J.%
, Hammel, E\BPBI A.%
\BCBL {}\ \BBA {} O'Connell, J\BPBI W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1975}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Sex Bias in Graduate Admissions: Data from {Berkeley}}
{Sex bias in graduate admissions: Data from {Berkeley}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Science}{187}{4175}{398-404}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bivand%
\ \BBA {} Piras%
}{%
Bivand%
\ \BBA {} Piras%
}{%
{\protect \APACyear {2015}}%
}]{%
spdep}
\APACinsertmetastar {%
spdep}%
\begin{APACrefauthors}%
Bivand, R.%
\BCBT {}\ \BBA {} Piras, G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Comparing Implementations of Estimation Methods for
Spatial Econometrics.} {Comparing implementations of estimation methods for
spatial econometrics.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Statistical Software}{63}{18}{1-36}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bleidorn%
\ \protect \BOthers {.}}{%
Bleidorn%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2016}}%
}]{%
bleidorn:16}
\APACinsertmetastar {%
bleidorn:16}%
\begin{APACrefauthors}%
Bleidorn, W.%
, Sch{\"o}nbrodt, F.%
, Gebauer, J\BPBI E.%
, Rentfrow, P\BPBI J.%
, Potter, J.%
\BCBL {}\ \BBA {} Gosling, S\BPBI D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {To Live Among Like-Minded Others: Exploring the Links
Between Person-City Personality Fit and Self-Esteem} {To live among
like-minded others: Exploring the links between person-city personality fit
and self-esteem}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Science}{}{}{}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0956797615627133} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bliese%
}{%
Bliese%
}{%
{\protect \APACyear {2016}}%
}]{%
multilevel:16}
\APACinsertmetastar {%
multilevel:16}%
\begin{APACrefauthors}%
Bliese, P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {multilevel: Multilevel Functions} {multilevel:
Multilevel functions}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=multilevel}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 2.6}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bock%
}{%
Bock%
}{%
{\protect \APACyear {2007}}%
}]{%
bock:07}
\APACinsertmetastar {%
bock:07}%
\begin{APACrefauthors}%
Bock, R\BPBI D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2007}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Rethinking {Thurstone}} {Rethinking {Thurstone}}.{\BBCQ}
\newblock
\BIn{} R.~Cudeck\ \BBA {} R\BPBI C.~MacCallum\ (\BEDS), \APACrefbtitle {Factor
analysis at 100: Historical developments and future directions} {Factor
analysis at 100: Historical developments and future directions}\
(\BPG~35-45).
\newblock
\APACaddressPublisher{Mahwah, NJ}{Lawrence Erlbaum Associates Publishers}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bolger%
, Davis%
\BCBL {}\ \BBA {} Rafaeli%
}{%
Bolger%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2003}}%
}]{%
bolgeretal:03}
\APACinsertmetastar {%
bolgeretal:03}%
\begin{APACrefauthors}%
Bolger, N.%
, Davis, A.%
\BCBL {}\ \BBA {} Rafaeli, E.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2003}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Diary methods: Capturing life as it is lived} {Diary
methods: Capturing life as it is lived}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Annual Review of Psychology}{54}{}{579-616}.
\newblock
\begin{APACrefDOI} \doi{10.1146/annurev.psych.54.101601.145030}
\end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bollen%
}{%
Bollen%
}{%
{\protect \APACyear {1989}}%
}]{%
bollen:89}
\APACinsertmetastar {%
bollen:89}%
\begin{APACrefauthors}%
Bollen, K\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1989}.
\newblock
\APACrefbtitle {Structural equations with latent variables} {Structural
equations with latent variables}.
\newblock
\APACaddressPublisher{New York}{Wiley}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bollen%
}{%
Bollen%
}{%
{\protect \APACyear {2002}}%
}]{%
bollen:02}
\APACinsertmetastar {%
bollen:02}%
\begin{APACrefauthors}%
Bollen, K\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2002}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Latent variables in psychology and the social sciences}
{Latent variables in psychology and the social sciences}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Annual Review of Psychology}{53}{}{605-634}.
\newblock
\begin{APACrefDOI} \doi{10.1146/annurev.psych.53.100901.135239}
\end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Borsboom%
, Mellenbergh%
\BCBL {}\ \BBA {} van Heerden%
}{%
Borsboom%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2003}}%
}]{%
borsboom:03}
\APACinsertmetastar {%
borsboom:03}%
\begin{APACrefauthors}%
Borsboom, D.%
, Mellenbergh, G\BPBI J.%
\BCBL {}\ \BBA {} van Heerden, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2003}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The theoretical status of latent variables} {The
theoretical status of latent variables}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Review}{110}{2}{203-219}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0033-295X.110.2.203} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Brogden%
}{%
Brogden%
}{%
{\protect \APACyear {1946}}%
}]{%
brogden:46}
\APACinsertmetastar {%
brogden:46}%
\begin{APACrefauthors}%
Brogden, H\BPBI E.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1946}{}{}.
\newblock
{\BBOQ}\APACrefatitle {On the interpretation of the correlation coefficient as
a measure of predictive efficiency.} {On the interpretation of the
correlation coefficient as a measure of predictive efficiency.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Educational Psychology}{37}{2}{65 - 76}.
\newblock
\begin{APACrefDOI} \doi{10.1037/h0061548} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Bromley%
}{%
Bromley%
}{%
{\protect \APACyear {1982}}%
}]{%
bromley:82}
\APACinsertmetastar {%
bromley:82}%
\begin{APACrefauthors}%
Bromley, A\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1982}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{Charles Babbage's Analytical Engine}, 1838} {{Charles
Babbage's Analytical Engine}, 1838}.{\BBCQ}
\newblock
\APACjournalVolNumPages{IEEE Annals of the History of
  Computing}{4}{3}{196--217}.
\newblock
\begin{APACrefDOI} \doi{10.1109/MAHC.1982.10028} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
A\BPBI D.~Brown%
}{%
A\BPBI D.~Brown%
}{%
{\protect \APACyear {2017}}%
}]{%
brown:17}
\APACinsertmetastar {%
brown:17}%
\begin{APACrefauthors}%
Brown, A\BPBI D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2017}.
\unskip\
\newblock
\APACrefbtitle {The Dynamics of Affect: Using {Newtonian} Mechanics,
Reinforcement Sensitivity Theory, and the Cues-Tendencies-Actions Model to
Simulate Individual Differences in Emotional Experience} {The dynamics of
affect: Using {Newtonian} mechanics, reinforcement sensitivity theory, and
the cues-tendencies-actions model to simulate individual differences in
emotional experience}\ \APACtypeAddressSchool {\BUPhD}{}{}.
\unskip\
\newblock
\APACaddressSchool {}{Northwestern University}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
W.~Brown%
}{%
W.~Brown%
}{%
{\protect \APACyear {1910}}%
}]{%
brown:10}
\APACinsertmetastar {%
brown:10}%
\begin{APACrefauthors}%
Brown, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1910}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Some experimental results in the correlation of mental
abilities} {Some experimental results in the correlation of mental
abilities}.{\BBCQ}
\newblock
\APACjournalVolNumPages{British Journal of Psychology}{3}{3}{296-322}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.2044-8295.1910.tb00207.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Browne%
}{%
Browne%
}{%
{\protect \APACyear {2001}}%
}]{%
browne:01}
\APACinsertmetastar {%
browne:01}%
\begin{APACrefauthors}%
Browne, M\BPBI W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2001}{}{}.
\newblock
{\BBOQ}\APACrefatitle {An Overview of Analytic Rotation in Exploratory Factor
Analysis} {An overview of analytic rotation in exploratory factor
analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{36}{1}{111-150}.
\newblock
\begin{APACrefDOI} \doi{10.1207/S15327906MBR3601_05} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Butcher%
, Dahlstrom%
, Graham%
, Tellegen%
\BCBL {}\ \BBA {} Kaemmer%
}{%
Butcher%
\ \protect \BOthers {.}}{%
{\protect \APACyear {1989}}%
}]{%
butcher:89}
\APACinsertmetastar {%
butcher:89}%
\begin{APACrefauthors}%
Butcher, J\BPBI N.%
, Dahlstrom, W.%
, Graham, J.%
, Tellegen, A.%
\BCBL {}\ \BBA {} Kaemmer, B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1989}.
\newblock
\APACrefbtitle {{MMPI-2}: Manual for administration and scoring} {{MMPI-2}:
Manual for administration and scoring}.
\newblock
\APACaddressPublisher{Minneapolis}{University of Minnesota Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cattell%
}{%
Cattell%
}{%
{\protect \APACyear {1946}}%
}]{%
cattell:46a}
\APACinsertmetastar {%
cattell:46a}%
\begin{APACrefauthors}%
Cattell, R\BPBI B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1946}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Personality structure and measurement. {I. The}
operational determination of trait unities} {Personality structure and
measurement. {I. The} operational determination of trait unities}.{\BBCQ}
\newblock
\APACjournalVolNumPages{British Journal of Psychology}{36}{}{88-102}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.2044-8295.1946.tb01110.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cattell%
}{%
Cattell%
}{%
{\protect \APACyear {1966}}%
{\protect \APACexlab {{\protect \BCnt {1}}}}}]{%
cattell:db}
\APACinsertmetastar {%
cattell:db}%
\begin{APACrefauthors}%
Cattell, R\BPBI B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1966{\protect \BCnt {1}}}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The data box: Its ordering of total resources in terms
of possible relational systems} {The data box: Its ordering of total
resources in terms of possible relational systems}.{\BBCQ}
\newblock
\BIn{} R\BPBI B.~Cattell\ (\BED), \APACrefbtitle {Handbook of multivariate
experimental psychology} {Handbook of multivariate experimental psychology}\
(\BPGS~67-128).
\newblock
\APACaddressPublisher{Chicago}{Rand-McNally}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cattell%
}{%
Cattell%
}{%
{\protect \APACyear {1966}}%
{\protect \APACexlab {{\protect \BCnt {2}}}}}]{%
cattell:scree}
\APACinsertmetastar {%
cattell:scree}%
\begin{APACrefauthors}%
Cattell, R\BPBI B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1966{\protect \BCnt {2}}}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The scree test for the number of factors} {The scree
test for the number of factors}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{1}{2}{245-276}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15327906mbr0102_10} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Champely%
}{%
Champely%
}{%
{\protect \APACyear {2018}}%
}]{%
pwr}
\APACinsertmetastar {%
pwr}%
\begin{APACrefauthors}%
Champely, S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {pwr: Basic Functions for Power Analysis} {pwr: Basic
functions for power analysis}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=pwr}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 1.2-2}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Chen%
\ \protect \BOthers {.}}{%
Chen%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2018}}%
}]{%
xgboost}
\APACinsertmetastar {%
xgboost}%
\begin{APACrefauthors}%
Chen, T.%
, He, T.%
, Benesty, M.%
, Khotilovich, V.%
, Tang, Y.%
, Cho, H.%
\BDBL {}Li, Y.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {xgboost: Extreme Gradient Boosting} {xgboost: Extreme
gradient boosting}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=xgboost}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 0.71.1}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
}{%
Cohen%
}{%
{\protect \APACyear {1960}}%
}]{%
cohen:60}
\APACinsertmetastar {%
cohen:60}%
\begin{APACrefauthors}%
Cohen, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1960}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A coefficient of agreement for nominal scales} {A
coefficient of agreement for nominal scales}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Educational and Psychological
Measurement}{20}{1}{37-46}.
\newblock
\begin{APACrefDOI} \doi{10.1177/001316446002000104} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
}{%
Cohen%
}{%
{\protect \APACyear {1962}}%
}]{%
cohen:62}
\APACinsertmetastar {%
cohen:62}%
\begin{APACrefauthors}%
Cohen, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1962}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The statistical power of abnormal-social psychological
research: a review.} {The statistical power of abnormal-social psychological
research: a review.}{\BBCQ}
\newblock
\APACjournalVolNumPages{The Journal of Abnormal and Social
Psychology}{65}{3}{145-153}.
\newblock
\begin{APACrefDOI} \doi{10.1037/h0045186} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
}{%
Cohen%
}{%
{\protect \APACyear {1968}}%
}]{%
cohen:68}
\APACinsertmetastar {%
cohen:68}%
\begin{APACrefauthors}%
Cohen, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1968}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Weighted kappa: Nominal scale agreement provision for
scaled disagreement or partial credit} {Weighted kappa: Nominal scale
agreement provision for scaled disagreement or partial credit}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Bulletin}{70}{4}{213-220}.
\newblock
\begin{APACrefDOI} \doi{10.1037/h0026256} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
}{%
Cohen%
}{%
{\protect \APACyear {1988}}%
}]{%
cohen:88}
\APACinsertmetastar {%
cohen:88}%
\begin{APACrefauthors}%
Cohen, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1988}.
\newblock
\APACrefbtitle {Statistical power analysis for the behavioral sciences}
{Statistical power analysis for the behavioral sciences}\ (\PrintOrdinal{2nd
ed}\ \BEd).
\newblock
\APACaddressPublisher{Hillsdale, N.J.}{L. Erlbaum Associates}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
}{%
Cohen%
}{%
{\protect \APACyear {1992}}%
}]{%
cohen:92}
\APACinsertmetastar {%
cohen:92}%
\begin{APACrefauthors}%
Cohen, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1992}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A power primer.} {A power primer.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Bulletin}{112}{1}{155-159}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0033-2909.112.1.155} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
}{%
Cohen%
}{%
{\protect \APACyear {1994}}%
}]{%
cohen:94}
\APACinsertmetastar {%
cohen:94}%
\begin{APACrefauthors}%
Cohen, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1994}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The earth is round ($p < .05$).} {The earth is round
($p < .05$).}{\BBCQ}
\newblock
\APACjournalVolNumPages{American Psychologist}{49}{12}{997-1003}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0003-066X.49.12.997} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cohen%
, Cohen%
, West%
\BCBL {}\ \BBA {} Aiken%
}{%
Cohen%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2003}}%
}]{%
cohen:03}
\APACinsertmetastar {%
cohen:03}%
\begin{APACrefauthors}%
Cohen, J.%
, Cohen, P.%
, West, S\BPBI G.%
\BCBL {}\ \BBA {} Aiken, L\BPBI S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2003}.
\newblock
\APACrefbtitle {Applied multiple regression/correlation analysis for the
behavioral sciences} {Applied multiple regression/correlation analysis for
the behavioral sciences}\ (\PrintOrdinal{3rd ed}\ \BEd).
\newblock
\APACaddressPublisher{Mahwah, N.J.}{L. Erlbaum Associates}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cole%
, Martin%
\BCBL {}\ \BBA {} Steiger%
}{%
Cole%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2005}}%
}]{%
cole:05}
\APACinsertmetastar {%
cole:05}%
\begin{APACrefauthors}%
Cole, D\BPBI A.%
, Martin, N\BPBI C.%
\BCBL {}\ \BBA {} Steiger, J\BPBI H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2005}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Empirical and Conceptual Problems With Longitudinal
Trait-State Models: Introducing a Trait-State-Occasion Model.} {Empirical and
conceptual problems with longitudinal trait-state models: Introducing a
trait-state-occasion model.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{10}{1}{3--20}.
\newblock
\begin{APACrefDOI} \doi{10.1037/1082-989X.10.1.3} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Crawford%
\ \BBA {} Ferguson%
}{%
Crawford%
\ \BBA {} Ferguson%
}{%
{\protect \APACyear {1970}}%
}]{%
crawford:70}
\APACinsertmetastar {%
crawford:70}%
\begin{APACrefauthors}%
Crawford, C\BPBI B.%
\BCBT {}\ \BBA {} Ferguson, G\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1970}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A general rotation criterion and its use in orthogonal
rotation} {A general rotation criterion and its use in orthogonal
rotation}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{35}{3}{321--332}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02310792} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Cronbach%
}{%
Cronbach%
}{%
{\protect \APACyear {1951}}%
}]{%
cronbach:51}
\APACinsertmetastar {%
cronbach:51}%
\begin{APACrefauthors}%
Cronbach, L\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1951}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Coefficient alpha and the internal structure of tests}
{Coefficient alpha and the internal structure of tests}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{16}{}{297-334}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02310555} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Danielson%
\ \BBA {} Clark%
}{%
Danielson%
\ \BBA {} Clark%
}{%
{\protect \APACyear {1954}}%
}]{%
danielson:54}
\APACinsertmetastar {%
danielson:54}%
\begin{APACrefauthors}%
Danielson, J\BPBI R.%
\BCBT {}\ \BBA {} Clark, J\BPBI H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1954}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A personality inventory for induction screening} {A
personality inventory for induction screening}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Clinical Psychology}{10}{2}{137-143}.
\newblock
\begin{APACrefDOI} \doi{10.1002/1097-4679(195404)10:2<137::AID-JCLP2270100207>3.0.CO;2-2} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Dawes%
}{%
Dawes%
}{%
{\protect \APACyear {1979}}%
}]{%
dawes:79}
\APACinsertmetastar {%
dawes:79}%
\begin{APACrefauthors}%
Dawes, R\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1979}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The robust beauty of improper linear models in decision
making} {The robust beauty of improper linear models in decision
making}.{\BBCQ}
\newblock
\APACjournalVolNumPages{American Psychologist}{34}{7}{571-582}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0003-066X.34.7.571} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Dixon%
\ \BBA {} Brown%
}{%
Dixon%
\ \BBA {} Brown%
}{%
{\protect \APACyear {1979}}%
}]{%
bmdp}
\APACinsertmetastar {%
bmdp}%
\begin{APACrefauthors}%
Dixon, W\BPBI J.%
\BCBT {}\ \BBA {} Brown, M\BPBI B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1979}.
\newblock
\APACrefbtitle {{BMDP-79}: Biomedical computer programs P-series} {{BMDP-79}:
Biomedical computer programs p-series}.
\newblock
\APACaddressPublisher{}{University of California Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Eckart%
\ \BBA {} Young%
}{%
Eckart%
\ \BBA {} Young%
}{%
{\protect \APACyear {1936}}%
}]{%
Eckart}
\APACinsertmetastar {%
Eckart}%
\begin{APACrefauthors}%
Eckart, C.%
\BCBT {}\ \BBA {} Young, G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1936}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The approximation of one matrix by another of lower
rank} {The approximation of one matrix by another of lower rank}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{1}{3}{211--218}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02288367} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Efron%
}{%
Efron%
}{%
{\protect \APACyear {1979}}%
}]{%
efron:79}
\APACinsertmetastar {%
efron:79}%
\begin{APACrefauthors}%
Efron, B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1979}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Bootstrap Methods: Another Look at the Jackknife}
{Bootstrap methods: Another look at the jackknife}.{\BBCQ}
\newblock
\APACjournalVolNumPages{The Annals of Statistics}{7}{1}{1-26}.
\newblock
\begin{APACrefDOI} \doi{10.1214/aos/1176344552} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Efron%
\ \BBA {} Gong%
}{%
Efron%
\ \BBA {} Gong%
}{%
{\protect \APACyear {1983}}%
}]{%
efron:83}
\APACinsertmetastar {%
efron:83}%
\begin{APACrefauthors}%
Efron, B.%
\BCBT {}\ \BBA {} Gong, G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1983}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A Leisurely Look at the Bootstrap, the Jackknife, and
Cross-Validation} {A leisurely look at the bootstrap, the jackknife, and
cross-validation}.{\BBCQ}
\newblock
\APACjournalVolNumPages{The American Statistician}{37}{1}{36-48}.
\newblock
\begin{APACrefDOI} \doi{10.1080/00031305.1983.10483087} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Elleman%
, Condon%
, Russin%
\BCBL {}\ \BBA {} Revelle%
}{%
Elleman%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2018}}%
}]{%
elleman:18}
\APACinsertmetastar {%
elleman:18}%
\begin{APACrefauthors}%
Elleman, L\BPBI G.%
, Condon, D\BPBI M.%
, Russin, S\BPBI E.%
\BCBL {}\ \BBA {} Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The personality of {U.S.} states: Stability from 1999 to
2015} {The personality of {U.S.} states: Stability from 1999 to 2015}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Research in Personality}{72}{}{64-72}.
\newblock
\APACrefnote{Special issue on Replication of Critical Findings in Personality
Psychology}
\newblock
\begin{APACrefDOI} \doi{10.1016/j.jrp.2016.06.022} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Erceg-Hurn%
\ \BBA {} Mirosevich%
}{%
Erceg-Hurn%
\ \BBA {} Mirosevich%
}{%
{\protect \APACyear {2008}}%
}]{%
erceg:08}
\APACinsertmetastar {%
erceg:08}%
\begin{APACrefauthors}%
Erceg-Hurn, D\BPBI M.%
\BCBT {}\ \BBA {} Mirosevich, V\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2008}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Modern robust statistical methods: an easy way to
maximize the accuracy and power of your research.} {Modern robust statistical
methods: an easy way to maximize the accuracy and power of your
research.}{\BBCQ}
\newblock
\APACjournalVolNumPages{American Psychologist}{63}{7}{591-601}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0003-066X.63.7.591} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Eysenck%
}{%
Eysenck%
}{%
{\protect \APACyear {1944}}%
}]{%
eysenck:44}
\APACinsertmetastar {%
eysenck:44}%
\begin{APACrefauthors}%
Eysenck, H\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1944}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Types of personality: a factorial study of seven hundred
neurotics} {Types of personality: a factorial study of seven hundred
neurotics}.{\BBCQ}
\newblock
\APACjournalVolNumPages{The British Journal of Psychiatry}{90}{381}{851--861}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
A\BPBI J.~Fisher%
}{%
A\BPBI J.~Fisher%
}{%
{\protect \APACyear {2015}}%
}]{%
fisher:15}
\APACinsertmetastar {%
fisher:15}%
\begin{APACrefauthors}%
Fisher, A\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Toward a dynamic model of psychological assessment:
Implications for personalized care.} {Toward a dynamic model of psychological
assessment: Implications for personalized care.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Consulting and Clinical
Psychology}{83}{4}{825-836}.
\newblock
\begin{APACrefDOI} \doi{10.1037/ccp0000026} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
R\BPBI A.~Fisher%
}{%
R\BPBI A.~Fisher%
}{%
{\protect \APACyear {1921}}%
}]{%
fisher:21}
\APACinsertmetastar {%
fisher:21}%
\begin{APACrefauthors}%
Fisher, R\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1921}{}{}.
\newblock
{\BBOQ}\APACrefatitle {On the ``probable error'' of a coefficient of correlation
deduced from a small sample} {On the ``probable error'' of a coefficient of
correlation deduced from a small sample}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Metron}{1}{}{3-32}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
R\BPBI A.~Fisher%
}{%
R\BPBI A.~Fisher%
}{%
{\protect \APACyear {1925}}%
}]{%
fisher:25}
\APACinsertmetastar {%
fisher:25}%
\begin{APACrefauthors}%
Fisher, R\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1925}.
\newblock
\APACrefbtitle {Statistical methods for research workers} {Statistical methods
for research workers}.
\newblock
\APACaddressPublisher{Edinburgh}{Oliver and Boyd}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Fox%
, Nie%
\BCBL {}\ \BBA {} Byrnes%
}{%
Fox%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2013}}%
}]{%
sem}
\APACinsertmetastar {%
sem}%
\begin{APACrefauthors}%
Fox, J.%
, Nie, Z.%
\BCBL {}\ \BBA {} Byrnes, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2013}{}{}.
\newblock
{\BBOQ}\APACrefatitle {sem: Structural Equation Models} {sem: Structural
equation models}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{http://CRAN.R-project.org/package=sem} \end{APACrefURL}
\newblock
\APACrefnote{R package version 3.1-3}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Fox%
, Nie%
\BCBL {}\ \BBA {} Byrnes%
}{%
Fox%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2016}}%
}]{%
sem:16}
\APACinsertmetastar {%
sem:16}%
\begin{APACrefauthors}%
Fox, J.%
, Nie, Z.%
\BCBL {}\ \BBA {} Byrnes, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {sem: Structural Equation Models} {sem: Structural
equation models}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=sem}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 3.1-7}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Galton%
}{%
Galton%
}{%
{\protect \APACyear {1886}}%
}]{%
galton:86}
\APACinsertmetastar {%
galton:86}%
\begin{APACrefauthors}%
Galton, F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1886}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Regression Towards Mediocrity in Hereditary Stature}
{Regression towards mediocrity in hereditary stature}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of the Anthropological Institute of Great
Britain and Ireland}{15}{}{246-263}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Galton%
}{%
Galton%
}{%
{\protect \APACyear {1888}}%
}]{%
galton:88}
\APACinsertmetastar {%
galton:88}%
\begin{APACrefauthors}%
Galton, F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1888}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Co-relations and their measurement} {Co-relations and
their measurement}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Proceedings of the Royal Society of
London}{45}{}{135-145}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Garcia%
, Schmitt%
, Branscombe%
\BCBL {}\ \BBA {} Ellemers%
}{%
Garcia%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2010}}%
}]{%
garcia:10}
\APACinsertmetastar {%
garcia:10}%
\begin{APACrefauthors}%
Garcia, D\BPBI M.%
, Schmitt, M\BPBI T.%
, Branscombe, N\BPBI R.%
\BCBL {}\ \BBA {} Ellemers, N.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2010}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Women's reactions to ingroup members who protest
discriminatory treatment: The importance of beliefs about inequality and
response appropriateness} {Women's reactions to ingroup members who protest
discriminatory treatment: The importance of beliefs about inequality and
response appropriateness}.{\BBCQ}
\newblock
\APACjournalVolNumPages{European Journal of Social
Psychology}{40}{5}{733--745}.
\newblock
\begin{APACrefDOI} \doi{10.1002/ejsp.644} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Gosling%
, Vazire%
, Srivastava%
\BCBL {}\ \BBA {} John%
}{%
Gosling%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2004}}%
}]{%
gosling:web}
\APACinsertmetastar {%
gosling:web}%
\begin{APACrefauthors}%
Gosling, S\BPBI D.%
, Vazire, S.%
, Srivastava, S.%
\BCBL {}\ \BBA {} John, O\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2004}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Should We Trust Web-Based Studies? {A} Comparative
Analysis of Six Preconceptions About Internet Questionnaires} {Should we
trust web-based studies? {A} comparative analysis of six preconceptions about
internet questionnaires}.{\BBCQ}
\newblock
\APACjournalVolNumPages{American Psychologist}{59}{2}{93-104}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0003-066X.59.2.93} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Gray%
}{%
Gray%
}{%
{\protect \APACyear {1991}}%
}]{%
gray:91}
\APACinsertmetastar {%
gray:91}%
\begin{APACrefauthors}%
Gray, J\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1991}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The neuropsychology of temperament} {The neuropsychology
of temperament}.{\BBCQ}
\newblock
\BIn{} J.~Strelau\ \BBA {} A.~Angleitner\ (\BEDS), \APACrefbtitle {Explorations
in temperament: International perspectives on theory and measurement}
{Explorations in temperament: International perspectives on theory and
measurement}\ (\BPGS~105-128).
\newblock
\APACaddressPublisher{New York, NY}{Plenum Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Gray%
\ \BBA {} McNaughton%
}{%
Gray%
\ \BBA {} McNaughton%
}{%
{\protect \APACyear {2000}}%
}]{%
gray:mcnaughton:00}
\APACinsertmetastar {%
gray:mcnaughton:00}%
\begin{APACrefauthors}%
Gray, J\BPBI A.%
\BCBT {}\ \BBA {} McNaughton, N.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2000}.
\newblock
\APACrefbtitle {The Neuropsychology of anxiety: An enquiry into the functions
of the septo-hippocampal system} {The neuropsychology of anxiety: An enquiry
into the functions of the septo-hippocampal system}\ (\PrintOrdinal{2nd}\
\BEd).
\newblock
\APACaddressPublisher{Oxford}{Oxford University Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Green%
\ \BBA {} Swets%
}{%
Green%
\ \BBA {} Swets%
}{%
{\protect \APACyear {1966}}%
}]{%
green:sdt}
\APACinsertmetastar {%
green:sdt}%
\begin{APACrefauthors}%
Green, D\BPBI M.%
\BCBT {}\ \BBA {} Swets, J\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1966}.
\newblock
\APACrefbtitle {Signal Detection Theory and Psychophysics} {Signal detection
theory and psychophysics}.
\newblock
\APACaddressPublisher{Oxford}{John Wiley}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Grice%
}{%
Grice%
}{%
{\protect \APACyear {2001}}%
}]{%
grice:01}
\APACinsertmetastar {%
grice:01}%
\begin{APACrefauthors}%
Grice, J\BPBI W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2001}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Computing and evaluating factor scores} {Computing and
evaluating factor scores}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{6}{4}{430-450}.
\newblock
\begin{APACrefDOI} \doi{10.1037/1082-989X.6.4.430} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Guo%
, Klevan%
\BCBL {}\ \BBA {} McAdams%
}{%
Guo%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2016}}%
}]{%
guo:16}
\APACinsertmetastar {%
guo:16}%
\begin{APACrefauthors}%
Guo, J.%
, Klevan, M.%
\BCBL {}\ \BBA {} McAdams, D\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Personality Traits, Ego Development, and the Redemptive
Self} {Personality traits, ego development, and the redemptive self}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality and Social Psychology
Bulletin}{42}{11}{1551-1563}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0146167216665093} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Guttman%
}{%
Guttman%
}{%
{\protect \APACyear {1945}}%
}]{%
guttman:45}
\APACinsertmetastar {%
guttman:45}%
\begin{APACrefauthors}%
Guttman, L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1945}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A basis for analyzing test-retest reliability} {A basis
for analyzing test-retest reliability}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{10}{4}{255--282}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02288892} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hamaker%
, Ceulemans%
, Grasman%
\BCBL {}\ \BBA {} Tuerlinckx%
}{%
Hamaker%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2015}}%
}]{%
hamaker:15}
\APACinsertmetastar {%
hamaker:15}%
\begin{APACrefauthors}%
Hamaker, E\BPBI L.%
, Ceulemans, E.%
, Grasman, R.%
\BCBL {}\ \BBA {} Tuerlinckx, F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Modeling affect dynamics: State of the art and future
challenges} {Modeling affect dynamics: State of the art and future
challenges}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Emotion Review}{7}{4}{316--322}.
\newblock
\begin{APACrefDOI} \doi{10.1177/1754073915590619} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hamaker%
\ \BBA {} Wichers%
}{%
Hamaker%
\ \BBA {} Wichers%
}{%
{\protect \APACyear {2017}}%
}]{%
hamaker:17}
\APACinsertmetastar {%
hamaker:17}%
\begin{APACrefauthors}%
Hamaker, E\BPBI L.%
\BCBT {}\ \BBA {} Wichers, M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {No Time Like the Present} {No time like the
present}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Current Directions in Psychological
Science}{26}{1}{10-15}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0963721416666518} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Harman%
}{%
Harman%
}{%
{\protect \APACyear {1976}}%
}]{%
harman:1976}
\APACinsertmetastar {%
harman:1976}%
\begin{APACrefauthors}%
Harman, H\BPBI H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1976}.
\newblock
\APACrefbtitle {Modern factor analysis} {Modern factor analysis}\
(\PrintOrdinal{3rd ed., rev.}\ \BEd).
\newblock
\APACaddressPublisher{Chicago}{University of Chicago Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Harman%
\ \BBA {} Jones%
}{%
Harman%
\ \BBA {} Jones%
}{%
{\protect \APACyear {1966}}%
}]{%
harman:1966}
\APACinsertmetastar {%
harman:1966}%
\begin{APACrefauthors}%
Harman, H\BPBI H.%
\BCBT {}\ \BBA {} Jones, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1966}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Factor analysis by minimizing residuals (minres)}
{Factor analysis by minimizing residuals (minres)}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{31}{3}{351--368}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02289468} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hastie%
, Tibshirani%
\BCBL {}\ \BBA {} Friedman%
}{%
Hastie%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2001}}%
}]{%
hastie:01}
\APACinsertmetastar {%
hastie:01}%
\begin{APACrefauthors}%
Hastie, T.%
, Tibshirani, R.%
\BCBL {}\ \BBA {} Friedman, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2001}.
\newblock
\APACrefbtitle {The elements of statistical learning: Data mining, inference,
and prediction} {The elements of statistical learning: Data mining,
inference, and prediction}\ (\PrintOrdinal{2nd ed}\ \BEd).
\newblock
\APACaddressPublisher{New York}{Springer-Verlag}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hathaway%
\ \BBA {} McKinley%
}{%
Hathaway%
\ \BBA {} McKinley%
}{%
{\protect \APACyear {1943}}%
}]{%
mmpi:43}
\APACinsertmetastar {%
mmpi:43}%
\begin{APACrefauthors}%
Hathaway, S.%
\BCBT {}\ \BBA {} McKinley, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1943}{}{}.
\newblock
\APACrefbtitle {{Manual for administering and scoring the MMPI}.} {{Manual for
administering and scoring the MMPI}.}
\newblock
\APACaddressPublisher{Minneapolis}{University of Minnesota Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hayes%
}{%
Hayes%
}{%
{\protect \APACyear {2013}}%
}]{%
hayes:13}
\APACinsertmetastar {%
hayes:13}%
\begin{APACrefauthors}%
Hayes, A\BPBI F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2013}.
\newblock
\APACrefbtitle {Introduction to mediation, moderation, and conditional process
analysis: A regression-based approach} {Introduction to mediation,
moderation, and conditional process analysis: A regression-based approach}.
\newblock
\APACaddressPublisher{New York}{Guilford Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hendrickson%
\ \BBA {} White%
}{%
Hendrickson%
\ \BBA {} White%
}{%
{\protect \APACyear {1964}}%
}]{%
promax}
\APACinsertmetastar {%
promax}%
\begin{APACrefauthors}%
Hendrickson, A\BPBI E.%
\BCBT {}\ \BBA {} White, P\BPBI O.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1964}{}{}.
\newblock
{\BBOQ}\APACrefatitle {PROMAX: A quick method for rotation to oblique simple
structure} {Promax: A quick method for rotation to oblique simple
structure}.{\BBCQ}
\newblock
\APACjournalVolNumPages{British Journal of Statistical
Psychology}{17}{}{65-70}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.2044-8317.1964.tb00244.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Henrich%
, Heine%
\BCBL {}\ \BBA {} Norenzayan%
}{%
Henrich%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2010}}%
}]{%
weird:10}
\APACinsertmetastar {%
weird:10}%
\begin{APACrefauthors}%
Henrich, J.%
, Heine, S\BPBI J.%
\BCBL {}\ \BBA {} Norenzayan, A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2010}{6}{}.
\newblock
{\BBOQ}\APACrefatitle {The weirdest people in the world?} {The weirdest people
in the world?}{\BBCQ}
\newblock
\APACjournalVolNumPages{Behavioral and Brain Sciences}{33}{}{61--83}.
\newblock
\begin{APACrefDOI} \doi{10.1017/S0140525X0999152X} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Hofmann%
}{%
Hofmann%
}{%
{\protect \APACyear {1978}}%
}]{%
hofmann:78}
\APACinsertmetastar {%
hofmann:78}%
\begin{APACrefauthors}%
Hofmann, R\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1978}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Complexity and simplicity as objective indices
descriptive of factor solutions} {Complexity and simplicity as objective
indices descriptive of factor solutions}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{13}{2}{247-250}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15327906mbr1302_9} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Holzinger%
\ \BBA {} Swineford%
}{%
Holzinger%
\ \BBA {} Swineford%
}{%
{\protect \APACyear {1937}}%
}]{%
holzinger:37}
\APACinsertmetastar {%
holzinger:37}%
\begin{APACrefauthors}%
Holzinger, K.%
\BCBT {}\ \BBA {} Swineford, F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1937}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Bi-factor method} {The bi-factor method}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{2}{1}{41--54}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02287965} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Horn%
}{%
Horn%
}{%
{\protect \APACyear {1965}}%
}]{%
horn:65}
\APACinsertmetastar {%
horn:65}%
\begin{APACrefauthors}%
Horn, J\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1965}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A rationale and test for the number of factors in factor
analysis} {A rationale and test for the number of factors in factor
analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{30}{2}{179--185}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02289447} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Horn%
\ \BBA {} Engstrom%
}{%
Horn%
\ \BBA {} Engstrom%
}{%
{\protect \APACyear {1979}}%
}]{%
horn:79}
\APACinsertmetastar {%
horn:79}%
\begin{APACrefauthors}%
Horn, J\BPBI L.%
\BCBT {}\ \BBA {} Engstrom, R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1979}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Cattell's scree test in relation to {Bartlett's}
chi-square test and other observations on the number of factors problem}
{Cattell's scree test in relation to {Bartlett's} chi-square test and other
observations on the number of factors problem}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{14}{3}{283-300}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15327906mbr1403_1} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Isaacson%
}{%
Isaacson%
}{%
{\protect \APACyear {2014}}%
}]{%
isaacson}
\APACinsertmetastar {%
isaacson}%
\begin{APACrefauthors}%
Isaacson, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2014}.
\newblock
\APACrefbtitle {The Innovators: How a Group of Inventors, Hackers, Geniuses and
Geeks Created the Digital Revolution} {The innovators: How a group of
inventors, hackers, geniuses and geeks created the digital revolution}.
\newblock
\APACaddressPublisher{}{Simon and Schuster}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
G.~James%
, Witten%
, Hastie%
\BCBL {}\ \BBA {} Tibshirani%
}{%
G.~James%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2013}}%
}]{%
james:13}
\APACinsertmetastar {%
james:13}%
\begin{APACrefauthors}%
James, G.%
, Witten, D.%
, Hastie, T.%
\BCBL {}\ \BBA {} Tibshirani, R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2013}.
\newblock
\APACrefbtitle {An introduction to statistical learning} {An introduction to
statistical learning}\ (\BVOL~112).
\newblock
\APACaddressPublisher{}{Springer}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
L\BPBI R.~James%
}{%
L\BPBI R.~James%
}{%
{\protect \APACyear {1982}}%
}]{%
james:82}
\APACinsertmetastar {%
james:82}%
\begin{APACrefauthors}%
James, L\BPBI R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1982}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Aggregation bias in estimates of perceptual agreement.}
{Aggregation bias in estimates of perceptual agreement.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Applied Psychology}{67}{2}{219--229}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0021-9010.67.2.219} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Jennrich%
}{%
Jennrich%
}{%
{\protect \APACyear {1979}}%
}]{%
jennrich:79}
\APACinsertmetastar {%
jennrich:79}%
\begin{APACrefauthors}%
Jennrich, R\BPBI I.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1979}{Jun}{01}.
\newblock
{\BBOQ}\APACrefatitle {Admissible values of $\gamma$ in direct oblimin
rotation} {Admissible values of $\gamma$ in direct oblimin rotation}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{44}{2}{173--177}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02293969} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Jokela%
, Bleidorn%
, Lamb%
, Gosling%
\BCBL {}\ \BBA {} Rentfrow%
}{%
Jokela%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2015}}%
}]{%
jokela:15}
\APACinsertmetastar {%
jokela:15}%
\begin{APACrefauthors}%
Jokela, M.%
, Bleidorn, W.%
, Lamb, M\BPBI E.%
, Gosling, S\BPBI D.%
\BCBL {}\ \BBA {} Rentfrow, P\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Geographically varying associations between personality
and life satisfaction in the London metropolitan area} {Geographically
  varying associations between personality and life satisfaction in the
  {London} metropolitan area}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Proceedings of the National Academy of
Sciences}{112}{3}{725--730}.
\newblock
\begin{APACrefDOI} \doi{10.1073/pnas.1415800112} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
J\"{o}reskog%
}{%
J\"{o}reskog%
}{%
{\protect \APACyear {1977}}%
}]{%
joreskog:77}
\APACinsertmetastar {%
joreskog:77}%
\begin{APACrefauthors}%
J\"{o}reskog, K\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1977}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Applications of statistics: proceedings of the symposium
held at {Wright State University}} {Applications of statistics: proceedings
of the symposium held at {Wright State University}}.{\BBCQ}
\newblock
\BIn{} P.~Krishnaiah\ (\BED), (\BCHAPS\ Structural Equation Models in the
  social sciences: Specification, estimation, and testing).
\newblock
\APACaddressPublisher{}{North Holland}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
J\"{o}reskog%
}{%
J\"{o}reskog%
}{%
{\protect \APACyear {1978}}%
}]{%
joreskog:78}
\APACinsertmetastar {%
joreskog:78}%
\begin{APACrefauthors}%
J\"{o}reskog, K\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1978}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Structural analysis of covariance and correlation
matrices} {Structural analysis of covariance and correlation
matrices}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{43}{4}{443--477}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02293808} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
J{\"o}reskog%
\ \BBA {} Goldberger%
}{%
J{\"o}reskog%
\ \BBA {} Goldberger%
}{%
{\protect \APACyear {1975}}%
}]{%
joreskog:mimic}
\APACinsertmetastar {%
joreskog:mimic}%
\begin{APACrefauthors}%
J{\"o}reskog, K\BPBI G.%
\BCBT {}\ \BBA {} Goldberger, A\BPBI S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1975}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Estimation of a Model with Multiple Indicators and
Multiple Causes of a Single Latent Variable,} {Estimation of a model with
multiple indicators and multiple causes of a single latent variable,}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of the American Statistical
Association}{70}{351a}{631-639}.
\newblock
\begin{APACrefDOI} \doi{10.1080/01621459.1975.10482485} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
J\"{o}reskog%
\ \BBA {} S\"{o}rbom%
}{%
J\"{o}reskog%
\ \BBA {} S\"{o}rbom%
}{%
{\protect \APACyear {1993}}%
}]{%
joreskog:93}
\APACinsertmetastar {%
joreskog:93}%
\begin{APACrefauthors}%
J\"{o}reskog, K\BPBI G.%
\BCBT {}\ \BBA {} S\"{o}rbom, D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1993}.
\newblock
\APACrefbtitle {LISREL 8: Structural equation modeling with the SIMPLIS command
  language} {{LISREL} 8: Structural equation modeling with the {SIMPLIS}
  command language}.
\newblock
\APACaddressPublisher{}{Lawrence Erlbaum Associates, Inc.}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Judd%
\ \BBA {} McClelland%
}{%
Judd%
\ \BBA {} McClelland%
}{%
{\protect \APACyear {1989}}%
}]{%
judd:mc}
\APACinsertmetastar {%
judd:mc}%
\begin{APACrefauthors}%
Judd, C\BPBI M.%
\BCBT {}\ \BBA {} McClelland, G\BPBI H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1989}.
\newblock
\APACrefbtitle {Data analysis : a model-comparison approach} {Data analysis : a
model-comparison approach}.
\newblock
\APACaddressPublisher{San Diego}{Harcourt Brace Jovanovich}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kaiser%
}{%
Kaiser%
}{%
{\protect \APACyear {1958}}%
}]{%
kaiser:58}
\APACinsertmetastar {%
kaiser:58}%
\begin{APACrefauthors}%
Kaiser, H\BPBI F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1958}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The varimax criterion for analytic rotation in factor
analysis} {The varimax criterion for analytic rotation in factor
analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{23}{3}{187--200}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02289233} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kaiser%
}{%
Kaiser%
}{%
{\protect \APACyear {1970}}%
}]{%
kaiser:70}
\APACinsertmetastar {%
kaiser:70}%
\begin{APACrefauthors}%
Kaiser, H\BPBI F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1970}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A second generation little jiffy} {A second generation
little jiffy}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{35}{4}{401-415}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02291817} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kaiser%
\ \BBA {} Caffrey%
}{%
Kaiser%
\ \BBA {} Caffrey%
}{%
{\protect \APACyear {1965}}%
}]{%
kaiser:65}
\APACinsertmetastar {%
kaiser:65}%
\begin{APACrefauthors}%
Kaiser, H\BPBI F.%
\BCBT {}\ \BBA {} Caffrey, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1965}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Alpha factor analysis} {Alpha factor analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{30}{1}{1--14}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02289743} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Keesling%
}{%
Keesling%
}{%
{\protect \APACyear {1972}}%
}]{%
keesling}
\APACinsertmetastar {%
keesling}%
\begin{APACrefauthors}%
Keesling, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1972}.
\unskip\
\newblock
\APACrefbtitle {Maximum likelihood approaches to causal flow analysis.}
{Maximum likelihood approaches to causal flow analysis.}\
\APACtypeAddressSchool {\BUPhD}{}{}.
\unskip\
\newblock
\APACaddressSchool {}{University of Chicago}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kelley%
}{%
Kelley%
}{%
{\protect \APACyear {2017}}%
}]{%
MBESS}
\APACinsertmetastar {%
MBESS}%
\begin{APACrefauthors}%
Kelley, K.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{MBESS: The MBESS R} Package} {{MBESS: The MBESS R}
package}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=MBESS}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 4.4.1}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kievit%
\ \BBA {} Epskamp%
}{%
Kievit%
\ \BBA {} Epskamp%
}{%
{\protect \APACyear {2012}}%
}]{%
simpsons:12}
\APACinsertmetastar {%
simpsons:12}%
\begin{APACrefauthors}%
Kievit, R\BPBI A.%
\BCBT {}\ \BBA {} Epskamp, S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2012}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Simpsons: Detecting {Simpson's Paradox}. {R} package
version 0.1.0.} {Simpsons: Detecting {Simpson's Paradox}. {R} package version
0.1.0.}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{http://CRAN.R-project.org/package=Simpsons}
\end{APACrefURL}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kievit%
, Frankenhuis%
, Waldorp%
\BCBL {}\ \BBA {} Borsboom%
}{%
Kievit%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2013}}%
}]{%
kievit:13}
\APACinsertmetastar {%
kievit:13}%
\begin{APACrefauthors}%
Kievit, R\BPBI A.%
, Frankenhuis, W\BPBI E.%
, Waldorp, L\BPBI J.%
\BCBL {}\ \BBA {} Borsboom, D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2013}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Simpson's paradox in psychological science: a practical
guide.} {Simpson's paradox in psychological science: a practical
guide.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Frontiers in Psychology}{4}{513}{1-14}.
\newblock
\begin{APACrefDOI} \doi{10.3389/fpsyg.2013.00513} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Kuder%
\ \BBA {} Richardson%
}{%
Kuder%
\ \BBA {} Richardson%
}{%
{\protect \APACyear {1937}}%
}]{%
kuder:37}
\APACinsertmetastar {%
kuder:37}%
\begin{APACrefauthors}%
Kuder, G.%
\BCBT {}\ \BBA {} Richardson, M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1937}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The theory of the estimation of test reliability} {The
theory of the estimation of test reliability}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{2}{3}{151-160}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02288391} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Lawley%
\ \BBA {} Maxwell%
}{%
Lawley%
\ \BBA {} Maxwell%
}{%
{\protect \APACyear {1962}}%
}]{%
lawley:62}
\APACinsertmetastar {%
lawley:62}%
\begin{APACrefauthors}%
Lawley, D\BPBI N.%
\BCBT {}\ \BBA {} Maxwell, A\BPBI E.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1962}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Factor Analysis as a Statistical Method} {Factor
analysis as a statistical method}.{\BBCQ}
\newblock
\APACjournalVolNumPages{The Statistician}{12}{3}{209--229}.
\newblock
\begin{APACrefDOI} \doi{10.2307/2986915} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Lawley%
\ \BBA {} Maxwell%
}{%
Lawley%
\ \BBA {} Maxwell%
}{%
{\protect \APACyear {1963}}%
}]{%
lawley:63}
\APACinsertmetastar {%
lawley:63}%
\begin{APACrefauthors}%
Lawley, D\BPBI N.%
\BCBT {}\ \BBA {} Maxwell, A\BPBI E.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1963}.
\newblock
\APACrefbtitle {Factor analysis as a statistical method} {Factor analysis as a
statistical method}.
\newblock
\APACaddressPublisher{London}{Butterworths}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Lee%
, MacCallum%
\BCBL {}\ \BBA {} Browne%
}{%
Lee%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2018}}%
}]{%
maccallum:fungible}
\APACinsertmetastar {%
maccallum:fungible}%
\begin{APACrefauthors}%
Lee, T.%
, MacCallum, R\BPBI C.%
\BCBL {}\ \BBA {} Browne, M\BPBI W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Fungible parameter estimates in structural equation
modeling.} {Fungible parameter estimates in structural equation
modeling.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{23}{1}{58--75}.
\newblock
\begin{APACrefDOI} \doi{10.1037/met0000130} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Liaw%
\ \BBA {} Wiener%
}{%
Liaw%
\ \BBA {} Wiener%
}{%
{\protect \APACyear {2002}}%
}]{%
liaw:02}
\APACinsertmetastar {%
liaw:02}%
\begin{APACrefauthors}%
Liaw, A.%
\BCBT {}\ \BBA {} Wiener, M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2002}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Classification and regression by randomForest}
{Classification and regression by randomforest}.{\BBCQ}
\newblock
\APACjournalVolNumPages{R News}{2}{3}{18--22}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Loehlin%
}{%
Loehlin%
}{%
{\protect \APACyear {2004}}%
}]{%
loehlin:04}
\APACinsertmetastar {%
loehlin:04}%
\begin{APACrefauthors}%
Loehlin, J\BPBI C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2004}.
\newblock
\APACrefbtitle {Latent variable models: an introduction to factor, path, and
structural equation analysis} {Latent variable models: an introduction to
factor, path, and structural equation analysis}\ (\PrintOrdinal{4th}\ \BEd).
\newblock
\APACaddressPublisher{Mahwah, N.J.}{L. Erlbaum Associates}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Loehlin%
\ \BBA {} Beaujean%
}{%
Loehlin%
\ \BBA {} Beaujean%
}{%
{\protect \APACyear {2017}}%
}]{%
loehlin:17}
\APACinsertmetastar {%
loehlin:17}%
\begin{APACrefauthors}%
Loehlin, J\BPBI C.%
\BCBT {}\ \BBA {} Beaujean, A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2017}.
\newblock
\APACrefbtitle {Latent variable models: an introduction to factor, path, and
structural equation analysis} {Latent variable models: an introduction to
factor, path, and structural equation analysis}\ (\PrintOrdinal{5th}\ \BEd).
\newblock
\APACaddressPublisher{Mahwah, N.J.}{Routledge}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Loevinger%
}{%
Loevinger%
}{%
{\protect \APACyear {1957}}%
}]{%
loevinger:57}
\APACinsertmetastar {%
loevinger:57}%
\begin{APACrefauthors}%
Loevinger, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1957}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Objective tests as instruments of psychological theory}
{Objective tests as instruments of psychological theory}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Reports}{3}{Monograph Supplement
  9}{635--694}.
\newblock
\begin{APACrefDOI} \doi{10.2466/pr0.1957.3.3.635} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Lovelace%
}{%
Lovelace%
}{%
{\protect \APACyear {1842}}%
}]{%
lovelace:42}
\APACinsertmetastar {%
lovelace:42}%
\begin{APACrefauthors}%
Lovelace, A\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1842}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Sketch of the Analytical Engine invented by {Charles
Babbage}, by {LF Menabrea}, Officer of the Military Engineers, with notes
upon the memoir by the Translator} {Sketch of the analytical engine invented
by {Charles Babbage}, by {LF Menabrea}, officer of the military engineers,
with notes upon the memoir by the translator}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Taylor's Scientific Memoirs}{3}{}{666--731}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
MacCallum%
, Browne%
\BCBL {}\ \BBA {} Cai%
}{%
MacCallum%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2007}}%
}]{%
maccallum:07}
\APACinsertmetastar {%
maccallum:07}%
\begin{APACrefauthors}%
MacCallum, R\BPBI C.%
, Browne, M\BPBI W.%
\BCBL {}\ \BBA {} Cai, L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2007}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Factor analysis models as approximations} {Factor
analysis models as approximations}.{\BBCQ}
\newblock
\BIn{} R.~Cudeck\ \BBA {} R\BPBI C.~MacCallum\ (\BEDS), \APACrefbtitle {Factor
analysis at 100: Historical developments and future directions} {Factor
analysis at 100: Historical developments and future directions}\
(\BPG~153-175).
\newblock
\APACaddressPublisher{Mahwah, NJ}{Lawrence Erlbaum Associates Publishers}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
MacCallum%
, Wegener%
, Uchino%
\BCBL {}\ \BBA {} Fabrigar%
}{%
MacCallum%
\ \protect \BOthers {.}}{%
{\protect \APACyear {1993}}%
}]{%
maccallum:93}
\APACinsertmetastar {%
maccallum:93}%
\begin{APACrefauthors}%
MacCallum, R\BPBI C.%
, Wegener, D\BPBI T.%
, Uchino, B\BPBI N.%
\BCBL {}\ \BBA {} Fabrigar, L\BPBI R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1993}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The problem of equivalent models in applications of
covariance structure analysis.} {The problem of equivalent models in
applications of covariance structure analysis.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Bulletin}{114}{1}{185--199}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0033-2909.114.1.185} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
MacKinnon%
}{%
MacKinnon%
}{%
{\protect \APACyear {2008}}%
}]{%
mackinnon:08}
\APACinsertmetastar {%
mackinnon:08}%
\begin{APACrefauthors}%
MacKinnon, D\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2008}.
\newblock
\APACrefbtitle {Introduction to statistical mediation analysis} {Introduction
to statistical mediation analysis}.
\newblock
\APACaddressPublisher{New York, NY US}{Lawrence Erlbaum Associates Taylor \&
Francis Group}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Mair%
, Schoenbrodt%
\BCBL {}\ \BBA {} Wilcox%
}{%
Mair%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2017}}%
}]{%
WRS2}
\APACinsertmetastar {%
WRS2}%
\begin{APACrefauthors}%
Mair, P.%
, Schoenbrodt, F.%
\BCBL {}\ \BBA {} Wilcox, R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{WRS2: Wilcox robust estimation and testing}} {{WRS2:
Wilcox robust estimation and testing}}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://cran.r-project.org/web/packages/WRS2/}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 0.9-2}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Mardia%
}{%
Mardia%
}{%
{\protect \APACyear {1970}}%
}]{%
mardia:70}
\APACinsertmetastar {%
mardia:70}%
\begin{APACrefauthors}%
Mardia, K\BPBI V.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1970}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Measures of Multivariate Skewness and Kurtosis with
Applications} {Measures of multivariate skewness and kurtosis with
applications}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Biometrika}{57}{3}{519--530}.
\newblock
\begin{APACrefDOI} \doi{10.1093/biomet/57.3.519} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Marsh%
, Hau%
\BCBL {}\ \BBA {} Wen%
}{%
Marsh%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2004}}%
}]{%
marsh:04}
\APACinsertmetastar {%
marsh:04}%
\begin{APACrefauthors}%
Marsh, H\BPBI W.%
, Hau, K\BHBI T.%
\BCBL {}\ \BBA {} Wen, Z.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2004}{}{}.
\newblock
{\BBOQ}\APACrefatitle {In Search of Golden Rules: Comment on Hypothesis-Testing
  Approaches to Setting Cutoff Values for Fit Indexes and Dangers in
  Overgeneralizing Hu and Bentler's (1999) Findings} {In search of golden
  rules: Comment on hypothesis-testing approaches to setting cutoff values for
  fit indexes and dangers in overgeneralizing {Hu} and {Bentler's} (1999)
  findings}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Structural Equation Modeling: A Multidisciplinary
Journal}{11}{3}{320-341}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15328007sem1103_2} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
McArdle%
}{%
McArdle%
}{%
{\protect \APACyear {2009}}%
}]{%
mcardle:09}
\APACinsertmetastar {%
mcardle:09}%
\begin{APACrefauthors}%
McArdle, J\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2009}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Latent variable modeling of differences and changes with
longitudinal data} {Latent variable modeling of differences and changes with
longitudinal data}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Annual Review of Psychology}{60}{}{577-605}.
\newblock
\begin{APACrefDOI} \doi{10.1146/annurev.psych.60.110707.163612}
\end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
McArdle%
\ \BBA {} Bell%
}{%
McArdle%
\ \BBA {} Bell%
}{%
{\protect \APACyear {2000}}%
}]{%
mcardle:lca}
\APACinsertmetastar {%
mcardle:lca}%
\begin{APACrefauthors}%
McArdle, J\BPBI J.%
\BCBT {}\ \BBA {} Bell, R\BPBI Q.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2000}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Recent trends in modeling longitudinal data by latent
growth curve methods} {Recent trends in modeling longitudinal data by latent
growth curve methods}.{\BBCQ}
\newblock
\BIn{} T\BPBI D.~Little, K\BPBI U.~Schnabel\BCBL {}\ \BBA {} J.~Baumert\
(\BEDS), \APACrefbtitle {Modeling longitudinal and multiple-group data:
practical issues, applied approaches, and scientific examples} {Modeling
longitudinal and multiple-group data: practical issues, applied approaches,
and scientific examples}\ (\BPG~69-107).
\newblock
\APACaddressPublisher{Mahwah, NJ}{Lawrence Erlbaum Associates}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
McCrae%
\ \BBA {} Terracciano%
}{%
McCrae%
\ \BBA {} Terracciano%
}{%
{\protect \APACyear {2008}}%
}]{%
mccrae:terracciano:08}
\APACinsertmetastar {%
mccrae:terracciano:08}%
\begin{APACrefauthors}%
McCrae, R\BPBI R.%
\BCBT {}\ \BBA {} Terracciano, A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2008}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Multilevel analysis of individuals and cultures.}
{Multilevel analysis of individuals and cultures.}{\BBCQ}
\newblock
\BIn{} F\BPBI J\BPBI R.~van~de Vijver, D\BPBI A.~van Hemert\BCBL {}\ \BBA {}
  Y.~Poortinga\ (\BEDS), (\BPGS\ 249--283).
\newblock
\APACaddressPublisher{New York, NY}{Taylor \& Francis Group/Lawrence Erlbaum
Associates}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
McDonald%
}{%
McDonald%
}{%
{\protect \APACyear {1985}}%
}]{%
mcdonald:85}
\APACinsertmetastar {%
mcdonald:85}%
\begin{APACrefauthors}%
McDonald, R\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1985}.
\newblock
\APACrefbtitle {Factor Analysis and Related Methods} {Factor analysis and
related methods}.
\newblock
\APACaddressPublisher{Hillsdale, NJ}{Erlbaum}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
McDonald%
}{%
McDonald%
}{%
{\protect \APACyear {1999}}%
}]{%
mcdonald:tt}
\APACinsertmetastar {%
mcdonald:tt}%
\begin{APACrefauthors}%
McDonald, R\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1999}.
\newblock
\APACrefbtitle {Test theory: {A} unified treatment} {Test theory: {A} unified
treatment}.
\newblock
\APACaddressPublisher{Mahwah, N.J.}{L. Erlbaum Associates}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
McDonald%
\ \BBA {} Ho%
}{%
McDonald%
\ \BBA {} Ho%
}{%
{\protect \APACyear {2002}}%
}]{%
mcdonald:02}
\APACinsertmetastar {%
mcdonald:02}%
\begin{APACrefauthors}%
McDonald, R\BPBI P.%
\BCBT {}\ \BBA {} Ho, M\BHBI H\BPBI R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2002}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Principles and practice in reporting structural equation
analyses.} {Principles and practice in reporting structural equation
analyses.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{7}{1}{64-82}.
\newblock
\begin{APACrefDOI} \doi{10.1037/1082-989X.7.1.64} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Merkle%
\ \BBA {} Rosseel%
}{%
Merkle%
\ \BBA {} Rosseel%
}{%
{\protect \APACyear {2016}}%
}]{%
blavaan}
\APACinsertmetastar {%
blavaan}%
\begin{APACrefauthors}%
Merkle, E\BPBI C.%
\BCBT {}\ \BBA {} Rosseel, Y.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{November}{}.
\newblock
{\BBOQ}\APACrefatitle {blavaan: Bayesian structural equation models via
  parameter expansion} {blavaan: Bayesian structural equation models via
  parameter expansion}.{\BBCQ}
\newblock
\APACjournalVolNumPages{arXiv}{1511.05604}{}{}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Mulaik%
}{%
Mulaik%
}{%
{\protect \APACyear {2009}}%
}]{%
mulaik:09}
\APACinsertmetastar {%
mulaik:09}%
\begin{APACrefauthors}%
Mulaik, S\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2009}.
\newblock
\APACrefbtitle {Linear causal modeling with structural equations} {Linear
causal modeling with structural equations}.
\newblock
\APACaddressPublisher{Boca Raton}{CRC Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Muth{\'e}n%
\ \BBA {} Muth{\'e}n%
}{%
Muth{\'e}n%
\ \BBA {} Muth{\'e}n%
}{%
{\protect \APACyear {2007}}%
}]{%
mplus}
\APACinsertmetastar {%
mplus}%
\begin{APACrefauthors}%
Muth{\'e}n, L.%
\BCBT {}\ \BBA {} Muth{\'e}n, B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2007}.
\newblock
\APACrefbtitle {Mplus User's Guide} {Mplus user's guide}\ (\PrintOrdinal{5th}\
  \BEd).
\newblock
\APACaddressPublisher{Los Angeles, CA}{Muth{\'e}n \& Muth{\'e}n}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Neale%
\ \protect \BOthers {.}}{%
Neale%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2016}}%
}]{%
OpenMX}
\APACinsertmetastar {%
OpenMX}%
\begin{APACrefauthors}%
Neale, M\BPBI C.%
, Hunter, M\BPBI D.%
, Pritikin, J\BPBI N.%
, Zahery, M.%
, Brick, T\BPBI R.%
, Kirkpatrick, R\BPBI M.%
\BDBL {}Boker, S\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Open{M}x 2.0: {E}xtended structural equation and
statistical modeling} {Open{M}x 2.0: {E}xtended structural equation and
statistical modeling}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{}{}{}.
\newblock
\begin{APACrefDOI} \doi{10.1007/s11336-014-9435-8} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Nesselroade%
\ \BBA {} Molenaar%
}{%
Nesselroade%
\ \BBA {} Molenaar%
}{%
{\protect \APACyear {2016}}%
}]{%
nesselroade:mbr:15}
\APACinsertmetastar {%
nesselroade:mbr:15}%
\begin{APACrefauthors}%
Nesselroade, J\BPBI R.%
\BCBT {}\ \BBA {} Molenaar, P\BPBI C\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Some Behavioral Science Measurement Concerns and
Proposals} {Some behavioral science measurement concerns and
proposals}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{51}{}{396-412}.
\newblock
\begin{APACrefDOI} \doi{10.1080/00273171.2015.1050481} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Neuhaus%
\ \BBA {} Wrigley%
}{%
Neuhaus%
\ \BBA {} Wrigley%
}{%
{\protect \APACyear {1954}}%
}]{%
neuhaus}
\APACinsertmetastar {%
neuhaus}%
\begin{APACrefauthors}%
Neuhaus, J.%
\BCBT {}\ \BBA {} Wrigley, C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1954}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The quartimax method: an analytical approach to
orthogonal simple structure} {The quartimax method: an analytical approach to
orthogonal simple structure}.{\BBCQ}
\newblock
\APACjournalVolNumPages{British Journal of Statistical Psychology}{7}{}{81-91}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.2044-8317.1954.tb00147.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Ozer%
}{%
Ozer%
}{%
{\protect \APACyear {2007}}%
}]{%
ozer:07}
\APACinsertmetastar {%
ozer:07}%
\begin{APACrefauthors}%
Ozer, D\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2007}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Evaluating effect size in personality research}
{Evaluating effect size in personality research}.{\BBCQ}
\newblock
\BIn{} R\BPBI W.~Robins, R\BPBI C.~Fraley\BCBL {}\ \BBA {} R\BPBI F.~Krueger\
(\BEDS), \APACrefbtitle {Handbook of research methods in personality
psychology} {Handbook of research methods in personality psychology}\
(\BPG~495-501).
\newblock
\APACaddressPublisher{New York, NY}{Guilford Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Pearson%
}{%
Pearson%
}{%
{\protect \APACyear {1895}}%
}]{%
pearson:95}
\APACinsertmetastar {%
pearson:95}%
\begin{APACrefauthors}%
Pearson, K.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1895}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Note on regression and inheritance in the case of two
parents} {Note on regression and inheritance in the case of two
parents}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Proceedings of the Royal Society of
  London}{LVIII}{}{240-242}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Pearson%
}{%
Pearson%
}{%
{\protect \APACyear {1896}}%
}]{%
pearson:96}
\APACinsertmetastar {%
pearson:96}%
\begin{APACrefauthors}%
Pearson, K.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1896}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Mathematical contributions to the theory of evolution.
  III. Regression, heredity, and panmixia.} {Mathematical contributions to the
  theory of evolution. {III}. Regression, heredity, and panmixia.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Philosophical Transactions of the Royal Society of
  London. Series A}{187}{}{254-318}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Pearson%
}{%
Pearson%
}{%
{\protect \APACyear {1920}}%
}]{%
pearson:20}
\APACinsertmetastar {%
pearson:20}%
\begin{APACrefauthors}%
Pearson, K.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1920}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Notes on the history of correlation} {Notes on the
history of correlation}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Biometrika}{13}{1}{25-45}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Pearson%
\ \BBA {} Heron%
}{%
Pearson%
\ \BBA {} Heron%
}{%
{\protect \APACyear {1913}}%
}]{%
pearson:1913}
\APACinsertmetastar {%
pearson:1913}%
\begin{APACrefauthors}%
Pearson, K.%
\BCBT {}\ \BBA {} Heron, D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1913}{}{}.
\newblock
{\BBOQ}\APACrefatitle {On Theories of Association} {On theories of
association}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Biometrika}{9}{1/2}{159-315}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Pek%
\ \BBA {} Flora%
}{%
Pek%
\ \BBA {} Flora%
}{%
{\protect \APACyear {2018}}%
}]{%
pek:flora:18}
\APACinsertmetastar {%
pek:flora:18}%
\begin{APACrefauthors}%
Pek, J.%
\BCBT {}\ \BBA {} Flora, D\BPBI B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Reporting Effect Sizes in Original Psychological
Research: A Discussion and Tutorial.} {Reporting effect sizes in original
psychological research: A discussion and tutorial.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{23}{2}{208-225}.
\newblock
\begin{APACrefDOI} \doi{10.1037/met0000126} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Pickering%
}{%
Pickering%
}{%
{\protect \APACyear {2008}}%
}]{%
pickering:08}
\APACinsertmetastar {%
pickering:08}%
\begin{APACrefauthors}%
Pickering, A\BPBI D.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2008}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Formal and computational models of reinforcement
sensitivity theory} {Formal and computational models of reinforcement
sensitivity theory}.{\BBCQ}
\newblock
\BIn{} P\BPBI J.~Corr\ (\BED), \APACrefbtitle {The Reinforcement Sensitivity
Theory} {The reinforcement sensitivity theory}\ (\BPGS\ 453--481).
\newblock
\APACaddressPublisher{Cambridge}{Cambridge University Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Plato%
}{%
Plato%
}{%
{\protect \APACyear {1892}}%
}]{%
plato}
\APACinsertmetastar {%
plato}%
\begin{APACrefauthors}%
Plato.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1892}.
\newblock
\APACrefbtitle {The {Republic} : the complete and unabridged {Jowett}
translation} {The {Republic} : the complete and unabridged {Jowett}
translation}\ (\PrintOrdinal{3rd}\ \BEd; B.~Jowett, \BED{}).
\newblock
\APACaddressPublisher{Oxford}{Oxford University Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Preacher%
}{%
Preacher%
}{%
{\protect \APACyear {2015}}%
}]{%
preacher:15}
\APACinsertmetastar {%
preacher:15}%
\begin{APACrefauthors}%
Preacher, K\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Advances in Mediation Analysis: A Survey and Synthesis
of New Developments} {Advances in mediation analysis: A survey and synthesis
of new developments}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Annual Review of Psychology}{66}{}{825-852}.
\newblock
\begin{APACrefDOI} \doi{10.1146/annurev-psych-010814-015258} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Preacher%
, Rucker%
\BCBL {}\ \BBA {} Hayes%
}{%
Preacher%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2007}}%
}]{%
preacher:07}
\APACinsertmetastar {%
preacher:07}%
\begin{APACrefauthors}%
Preacher, K\BPBI J.%
, Rucker, D\BPBI D.%
\BCBL {}\ \BBA {} Hayes, A\BPBI F.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2007}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Addressing moderated mediation hypotheses: Theory,
methods, and prescriptions} {Addressing moderated mediation hypotheses:
Theory, methods, and prescriptions}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate behavioral research}{42}{1}{185--227}.
\newblock
\begin{APACrefDOI} \doi{10.1080/00273170701341316} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
{R Core Team}%
}{%
{R Core Team}%
}{%
{\protect \APACyear {2018}}%
}]{%
R}
\APACinsertmetastar {%
R}%
\begin{APACrefauthors}%
{R Core Team}.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {R: A Language and Environment for Statistical Computing}
{R: A language and environment for statistical computing}{\BBCQ}\
[\bibcomputersoftwaremanual].
\newblock
\APACaddressPublisher{Vienna, Austria}{R Foundation for Statistical
Computing}.
\newblock
\begin{APACrefURL} \url{https://www.R-project.org/} \end{APACrefURL}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Read%
, Brown%
, Wang%
\BCBL {}\ \BBA {} Miller%
}{%
Read%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2018}}%
}]{%
read:18}
\APACinsertmetastar {%
read:18}%
\begin{APACrefauthors}%
Read, S\BPBI J.%
, Brown, A\BPBI D.%
, Wang, P.%
\BCBL {}\ \BBA {} Miller, L\BPBI C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Virtual Personalities Neural Network Model:
Neurobiological Underpinnings.} {The virtual personalities neural network
model: Neurobiological underpinnings.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality Neuroscience}{}{}{}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Read%
\ \protect \BOthers {.}}{%
Read%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2010}}%
}]{%
read:10}
\APACinsertmetastar {%
read:10}%
\begin{APACrefauthors}%
Read, S\BPBI J.%
, Monroe, B\BPBI M.%
, Brownstein, A\BPBI L.%
, Yang, Y.%
, Chopra, G.%
\BCBL {}\ \BBA {} Miller, L\BPBI C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2010}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A neural network model of the structure and dynamics of
human personality.} {A neural network model of the structure and dynamics of
human personality.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Review}{117}{1}{61-92}.
\newblock
\begin{APACrefDOI} \doi{10.1037/a0018131} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Read%
, Vanman%
\BCBL {}\ \BBA {} Miller%
}{%
Read%
\ \protect \BOthers {.}}{%
{\protect \APACyear {1997}}%
}]{%
read:97}
\APACinsertmetastar {%
read:97}%
\begin{APACrefauthors}%
Read, S\BPBI J.%
, Vanman, E\BPBI J.%
\BCBL {}\ \BBA {} Miller, L\BPBI C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1997}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Connectionism, Parallel Constraint Satisfaction
Processes, and Gestalt Principles: (Re)Introducing Cognitive Dynamics to
Social Psychology} {Connectionism, parallel constraint satisfaction
processes, and gestalt principles: (re)introducing cognitive dynamics to
social psychology}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality and Social Psychology Review}{1}{1}{26-53}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15327957pspr0101_3} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Reise%
}{%
Reise%
}{%
{\protect \APACyear {2012}}%
}]{%
reise:12}
\APACinsertmetastar {%
reise:12}%
\begin{APACrefauthors}%
Reise, S\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2012}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Rediscovery of Bifactor Measurement Models} {The
rediscovery of bifactor measurement models}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{47}{5}{667-696}.
\newblock
\APACrefnote{PMID: 24049214}
\newblock
\begin{APACrefDOI} \doi{10.1080/00273171.2012.715555} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rentfrow%
}{%
Rentfrow%
}{%
{\protect \APACyear {2014}}%
}]{%
rentfrow:14}
\APACinsertmetastar {%
rentfrow:14}%
\begin{APACrefauthors}%
Rentfrow, P\BPBI J.%
\end{APACrefauthors}%
\ (\BED).
\unskip\
\newblock
\APACrefYear{2014}.
\newblock
\APACrefbtitle {Geographical Psychology: Exploring the Interaction of
Environment and Behavior} {Geographical psychology: Exploring the interaction
of environment and behavior}.
\newblock
\APACaddressPublisher{Washington, DC}{American Psychological Association}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rentfrow%
, Gosling%
\BCBL {}\ \BBA {} Potter%
}{%
Rentfrow%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2008}}%
}]{%
rentfrow:08}
\APACinsertmetastar {%
rentfrow:08}%
\begin{APACrefauthors}%
Rentfrow, P\BPBI J.%
, Gosling, S\BPBI D.%
\BCBL {}\ \BBA {} Potter, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2008}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A Theory of the Emergence, Persistence, and Expression
of Geographic Variation in Psychological Characteristics} {A theory of the
emergence, persistence, and expression of geographic variation in
psychological characteristics}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Perspectives on Psychological Science}{3}{5}{339-369}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.1745-6924.2008.00084.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rentfrow%
\ \BBA {} Jokela%
}{%
Rentfrow%
\ \BBA {} Jokela%
}{%
{\protect \APACyear {2016}}%
}]{%
rentfrow:16}
\APACinsertmetastar {%
rentfrow:16}%
\begin{APACrefauthors}%
Rentfrow, P\BPBI J.%
\BCBT {}\ \BBA {} Jokela, M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Geographical Psychology: The Spatial Organization of
Psychological Phenomena} {Geographical psychology: The spatial organization
of psychological phenomena}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Current Directions in Psychological
Science}{25}{6}{393-398}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0963721416658446} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
}{%
Revelle%
}{%
{\protect \APACyear {1986}}%
}]{%
rev:doa}
\APACinsertmetastar {%
rev:doa}%
\begin{APACrefauthors}%
Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1986}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Motivation and efficiency of cognitive performance.}
{Motivation and efficiency of cognitive performance.}{\BBCQ}
\newblock
\BIn{} D\BPBI R.~Brown\ \BBA {} J.~Veroff\ (\BEDS), \APACrefbtitle {Frontiers
of Motivational Psychology: Essays in honor of {J. W. Atkinson}} {Frontiers
of motivational psychology: Essays in honor of {J. W. Atkinson}}\
(\BPGS\ 105--131).
\newblock
\APACaddressPublisher{New York}{Springer}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
}{%
Revelle%
}{%
{\protect \APACyear {2007}}%
}]{%
rev:ea07}
\APACinsertmetastar {%
rev:ea07}%
\begin{APACrefauthors}%
Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2007}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Experimental Approaches to the Study of Personality}
{Experimental approaches to the study of personality}.{\BBCQ}
\newblock
\BIn{} R.~Robins, R\BPBI C.~Fraley\BCBL {}\ \BBA {} R\BPBI F.~Krueger\ (\BEDS),
\APACrefbtitle {Handbook of research methods in personality psychology.}
{Handbook of research methods in personality psychology.}\ (\BPGS\ 37--61).
\newblock
\APACaddressPublisher{New York}{Guilford}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
}{%
Revelle%
}{%
{\protect \APACyear {2018}}%
}]{%
psych}
\APACinsertmetastar {%
psych}%
\begin{APACrefauthors}%
Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{April}{}.
\newblock
{\BBOQ}\APACrefatitle {psych: Procedures for Personality and Psychological
Research} {psych: Procedures for personality and psychological
research}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\APACaddressPublisher{Evanston, Illinois}{Northwestern University}.
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=psych}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 1.8.4}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \protect \BOthers {.}}{%
Revelle%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2016}}%
}]{%
rcwfbe}
\APACinsertmetastar {%
rcwfbe}%
\begin{APACrefauthors}%
Revelle, W.%
, Condon, D.%
, Wilt, J.%
, French, J\BPBI A.%
, Brown, A\BPBI D.%
\BCBL {}\ \BBA {} Elleman, L\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Web and phone based data collection using planned
missing designs} {Web and phone based data collection using planned missing
designs}.{\BBCQ}
\newblock
\BIn{} N\BPBI G.~Fielding, R\BPBI M.~Lee\BCBL {}\ \BBA {} G.~Blank\ (\BEDS),
\APACrefbtitle {The Sage Handbook of Online Research Methods} {The sage
handbook of online research methods}\ (\PrintOrdinal{2nd}\ \BEd, \BPGS\
578--595).
\newblock
\APACaddressPublisher{}{SAGE Publications}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Condon%
}{%
Revelle%
\ \BBA {} Condon%
}{%
{\protect \APACyear {2015}}%
}]{%
rc:jrp:15}
\APACinsertmetastar {%
rc:jrp:15}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Condon, D\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2015}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A model for personality at three levels} {A model for
personality at three levels}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Research in Personality}{56}{}{70-81}.
\newblock
\begin{APACrefDOI} \doi{10.1016/j.jrp.2014.12.006} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Condon%
}{%
Revelle%
\ \BBA {} Condon%
}{%
{\protect \APACyear {2018}}%
{\protect \APACexlab {{\protect \BCnt {1}}}}}]{%
rc:reliability}
\APACinsertmetastar {%
rc:reliability}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Condon, D\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018{\protect \BCnt {1}}}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Reliability} {Reliability}.{\BBCQ}
\newblock
\BIn{} P.~Irwing, T.~Booth\BCBL {}\ \BBA {} D\BPBI J.~Hughes\ (\BEDS),
\APACrefbtitle {The {Wiley Handbook of Psychometric Testing:} A
Multidisciplinary Reference on Survey, Scale and Test Development.} {The
{Wiley Handbook of Psychometric Testing:} a multidisciplinary reference on
survey, scale and test development.}
\newblock
\APACaddressPublisher{London}{John Wiley \& Sons}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Condon%
}{%
Revelle%
\ \BBA {} Condon%
}{%
{\protect \APACyear {2018}}%
{\protect \APACexlab {{\protect \BCnt {2}}}}}]{%
rc:pa:18}
\APACinsertmetastar {%
rc:pa:18}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Condon, D\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018{\protect \BCnt {2}}}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Reliability from $\alpha$ to $\omega$: A Tutorial}
{Reliability from $\alpha$ to $\omega$: A tutorial}.{\BBCQ}
\newblock
\APACjournalVolNumPages{(under review)}{}{}{}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Rocklin%
}{%
Revelle%
\ \BBA {} Rocklin%
}{%
{\protect \APACyear {1979}}%
}]{%
revelle:vss}
\APACinsertmetastar {%
revelle:vss}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Rocklin, T.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1979}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{Very Simple Structure} - Alternative Procedure for
Estimating the Optimal Number of Interpretable Factors} {{Very Simple
Structure} - alternative procedure for estimating the optimal number of
interpretable factors}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{14}{4}{403-414}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15327906mbr1404\_2} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Wilt%
}{%
Revelle%
\ \BBA {} Wilt%
}{%
{\protect \APACyear {2016}}%
}]{%
rw:mbr:16}
\APACinsertmetastar {%
rw:mbr:16}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Wilt, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The data box and within subject analyses: A comment on
{Nesselroade and Molenaar}} {The data box and within subject analyses: A
comment on {Nesselroade and Molenaar}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{51}{2-3}{419-421}.
\newblock
\begin{APACrefDOI} \doi{10.1080/00273171.2015.1086955} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Wilt%
}{%
Revelle%
\ \BBA {} Wilt%
}{%
{\protect \APACyear {2017}}%
}]{%
rw:paid:17}
\APACinsertmetastar {%
rw:paid:17}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Wilt, J\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Analyzing dynamic data: a tutorial} {Analyzing dynamic
data: a tutorial}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality and Individual Differences}{}{}{}.
\newblock
\begin{APACrefDOI} \doi{10.1016/j.paid.2017.08.020} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Revelle%
\ \BBA {} Zinbarg%
}{%
Revelle%
\ \BBA {} Zinbarg%
}{%
{\protect \APACyear {2009}}%
}]{%
rz:09}
\APACinsertmetastar {%
rz:09}%
\begin{APACrefauthors}%
Revelle, W.%
\BCBT {}\ \BBA {} Zinbarg, R\BPBI E.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2009}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Coefficients alpha, beta, omega and the glb: comments on
{Sijtsma}} {Coefficients alpha, beta, omega and the glb: comments on
{Sijtsma}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{74}{1}{145-154}.
\newblock
\begin{APACrefDOI} \doi{10.1007/s11336-008-9102-z} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Ridgeway%
\ \protect \BOthers {.}}{%
Ridgeway%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2017}}%
}]{%
gbm}
\APACinsertmetastar {%
gbm}%
\begin{APACrefauthors}%
Ridgeway, G.%
\BCBT {}\ \BOthersPeriod {.}
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {gbm: Generalized Boosted Regression Models} {gbm:
Generalized boosted regression models}{\BBCQ}\ [\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=gbm}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 2.1.3}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rindskopf%
\ \BBA {} Rose%
}{%
Rindskopf%
\ \BBA {} Rose%
}{%
{\protect \APACyear {1988}}%
}]{%
rindskopf:88}
\APACinsertmetastar {%
rindskopf:88}%
\begin{APACrefauthors}%
Rindskopf, D.%
\BCBT {}\ \BBA {} Rose, T.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1988}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Some Theory and Applications of Confirmatory
Second-Order Factor Analysis} {Some theory and applications of confirmatory
second-order factor analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Multivariate Behavioral Research}{23}{1}{51-67}.
\newblock
\begin{APACrefDOI} \doi{10.1207/s15327906mbr2301_3} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Robinson%
}{%
Robinson%
}{%
{\protect \APACyear {1950}}%
}]{%
robinson:50}
\APACinsertmetastar {%
robinson:50}%
\begin{APACrefauthors}%
Robinson, W\BPBI S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1950}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Ecological Correlations and the Behavior of Individuals}
{Ecological correlations and the behavior of individuals}.{\BBCQ}
\newblock
\APACjournalVolNumPages{American Sociological Review}{15}{3}{351--357}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rocklin%
\ \BBA {} Revelle%
}{%
Rocklin%
\ \BBA {} Revelle%
}{%
{\protect \APACyear {1981}}%
}]{%
rocklin:81}
\APACinsertmetastar {%
rocklin:81}%
\begin{APACrefauthors}%
Rocklin, T.%
\BCBT {}\ \BBA {} Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1981}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The measurement of extraversion: A comparison of the
{Eysenck Personality Inventory} and the {Eysenck Personality Questionnaire}}
{The measurement of extraversion: A comparison of the {Eysenck Personality
Inventory} and the {Eysenck Personality Questionnaire}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{British Journal of Social Psychology}{20}{4}{279-284}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.2044-8309.1981.tb00498.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rodgers%
}{%
Rodgers%
}{%
{\protect \APACyear {2010}}%
}]{%
rodgers:10}
\APACinsertmetastar {%
rodgers:10}%
\begin{APACrefauthors}%
Rodgers, J\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2010}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The epistemology of mathematical and statistical
modeling: A quiet methodological revolution.} {The epistemology of
mathematical and statistical modeling: A quiet methodological
revolution.}{\BBCQ}
\newblock
\APACjournalVolNumPages{American Psychologist}{65}{1}{1-12}.
\newblock
\begin{APACrefDOI} \doi{10.1037/a0018326} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rosenthal%
}{%
Rosenthal%
}{%
{\protect \APACyear {1994}}%
}]{%
rosenthal:94}
\APACinsertmetastar {%
rosenthal:94}%
\begin{APACrefauthors}%
Rosenthal, R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1994}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Parametric measures of effect size} {Parametric measures
of effect size}.{\BBCQ}
\newblock
\BIn{} H.~Cooper\ \BBA {} L\BPBI V.~Hedges\ (\BEDS), \APACrefbtitle {The
handbook of research synthesis} {The handbook of research synthesis}\ (\BPGS\
231--244).
\newblock
\APACaddressPublisher{New York}{Russell Sage Foundation}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rosenthal%
\ \BBA {} Rubin%
}{%
Rosenthal%
\ \BBA {} Rubin%
}{%
{\protect \APACyear {1982}}%
}]{%
rosenthal:rubin:besd}
\APACinsertmetastar {%
rosenthal:rubin:besd}%
\begin{APACrefauthors}%
Rosenthal, R.%
\BCBT {}\ \BBA {} Rubin, D\BPBI B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1982}{}{}.
\newblock
{\BBOQ}\APACrefatitle {A simple, general purpose display of magnitude of
experimental effect.} {A simple, general purpose display of magnitude of
experimental effect.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Educational Psychology}{74}{2}{166-169}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rosnow%
\ \BBA {} Rosenthal%
}{%
Rosnow%
\ \BBA {} Rosenthal%
}{%
{\protect \APACyear {2003}}%
}]{%
rosnow:03}
\APACinsertmetastar {%
rosnow:03}%
\begin{APACrefauthors}%
Rosnow, R\BPBI L.%
\BCBT {}\ \BBA {} Rosenthal, R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2003}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Effect sizes for experimenting psychologists} {Effect
sizes for experimenting psychologists}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Canadian Journal of Experimental Psychology/Revue
canadienne de psychologie exp{\'e}rimentale}{57}{3}{221-237}.
\newblock
\begin{APACrefDOI} \doi{10.1037/h0087427} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Rosseel%
}{%
Rosseel%
}{%
{\protect \APACyear {2012}}%
}]{%
lavaan}
\APACinsertmetastar {%
lavaan}%
\begin{APACrefauthors}%
Rosseel, Y.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2012}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{lavaan}: An {R} Package for Structural Equation
Modeling} {{lavaan}: An {R} package for structural equation modeling}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Statistical Software}{48}{2}{1--36}.
\newblock
\begin{APACrefDOI} \doi{10.18637/jss.v048.i02} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
semTools Contributors%
}{%
semTools Contributors%
}{%
{\protect \APACyear {2016}}%
}]{%
semTools}
\APACinsertmetastar {%
semTools}%
\begin{APACrefauthors}%
semTools Contributors.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{semTools}: Useful tools for structural equation
modeling} {{semTools}: Useful tools for structural equation modeling}{\BBCQ}\
[\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{http://cran.r-project.org/package=semTools}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 0.4-13}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Shapiro%
\ \BBA {} ten Berge%
}{%
Shapiro%
\ \BBA {} ten Berge%
}{%
{\protect \APACyear {2002}}%
}]{%
shapiro:mrfa}
\APACinsertmetastar {%
shapiro:mrfa}%
\begin{APACrefauthors}%
Shapiro, A.%
\BCBT {}\ \BBA {} ten Berge, J\BPBI M.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2002}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Statistical inference of minimum rank factor analysis}
{Statistical inference of minimum rank factor analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{67}{1}{79-94}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02294710} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Shrout%
\ \BBA {} Fleiss%
}{%
Shrout%
\ \BBA {} Fleiss%
}{%
{\protect \APACyear {1979}}%
}]{%
shrout:79}
\APACinsertmetastar {%
shrout:79}%
\begin{APACrefauthors}%
Shrout, P\BPBI E.%
\BCBT {}\ \BBA {} Fleiss, J\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1979}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Intraclass correlations: Uses in assessing rater
reliability} {Intraclass correlations: Uses in assessing rater
reliability}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Bulletin}{86}{2}{420-428}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0033-2909.86.2.420} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Shrout%
\ \BBA {} Lane%
}{%
Shrout%
\ \BBA {} Lane%
}{%
{\protect \APACyear {2012}}%
}]{%
shrout:12a}
\APACinsertmetastar {%
shrout:12a}%
\begin{APACrefauthors}%
Shrout, P\BPBI E.%
\BCBT {}\ \BBA {} Lane, S\BPBI P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2012}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Psychometrics} {Psychometrics}.{\BBCQ}
\newblock
\BIn{} \APACrefbtitle {Handbook of research methods for studying daily life.}
{Handbook of research methods for studying daily life.}
\newblock
\APACaddressPublisher{}{Guilford Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Simpson%
}{%
Simpson%
}{%
{\protect \APACyear {1951}}%
}]{%
simpson:1951}
\APACinsertmetastar {%
simpson:1951}%
\begin{APACrefauthors}%
Simpson, E\BPBI H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1951}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Interpretation of Interaction in Contingency Tables}
{The interpretation of interaction in contingency tables}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of the Royal Statistical Society. Series B
(Methodological)}{13}{2}{238--241}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Spearman%
}{%
Spearman%
}{%
{\protect \APACyear {1904}}%
{\protect \APACexlab {{\protect \BCnt {1}}}}}]{%
spearman:04}
\APACinsertmetastar {%
spearman:04}%
\begin{APACrefauthors}%
Spearman, C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1904{\protect \BCnt {1}}}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{``General Intelligence,"} Objectively determined and
measured} {{``General Intelligence,"} objectively determined and
measured}.{\BBCQ}
\newblock
\APACjournalVolNumPages{American Journal of Psychology}{15}{2}{201-292}.
\newblock
\begin{APACrefDOI} \doi{10.2307/1412107} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Spearman%
}{%
Spearman%
}{%
{\protect \APACyear {1904}}%
{\protect \APACexlab {{\protect \BCnt {2}}}}}]{%
spearman:rho}
\APACinsertmetastar {%
spearman:rho}%
\begin{APACrefauthors}%
Spearman, C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1904{\protect \BCnt {2}}}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Proof and Measurement of Association between Two
Things} {The proof and measurement of association between two things}.{\BBCQ}
\newblock
\APACjournalVolNumPages{The American Journal of Psychology}{15}{1}{72-101}.
\newblock
\begin{APACrefDOI} \doi{10.2307/1412159} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Spearman%
}{%
Spearman%
}{%
{\protect \APACyear {1910}}%
}]{%
spearman:10}
\APACinsertmetastar {%
spearman:10}%
\begin{APACrefauthors}%
Spearman, C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1910}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Correlation calculated from faulty data} {Correlation
calculated from faulty data}.{\BBCQ}
\newblock
\APACjournalVolNumPages{British Journal of Psychology}{3}{3}{271-295}.
\newblock
\begin{APACrefDOI} \doi{10.1111/j.2044-8295.1910.tb00206.x} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Spearman%
}{%
Spearman%
}{%
{\protect \APACyear {1927}}%
}]{%
spearman:27}
\APACinsertmetastar {%
spearman:27}%
\begin{APACrefauthors}%
Spearman, C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1927}.
\newblock
\APACrefbtitle {The abilities of man} {The abilities of man}.
\newblock
\APACaddressPublisher{Oxford England}{Macmillan}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
SPSS%
}{%
SPSS%
}{%
{\protect \APACyear {2008}}%
}]{%
SPSS}
\APACinsertmetastar {%
SPSS}%
\begin{APACrefauthors}%
SPSS.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2008}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Version 17.0} {Version 17.0}{\BBCQ}\
[\bibcomputersoftwaremanual].
\newblock
\APACaddressPublisher{Chicago}{SPSS Inc.}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Streiner%
}{%
Streiner%
}{%
{\protect \APACyear {2003}}%
}]{%
streiner:03}
\APACinsertmetastar {%
streiner:03}%
\begin{APACrefauthors}%
Streiner, D\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2003}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Unicorns Do Exist: A Tutorial on ``Proving'' the Null
Hypothesis} {Unicorns do exist: A tutorial on ``proving'' the null
hypothesis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{The Canadian Journal of Psychiatry}{48}{11}{756-761}.
\newblock
\APACrefnote{PMID: 14733457}
\newblock
\begin{APACrefDOI} \doi{10.1177/070674370304801108} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Strong%
}{%
Strong%
}{%
{\protect \APACyear {1927}}%
}]{%
strong:27}
\APACinsertmetastar {%
strong:27}%
\begin{APACrefauthors}%
Strong, E\BPBI K.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1927}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{Vocational interest test}} {{Vocational interest
test}}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Educational Record}{8}{2}{107-121}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Student%
}{%
Student%
}{%
{\protect \APACyear {1908}}%
}]{%
student:t}
\APACinsertmetastar {%
student:t}%
\begin{APACrefauthors}%
Student.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1908}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The probable error of a mean} {The probable error of a
mean}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Biometrika}{6}{1}{1-25}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Tal-Or%
, Cohen%
, Tsfati%
\BCBL {}\ \BBA {} Gunther%
}{%
Tal-Or%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2010}}%
}]{%
talor:10}
\APACinsertmetastar {%
talor:10}%
\begin{APACrefauthors}%
Tal-Or, N.%
, Cohen, J.%
, Tsfati, Y.%
\BCBL {}\ \BBA {} Gunther, A\BPBI C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2010}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Testing Causal Direction in the Influence of Presumed
Media Influence} {Testing causal direction in the influence of presumed media
influence}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Communication Research}{37}{6}{801-824}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0093650210362684} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Tarka%
}{%
Tarka%
}{%
{\protect \APACyear {2018}}%
}]{%
tarka:18}
\APACinsertmetastar {%
tarka:18}%
\begin{APACrefauthors}%
Tarka, P.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{01}.
\newblock
{\BBOQ}\APACrefatitle {An overview of structural equation modeling: its
beginnings, historical development, usefulness and controversies in the
social sciences} {An overview of structural equation modeling: its
beginnings, historical development, usefulness and controversies in the
social sciences}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Quality {\&} Quantity}{52}{1}{313--354}.
\newblock
\begin{APACrefDOI} \doi{10.1007/s11135-017-0469-8} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Taylor%
\ \BBA {} Russell%
}{%
Taylor%
\ \BBA {} Russell%
}{%
{\protect \APACyear {1939}}%
}]{%
taylor:russell}
\APACinsertmetastar {%
taylor:russell}%
\begin{APACrefauthors}%
Taylor, H\BPBI C.%
\BCBT {}\ \BBA {} Russell, J\BPBI T.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1939}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The relationship of validity coefficients to the
practical effectiveness of tests in selection: discussion and tables.} {The
relationship of validity coefficients to the practical effectiveness of tests
in selection: discussion and tables.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Applied Psychology}{23}{5}{565-578}.
\newblock
\begin{APACrefDOI} \doi{10.1037/h0057079} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Therneau%
\ \BBA {} Atkinson%
}{%
Therneau%
\ \BBA {} Atkinson%
}{%
{\protect \APACyear {2018}}%
}]{%
rpart}
\APACinsertmetastar {%
rpart}%
\begin{APACrefauthors}%
Therneau, T.%
\BCBT {}\ \BBA {} Atkinson, B.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2018}{}{}.
\newblock
{\BBOQ}\APACrefatitle {rpart: Recursive Partitioning and Regression Trees}
{rpart: Recursive partitioning and regression trees}{\BBCQ}\
[\bibcomputersoftwaremanual].
\newblock
\begin{APACrefURL} \url{https://CRAN.R-project.org/package=rpart}
\end{APACrefURL}
\newblock
\APACrefnote{R package version 4.1-13}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Thurstone%
}{%
Thurstone%
}{%
{\protect \APACyear {1933}}%
}]{%
thurstone:33}
\APACinsertmetastar {%
thurstone:33}%
\begin{APACrefauthors}%
Thurstone, L\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1933}.
\newblock
\APACrefbtitle {The theory of multiple factors} {The theory of multiple
factors}.
\newblock
\APACaddressPublisher{Ann Arbor, Michigan}{Edwards Brothers}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Thurstone%
}{%
Thurstone%
}{%
{\protect \APACyear {1934}}%
}]{%
thurstone:34}
\APACinsertmetastar {%
thurstone:34}%
\begin{APACrefauthors}%
Thurstone, L\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1934}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The vectors of mind.} {The vectors of mind.}{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Review}{41}{1}{1}.
\newblock
\begin{APACrefDOI} \doi{10.1037/h0075959} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Thurstone%
}{%
Thurstone%
}{%
{\protect \APACyear {1935}}%
}]{%
thurstone:35}
\APACinsertmetastar {%
thurstone:35}%
\begin{APACrefauthors}%
Thurstone, L\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1935}.
\newblock
\APACrefbtitle {The vectors of mind: multiple-factor analysis for the isolation
of primary traits} {The vectors of mind: multiple-factor analysis for the
isolation of primary traits}.
\newblock
\APACaddressPublisher{Chicago}{University of Chicago Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Thurstone%
}{%
Thurstone%
}{%
{\protect \APACyear {1947}}%
}]{%
thurstone:47}
\APACinsertmetastar {%
thurstone:47}%
\begin{APACrefauthors}%
Thurstone, L\BPBI L.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1947}.
\newblock
\APACrefbtitle {Multiple-factor analysis: a development and expansion of The
vectors of the mind} {Multiple-factor analysis: a development and expansion
of the vectors of the mind}.
\newblock
\APACaddressPublisher{Chicago, Ill.}{The University of Chicago Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Thurstone%
\ \BBA {} Thurstone%
}{%
Thurstone%
\ \BBA {} Thurstone%
}{%
{\protect \APACyear {1941}}%
}]{%
thurstone:41}
\APACinsertmetastar {%
thurstone:41}%
\begin{APACrefauthors}%
Thurstone, L\BPBI L.%
\BCBT {}\ \BBA {} Thurstone, T\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1941}.
\newblock
\APACrefbtitle {Factorial studies of intelligence} {Factorial studies of
intelligence}.
\newblock
\APACaddressPublisher{Chicago, Ill.}{The University of Chicago Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Tingley%
, Yamamoto%
, Hirose%
, Keele%
\BCBL {}\ \BBA {} Imai%
}{%
Tingley%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2014}}%
}]{%
mediation}
\APACinsertmetastar {%
mediation}%
\begin{APACrefauthors}%
Tingley, D.%
, Yamamoto, T.%
, Hirose, K.%
, Keele, L.%
\BCBL {}\ \BBA {} Imai, K.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2014}{}{}.
\newblock
{\BBOQ}\APACrefatitle {{mediation}: {R} Package for Causal Mediation Analysis}
{{mediation}: {R} package for causal mediation analysis}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Statistical Software}{59}{5}{1--38}.
\newblock
\begin{APACrefURL} \url{http://www.jstatsoft.org/v59/i05/} \end{APACrefURL}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Tukey%
}{%
Tukey%
}{%
{\protect \APACyear {1958}}%
}]{%
tukey:58}
\APACinsertmetastar {%
tukey:58}%
\begin{APACrefauthors}%
Tukey, J\BPBI W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1958}{06}{}.
\newblock
{\BBOQ}\APACrefatitle {Bias and confidence in Not-quite Large Samples
(preliminary report) (Abstract)} {Bias and confidence in not-quite large
samples (preliminary report) (abstract)}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Annals of Mathematical Statistics}{29}{2}{614}.
\newblock
\begin{APACrefDOI} \doi{10.1214/aoms/1177706647} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Velicer%
}{%
Velicer%
}{%
{\protect \APACyear {1976}}%
}]{%
velicer:76}
\APACinsertmetastar {%
velicer:76}%
\begin{APACrefauthors}%
Velicer, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1976}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Determining the number of components from the matrix of
partial correlations} {Determining the number of components from the matrix
of partial correlations}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{41}{3}{321--327}.
\newblock
\begin{APACrefDOI} \doi{10.1007/BF02293557} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wainer%
}{%
Wainer%
}{%
{\protect \APACyear {1976}}%
}]{%
wainer:76}
\APACinsertmetastar {%
wainer:76}%
\begin{APACrefauthors}%
Wainer, H.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1976}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Estimating coefficients in linear models: It don't make
no nevermind} {Estimating coefficients in linear models: It don't make no
nevermind}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Bulletin}{83}{2}{213-217}.
\newblock
\begin{APACrefDOI} \doi{10.1037/0033-2909.83.2.213} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Waller%
}{%
Waller%
}{%
{\protect \APACyear {2008}}%
}]{%
waller:08}
\APACinsertmetastar {%
waller:08}%
\begin{APACrefauthors}%
Waller, N\BPBI G.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2008}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Fungible Weights in Multiple Regression} {Fungible
weights in multiple regression}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{73}{4}{691-703}.
\newblock
\begin{APACrefDOI} \doi{10.1007/s11336-008-9066-z} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Waller%
\ \BBA {} Jones%
}{%
Waller%
\ \BBA {} Jones%
}{%
{\protect \APACyear {2010}}%
}]{%
waller:10}
\APACinsertmetastar {%
waller:10}%
\begin{APACrefauthors}%
Waller, N\BPBI G.%
\BCBT {}\ \BBA {} Jones, J\BPBI A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2010}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Correlation weights in multiple regression} {Correlation
weights in multiple regression}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{75}{1}{58-69}.
\newblock
\begin{APACrefDOI} \doi{10.1007/s11336-009-9127-Y} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wiggins%
}{%
Wiggins%
}{%
{\protect \APACyear {1973}}%
}]{%
wiggins:73}
\APACinsertmetastar {%
wiggins:73}%
\begin{APACrefauthors}%
Wiggins, J\BPBI S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1973}.
\newblock
\APACrefbtitle {Personality and prediction: principles of personality
assessment} {Personality and prediction: principles of personality
assessment}.
\newblock
\APACaddressPublisher{Reading, Mass.}{Addison-Wesley Pub. Co}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilcox%
}{%
Wilcox%
}{%
{\protect \APACyear {2001}}%
}]{%
wilcox:01}
\APACinsertmetastar {%
wilcox:01}%
\begin{APACrefauthors}%
Wilcox, R\BPBI R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2001}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Modern insights about {Pearson's} correlation and least
squares regression} {Modern insights about {Pearson's} correlation and least
squares regression}.{\BBCQ}
\newblock
\APACjournalVolNumPages{International Journal of Selection and
Assessment}{9}{1-2}{195-205}.
\newblock
\begin{APACrefDOI} \doi{10.1111/1468-2389.00172} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilcox%
}{%
Wilcox%
}{%
{\protect \APACyear {2005}}%
}]{%
wilcox:05}
\APACinsertmetastar {%
wilcox:05}%
\begin{APACrefauthors}%
Wilcox, R\BPBI R.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{2005}.
\newblock
\APACrefbtitle {Introduction to robust estimation and hypothesis testing}
{Introduction to robust estimation and hypothesis testing}\
(\PrintOrdinal{2nd}\ \BEd).
\newblock
\APACaddressPublisher{Amsterdam; Boston}{Elsevier/Academic Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilcox%
\ \BBA {} Keselman%
}{%
Wilcox%
\ \BBA {} Keselman%
}{%
{\protect \APACyear {2003}}%
}]{%
wilcox:03}
\APACinsertmetastar {%
wilcox:03}%
\begin{APACrefauthors}%
Wilcox, R\BPBI R.%
\BCBT {}\ \BBA {} Keselman, H\BPBI J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2003}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Modern Robust Data Analysis Methods: Measures of Central
Tendency} {Modern robust data analysis methods: Measures of central
tendency}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychological Methods}{8}{3}{254-274}.
\newblock
\begin{APACrefDOI} \doi{10.1037/1082-989X.8.3.254} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wiley%
}{%
Wiley%
}{%
{\protect \APACyear {1973}}%
}]{%
wiley:73}
\APACinsertmetastar {%
wiley:73}%
\begin{APACrefauthors}%
Wiley, D\BPBI E.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1973}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Structural equation models in the social sciences}
{Structural equation models in the social sciences}.{\BBCQ}
\newblock
\BIn{} A\BPBI S.~Goldberger\ \BBA {} O\BPBI D.~Duncan\ (\BEDS), \APACrefbtitle
{Structural equation models in the social sciences} {Structural equation models
in the social sciences}\ (\BPG~69-83).
\newblock
\APACaddressPublisher{New York}{Seminar Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilt%
, Bleidorn%
\BCBL {}\ \BBA {} Revelle%
}{%
Wilt%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2016}}%
}]{%
wbr:ejp:16}
\APACinsertmetastar {%
wbr:ejp:16}%
\begin{APACrefauthors}%
Wilt, J.%
, Bleidorn, W.%
\BCBL {}\ \BBA {} Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2016}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Finding a Life Worth Living: Meaning in Life and
Graduation from College} {Finding a life worth living: Meaning in life and
graduation from college}.{\BBCQ}
\newblock
\APACjournalVolNumPages{European Journal of Personality}{30}{}{158-167}.
\newblock
\begin{APACrefDOI} \doi{10.1002/per.2046} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilt%
, Bleidorn%
\BCBL {}\ \BBA {} Revelle%
}{%
Wilt%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2017}}%
}]{%
wbr:jrp:17}
\APACinsertmetastar {%
wbr:jrp:17}%
\begin{APACrefauthors}%
Wilt, J.%
, Bleidorn, W.%
\BCBL {}\ \BBA {} Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Velocity Explains the Links between Personality States
and Affect} {Velocity explains the links between personality states and
affect}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Research in Personality}{69}{}{86-95}.
\newblock
\begin{APACrefDOI} \doi{10.1016/j.jrp.2016.06.008} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilt%
, Funkhouser%
\BCBL {}\ \BBA {} Revelle%
}{%
Wilt%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2011}}%
}]{%
wfr:11}
\APACinsertmetastar {%
wfr:11}%
\begin{APACrefauthors}%
Wilt, J.%
, Funkhouser, K.%
\BCBL {}\ \BBA {} Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2011}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Dynamic Relationships of Affective Synchrony to
Perceptions of Situations} {The dynamic relationships of affective synchrony
to perceptions of situations}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Research in Personality}{45}{}{309--321}.
\newblock
\begin{APACrefDOI} \doi{10.1016/j.jrp.2011.03.005} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wilt%
\ \BBA {} Revelle%
}{%
Wilt%
\ \BBA {} Revelle%
}{%
{\protect \APACyear {2017}}%
}]{%
wr:paid:17}
\APACinsertmetastar {%
wr:paid:17}%
\begin{APACrefauthors}%
Wilt, J.%
\BCBT {}\ \BBA {} Revelle, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Big Five, Situational Context, and Affective
Experience} {The big five, situational context, and affective
experience}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality and Individual Differences}{}{}{}.
\newblock
\begin{APACrefDOI} \doi{10.1016/j.paid.2017.12.032} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wothke%
}{%
Wothke%
}{%
{\protect \APACyear {1993}}%
}]{%
wothke:93}
\APACinsertmetastar {%
wothke:93}%
\begin{APACrefauthors}%
Wothke, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1993}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Nonpositive definite matrices in structural modeling}
{Nonpositive definite matrices in structural modeling}.{\BBCQ}
\newblock
\BIn{} K\BPBI A.~Bollen\ \BBA {} J\BPBI S.~Long\ (\BEDS), \APACrefbtitle
{Testing structural equation models} {Testing structural equation models}\
(\BPG~256-293).
\newblock
\APACaddressPublisher{Newbury Park}{Sage Publications}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wright%
}{%
Wright%
}{%
{\protect \APACyear {1920}}%
}]{%
wright:20}
\APACinsertmetastar {%
wright:20}%
\begin{APACrefauthors}%
Wright, S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1920}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The relative importance of heredity and environment in
determining the piebald pattern of guinea-pigs} {The relative importance of
heredity and environment in determining the piebald pattern of
guinea-pigs}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Proceedings of the National Academy of
Sciences}{6}{6}{320--332}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Wright%
}{%
Wright%
}{%
{\protect \APACyear {1921}}%
}]{%
wright:21}
\APACinsertmetastar {%
wright:21}%
\begin{APACrefauthors}%
Wright, S.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1921}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Correlation and causation} {Correlation and
causation}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of Agricultural Research}{20}{3}{557-585}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Yang%
\ \protect \BOthers {.}}{%
Yang%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2014}}%
}]{%
yang:ingredients:14}
\APACinsertmetastar {%
yang:ingredients:14}%
\begin{APACrefauthors}%
Yang, Y.%
, Read, S\BPBI J.%
, Denson, T\BPBI F.%
, Xu, Y.%
, Zhang, J.%
\BCBL {}\ \BBA {} Pedersen, W\BPBI C.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2014}{}{}.
\newblock
{\BBOQ}\APACrefatitle {The Key Ingredients of Personality Traits: Situations,
Behaviors, and Explanations} {The key ingredients of personality traits:
Situations, behaviors, and explanations}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Personality and Social Psychology
Bulletin}{40}{1}{79-91}.
\newblock
\begin{APACrefDOI} \doi{10.1177/0146167213505871} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Yarkoni%
\ \BBA {} Westfall%
}{%
Yarkoni%
\ \BBA {} Westfall%
}{%
{\protect \APACyear {2017}}%
}]{%
yarkoni:17}
\APACinsertmetastar {%
yarkoni:17}%
\begin{APACrefauthors}%
Yarkoni, T.%
\BCBT {}\ \BBA {} Westfall, J.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2017}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Choosing Prediction Over Explanation in Psychology:
Lessons From Machine Learning} {Choosing prediction over explanation in
psychology: Lessons from machine learning}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Perspectives on Psychological
Science}{12}{6}{1100-1122}.
\newblock
\begin{APACrefDOI} \doi{10.1177/1745691617693393} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Yates%
}{%
Yates%
}{%
{\protect \APACyear {1988}}%
}]{%
yates:88}
\APACinsertmetastar {%
yates:88}%
\begin{APACrefauthors}%
Yates, A.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYear{1988}.
\newblock
\APACrefbtitle {Multivariate exploratory data analysis: A perspective on
exploratory factor analysis} {Multivariate exploratory data analysis: A
perspective on exploratory factor analysis}.
\newblock
\APACaddressPublisher{}{SUNY Press}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Yule%
}{%
Yule%
}{%
{\protect \APACyear {1903}}%
}]{%
yule:1903}
\APACinsertmetastar {%
yule:1903}%
\begin{APACrefauthors}%
Yule, G\BPBI U.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1903}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Notes on the Theory of Association of Attributes in
Statistics} {Notes on the theory of association of attributes in
statistics}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Biometrika}{2}{2}{121--134}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Yule%
}{%
Yule%
}{%
{\protect \APACyear {1912}}%
}]{%
yule:12}
\APACinsertmetastar {%
yule:12}%
\begin{APACrefauthors}%
Yule, G\BPBI U.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{1912}{}{}.
\newblock
{\BBOQ}\APACrefatitle {On the methods of measuring association between two
attributes} {On the methods of measuring association between two
attributes}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Journal of the Royal Statistical
Society}{LXXV}{}{579-652}.
\PrintBackRefs{\CurrentBib}
\bibitem [\protect \citeauthoryear {%
Zinbarg%
, Revelle%
, Yovel%
\BCBL {}\ \BBA {} Li%
}{%
Zinbarg%
\ \protect \BOthers {.}}{%
{\protect \APACyear {2005}}%
}]{%
zinbarg:pm:05}
\APACinsertmetastar {%
zinbarg:pm:05}%
\begin{APACrefauthors}%
Zinbarg, R\BPBI E.%
, Revelle, W.%
, Yovel, I.%
\BCBL {}\ \BBA {} Li, W.%
\end{APACrefauthors}%
\unskip\
\newblock
\APACrefYearMonthDay{2005}{}{}.
\newblock
{\BBOQ}\APACrefatitle {Cronbach's {$\alpha$}, {Revelle's} {$\beta$}, and
{McDonald's} {$\omega_H$}: Their relations with each other and two
alternative conceptualizations of reliability} {Cronbach's {$\alpha$},
{Revelle's} {$\beta$}, and {McDonald's} {$\omega_H$}: Their relations with
each other and two alternative conceptualizations of reliability}.{\BBCQ}
\newblock
\APACjournalVolNumPages{Psychometrika}{70}{1}{123-133}.
\newblock
\begin{APACrefDOI} \doi{10.1007/s11336-003-0974-7} \end{APACrefDOI}
\PrintBackRefs{\CurrentBib}
\end{thebibliography}
\newpage
\section{Appendix}
The \R{} code for the various examples is shown here.
Table~\ref{tab:msq} is a subset of the \pfun{msqR} data set, which is included in the \Rpkg{psych} package. Here we show the size of the entire data set (6411 rows by 79 columns), the number of subjects with repeated measures (2086), and how to form a subset of the first eight cases for both time 1 and time 2.
\begin{Rinput}
msq.items <- c("anxious" , "at.ease" , "calm" , "confident", "content",
"jittery", "nervous", "relaxed" , "tense" , "upset" ) #these overlap with the sai
dim(msqR) #show the dimensions of the data set
colnames(msqR) #what are the variables
table(msqR$time) #show the number of observations with various repeated values
example <- msqR[c(1:8,69:76),c(cs(id,time),msq.items)]
df2latex(example) #make a \LaTeX{} table of the example data
\end{Rinput}
\subsection{Descriptive statistics}
Table~\ref{tab:describe} shows descriptive statistics.
\begin{Rinput}
describe(msqR[c(1:8,69:76),c(cs(id,time),msq.items)],IQR=TRUE) #for the data in table 1
describe(msqR[c(cs(id,time),msq.items)]) #describe the entire data set
\end{Rinput}
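As a reminder of what \pfun{describe} is reporting, its standard error column is just the standard deviation scaled by the square root of the sample size:
\[ se = \frac{sd}{\sqrt{n}} . \]
For example, in the \pfun{Tal\_Or} output in the next section, pmi has $sd = 1.32$ with $n = 123$, so $se = 1.32/\sqrt{123} \approx 0.12$.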
\subsection{Correlation and regression}
Table~\ref{tab:Tal_Or} shows the correlation matrix from the \cite{talor:10} data set. Here we show several different ways to display those correlations and to test them for significance. In this and the subsequent examples, we use the standard formula notation $y \sim x$. Unfortunately, the $\sim$ symbol renders poorly in the pdf, so those who copy code directly from the pdf will need to retype the $\sim$ by hand.
\begin{Rinput}
describe(Tal_Or) #the descriptive statistics for the data.
t.test(reaction ~ cond, data=Tal_Or) # The t.test of interest
t.test(pmi ~ cond, data=Tal_Or) # Also test the effects on pmi
t.test(import ~ cond, data=Tal_Or) #and import
cor(Tal_Or) #the core-R command displays to 9 decimals
#or just show the lower diagonal of the correlations,
lowerCor(Tal_Or) #round the results to two decimals and abbreviate the names
corr.test(Tal_Or) # find the correlations, the raw p values and the adjusted p values
cor.ci(Tal_Or[1:4], n.iter=1000)
cor2latex(Tal_Or[1:4],stars=TRUE,adjust="none") #create the Table
\end{Rinput}
This produces the following output:
\begin{Routput}
> describe(Tal_Or) #the descriptive statistics for the data.
vars n mean sd median trimmed mad min max range skew kurtosis se
cond 1 123 0.47 0.50 0.00 0.46 0.00 0 1 1 0.11 -2.00 0.05
pmi 2 123 5.60 1.32 6.00 5.78 1.48 1 7 6 -1.17 1.30 0.12
import 3 123 4.20 1.74 4.00 4.26 1.48 1 7 6 -0.26 -0.89 0.16
reaction 4 123 3.48 1.55 3.25 3.44 1.85 1 7 6 0.21 -0.90 0.14
gender 5 123 1.65 0.48 2.00 1.69 0.00 1 2 1 -0.62 -1.62 0.04
age 6 123 24.63 5.80 24.00 23.76 1.48 18 61 43 4.71 24.76 0.52
> t.test(reaction ~ cond, data=Tal_Or) # The t.test of interest
Welch Two Sample t-test
data: reaction by cond
t = -1.7964, df = 120.98, p-value = 0.07492
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-1.04196792 0.05058861
sample estimates:
mean in group 0 mean in group 1
3.25000 3.74569
... output omitted
cor(Tal_Or) #the core-r command
cond pmi import reaction gender age
cond 1.00000000 0.180773560 0.18091083 0.16026292 -0.12717905 0.025245417
pmi 0.18077356 1.000000000 0.28207107 0.44649392 -0.02112095 -0.004947199
import 0.18091083 0.282071074 1.00000000 0.46477681 0.02700985 0.073431563
reaction 0.16026292 0.446493916 0.46477681 1.00000000 0.01436459 -0.083728952
gender -0.12717905 -0.021120953 0.02700985 0.01436459 1.00000000 -0.318450715
age 0.02524542 -0.004947199 0.07343156 -0.08372895 -0.31845072 1.000000000
lowerCor(Tal_Or) #round the results to two decimals and abbreviate the names
cond pmi imprt rectn gendr age
cond 1.00
pmi 0.18 1.00
import 0.18 0.28 1.00
reaction 0.16 0.45 0.46 1.00
gender -0.13 -0.02 0.03 0.01 1.00
age 0.03 0.00 0.07 -0.08 -0.32 1.00
>
> corr.test(Tal_Or) # find the correlations, the raw p values and the adjusted p values
Call:corr.test(x = Tal_Or)
Correlation matrix
cond pmi import reaction gender age
cond 1.00 0.18 0.18 0.16 -0.13 0.03
pmi 0.18 1.00 0.28 0.45 -0.02 0.00
import 0.18 0.28 1.00 0.46 0.03 0.07
reaction 0.16 0.45 0.46 1.00 0.01 -0.08
gender -0.13 -0.02 0.03 0.01 1.00 -0.32
age 0.03 0.00 0.07 -0.08 -0.32 1.00
Sample Size
[1] 123
Probability values (Entries above the diagonal are adjusted for multiple tests.)
cond pmi import reaction gender age
cond 0.00 0.50 0.50 0.69 1 1
pmi 0.05 0.00 0.02 0.00 1 1
import 0.05 0.00 0.00 0.00 1 1
reaction 0.08 0.00 0.00 0.00 1 1
gender 0.16 0.82 0.77 0.87 0 0
age 0.78 0.96 0.42 0.36 0 0
To see confidence intervals of the correlations, print with the short=FALSE option
>
> cor.ci(Tal_Or[1:4], n.iter=1000)
Call:corCi(x = x, keys = keys, n.iter = n.iter, p = p, overlap = overlap,
poly = poly, method = method, plot = plot, minlength = minlength)
Coefficients and bootstrapped confidence intervals
cond pmi imprt rectn
cond 1.00
pmi 0.18 1.00
import 0.18 0.28 1.00
reaction 0.16 0.45 0.46 1.00
scale correlations and bootstrapped confidence intervals
lower.emp lower.norm estimate upper.norm upper.emp p
cond-pmi 0.02 0.02 0.18 0.35 0.34 0.03
cond-imprt 0.00 0.00 0.18 0.35 0.34 0.05
cond-rectn -0.02 -0.02 0.16 0.33 0.32 0.09
pmi-imprt 0.10 0.11 0.28 0.44 0.43 0.00
pmi-rectn 0.30 0.30 0.45 0.57 0.58 0.00
imprt-rectn 0.32 0.31 0.46 0.60 0.59 0.00
To see confidence intervals of the correlations, print with the short=FALSE option
> cor2latex(Tal_Or[1:4],stars=TRUE,adjust="none") #create the Table
.... omitted
\end{Routput}
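As a check on the probability values that \pfun{corr.test} reports, the significance of each correlation follows from the usual $t$ test with $n-2$ degrees of freedom:
\[ t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}} . \]
For the pmi--reaction correlation ($r = .45$, $n = 123$) this gives $t = .45\sqrt{121}/\sqrt{1-.45^2} \approx 5.5$ on 121 degrees of freedom, consistent with the essentially zero $p$ value shown above.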
\subsection{Mediation and Moderation \label{app:mediation}}
Mediation is just a different way of thinking about regression. It can be done using the \pfun{mediate} function.
The first example just shows the regression analysis and draws the figure; the second example adds pmi and import as mediators. Compare the two outputs. See Figure~\ref{fig:regression}.
\begin{Rinput}
reg <- mediate(reaction ~ pmi +cond + import,data=Tal_Or)
moderate.diagram(reg,main="Regression")
reg
med <- mediate(reaction ~ cond + (pmi)+ (import),data=Tal_Or)
print(med,short=FALSE)
\end{Rinput}
\begin{Routput}
> reg <- mediate(reaction ~ pmi +cond + import,data=Tal_Or)
> moderate.diagram(reg,main="Regression")
> reg
Mediation/Moderation Analysis
Call: mediate(y = reaction ~ pmi + cond + import, data = Tal_Or)
The DV (Y) was reaction . The IV (X) was pmi cond import . The mediating variable(s) = .
DV = reaction
slope se t p
pmi 0.40 0.09 4.26 4.0e-05
cond 0.10 0.24 0.43 6.7e-01
import 0.32 0.07 4.59 1.1e-05
With R2 = 0.33
R = 0.57 R2 = 0.33 F = 19.11 on 3 and 119 DF p-value: 3.5e-10
>
> med <- mediate(reaction ~ cond + (pmi)+ (import),data=Tal_Or)
> print(med,short=FALSE)
Mediation/Moderation Analysis
Call: mediate(y = reaction ~ cond + (pmi) + (import), data = Tal_Or)
The DV (Y) was reaction . The IV (X) was cond . The mediating variable(s) = pmi import .
Total effect(c) of cond on reaction = 0.5 S.E. = 0.28 t = 1.79 df= 119
with p = 0.077
Direct effect (c') of cond on reaction removing pmi import = 0.1
S.E. = 0.24 t = 0.43 df= 119 with p = 0.67
Indirect effect (ab) of cond on reaction through pmi import = 0.39
Mean bootstrapped indirect effect = 0.4 with standard error = 0.17
Lower CI = 0.09 Upper CI = 0.73
R = 0.57 R2 = 0.33 F = 19.11 on 3 and 119 DF p-value: 3.5e-10
Full output
Total effect estimates (c)
reaction se t df Prob
cond 0.5 0.28 1.79 119 0.0766
Direct effect estimates (c')
reaction se t df Prob
cond 0.10 0.24 0.43 119 6.66e-01
pmi 0.40 0.09 4.26 119 4.04e-05
import 0.32 0.07 4.59 119 1.13e-05
'a' effect estimates
cond se t df Prob
pmi 0.48 0.24 2.02 121 0.0454
import 0.63 0.31 2.02 121 0.0452
'b' effect estimates
reaction se t df Prob
pmi 0.40 0.09 4.26 119 4.04e-05
import 0.32 0.07 4.59 119 1.13e-05
'ab' effect estimates
reaction boot sd lower upper
cond 0.39 0.4 0.17 0.09 0.73
'ab' effects estimates for each mediator
pmi boot sd lower upper
cond 0.19 0.19 0.11 0.01 0.42
import boot sd lower upper
cond 0.2 0.2 0.11 0.01 0.45
\end{Routput}
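The values in this output satisfy the basic mediation identity: the total effect is the direct effect plus the sum of the indirect effects through the mediators,
\[ c = c' + \sum ab = 0.10 + (0.48 \times 0.40 + 0.63 \times 0.32) \approx 0.50 , \]
which (within rounding) reproduces the total effect of cond on reaction reported above.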
To show moderation, we use the \cite{garcia:10} data set. We use the \pfun{scale} and \pfun{lm} functions from core \R{} to do the regressions, comparing the mean-centered and non-mean-centered results. Then we use \pfun{setCor} to combine these two steps. We also demonstrate how to create the interaction plot of Figure~\ref{fig:regression}.
\begin{Rinput}
#First do the regular linear model
mod1 <- lm(respappr ~ prot2 * sexism ,data=Garcia) #do not mean center
centered <- scale(Garcia,scale=FALSE) #mean center, do not standardize
centered.df <- data.frame(centered) #convert to a data frame
mod.centered <- lm(respappr ~ prot2 * sexism ,data=centered.df)
summary(mod1) #the uncentered model
summary(mod.centered) #the centered model
par(mfrow=c(1,2))
#compare two models (bootstrapping n.iter set to 5000 by default)
# 1) mean center the variables prior to taking product terms
mod <- setCor(respappr ~ prot2 * sexism, data=Garcia,
     main="A: Moderated regression (std. and mean centered)")
mod
#demonstrate interaction plots
plot(respappr ~ sexism, pch = 23- protest, bg = c("black","red", "blue")[protest],
data=Garcia, main = "B: Response to sexism varies as type of protest")
by(Garcia,Garcia$protest, function(x) abline(lm(respappr ~ sexism,
data =x),lty=c("solid","dashed","dotted")[x$protest+1]))
text(6.5,3.5,"No protest")
text(3.1,3.9,"Individual")
text(3.1,5.2,"Collective")
\end{Rinput}
\begin{Routput}
> summary(mod1) #the uncentered model
Call:
lm(formula = respappr ~ prot2 * sexism, data = Garcia)
Residuals:
Min 1Q Median 3Q Max
-3.4984 -0.7540 0.0801 0.8301 3.1853
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.5667 1.2095 5.429 2.83e-07 ***
prot2 -2.6866 1.4515 -1.851 0.06654 .
sexism -0.5290 0.2359 -2.243 0.02668 *
prot2:sexism 0.8100 0.2819 2.873 0.00478 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.144 on 125 degrees of freedom
Multiple R-squared: 0.2962, Adjusted R-squared: 0.2793
F-statistic: 17.53 on 3 and 125 DF, p-value: 1.456e-09
> summary(mod.centered) #the centered model
Call:
lm(formula = respappr ~ prot2 * sexism, data = centered.df)
Residuals:
Min 1Q Median 3Q Max
-3.4984 -0.7540 0.0801 0.8301 3.1853
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.01184 0.10085 -0.117 0.90671
prot2 1.45803 0.21670 6.728 5.52e-10 ***
sexism 0.02354 0.12927 0.182 0.85579
prot2:sexism 0.80998 0.28191 2.873 0.00478 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.144 on 125 degrees of freedom
Multiple R-squared: 0.2962, Adjusted R-squared: 0.2793
F-statistic: 17.53 on 3 and 125 DF, p-value: 1.456e-09
> #compare two models (bootstrapping n.iter set to 5000 by default)
> # 1) mean center the variables prior to taking product terms
> mod <- setCor(respappr ~ prot2 * sexism, data=Garcia,
+    main="A: Moderated regression (std. and mean centered)")
> mod
Call: setCor(y = respappr ~ prot2 * sexism, data = Garcia,
main = "A: Moderated regression (std. and mean centered)")
Multiple Regression from raw data
DV = respappr
slope se t p VIF
prot2 0.51 0.08 6.73 5.5e-10 1
sexism 0.01 0.08 0.18 8.6e-01 1
prot2*sexism 0.22 0.08 2.87 4.8e-03 1
Multiple Regression
R R2 Ruw R2uw Shrunken R2 SE of R2 overall F df1 df2 p
respappr 0.54 0.3 0.42 0.18 0.28 0.06 17.53 3 125 1.46e-09
> #demonstrate interaction plots
> plot(respappr ~ sexism, pch = 23- protest, bg = c("black","red", "blue")[protest],
+ data=Garcia, main = "B: Response to sexism varies as type of protest")
> by(Garcia,Garcia$protest, function(x) abline(lm(respappr ~ sexism,
+ data =x),lty=c("solid","dashed","dotted")[x$protest+1]))
Garcia$protest: 0
NULL
------------------------------------------------------------
Garcia$protest: 1
NULL
------------------------------------------------------------
Garcia$protest: 2
NULL
> text(6.5,3.5,"No protest")
> text(3.1,3.9,"Individual")
> text(3.1,5.2,"Collective")
>
\end{Routput}
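The transcripts above show that mean centering changes the intercept and the lower-order slopes but leaves the interaction (product term) coefficient untouched (0.81 in both models). A minimal sketch of why, using simulated data (not the Garcia data set) and ordinary least squares in Python:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(5, 1, n)           # a continuous predictor (like sexism)
x2 = rng.binomial(1, 0.5, n)       # a dummy-coded predictor (like prot2)
y = 1 + 0.5 * x1 + 0.8 * x2 + 0.4 * x1 * x2 + rng.normal(0, 1, n)

def fit(x1, x2, y):
    """OLS for y ~ x1 + x2 + x1:x2 via least squares on the design matrix."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_raw = fit(x1, x2, y)                               # uncentered predictors
b_cen = fit(x1 - x1.mean(), x2 - x2.mean(), y)       # mean-centered predictors

# The lower-order slopes differ (b1_centered = b1_raw + b3 * mean(x2)),
# but the interaction coefficient is identical in the two parameterizations.
print(b_raw[3], b_cen[3])
```

Centering is a reparameterization: expanding $(x_1 - \bar{x}_1)(x_2 - \bar{x}_2)$ shows the product coefficient is unchanged while the main-effect slopes absorb terms involving the means, which is why it is the simple slopes, not the interaction test, that centering affects.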
\subsection{Decision theory and Area under the curve}
Table~\ref{tab:sdt} and Figure~\ref{fig:sdt} are examples of signal detection theory. The analysis is done by giving the four cells of the $2 \times 2$ table to the \pfun{AUC} function.
\begin{Rinput}
AUC(c(49,40,79,336))
\end{Rinput}
\begin{Routput}
Decision Theory and Area under the Curve
The original data implied the following 2 x 2 table
Predicted.Pos Predicted.Neg
True.Pos 0.097 0.079
True.Neg 0.157 0.667
Conditional probabilities of
Predicted.Pos Predicted.Neg
True.Pos 0.55 0.45
True.Neg 0.19 0.81
Accuracy = 0.76 Sensitivity = 0.55 Specificity = 0.81
with Area Under the Curve = 0.76
d.prime = 1 Criterion = 0.88 Beta = 0.15
Observed Phi correlation = 0.32
Inferred latent (tetrachoric) correlation = 0.53
>
\end{Routput}
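Most of the quantities \pfun{AUC} reports can be recovered by hand from the four cell counts (here in the order hit, miss, false alarm, correct rejection). A Python sketch of that arithmetic, using the same counts; the criterion formula shown is one common definition and matches the transcript to rounding:

```python
from math import sqrt
from statistics import NormalDist

# Cell counts in the order used above: hit, miss, false alarm, correct rejection
hit, miss, fa, cr = 49, 40, 79, 336
n = hit + miss + fa + cr

sensitivity = hit / (hit + miss)      # P(predicted positive | true positive)
specificity = cr / (fa + cr)          # P(predicted negative | true negative)
accuracy = (hit + cr) / n

z = NormalDist().inv_cdf
d_prime = z(sensitivity) - z(1 - specificity)  # separation of the two distributions
criterion = -z(1 - specificity)                # decision cut point (one definition)

# Phi correlation of the observed 2 x 2 table
phi = (hit * cr - miss * fa) / sqrt(
    (hit + miss) * (fa + cr) * (hit + fa) * (miss + cr))

print(round(sensitivity, 2), round(specificity, 2), round(accuracy, 2),
      round(d_prime, 2), round(phi, 2))
```

These reproduce the printed Sensitivity = 0.55, Specificity = 0.81, Accuracy = 0.76, d.prime = 1, and Phi = 0.32; the tetrachoric correlation requires the bivariate normal integral and is left to the \pfun{tetrachoric} function.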
\subsection{EFA}
The factor analysis of the \pfun{Thurstone} data set was done using the \pfun{fa} function. We specify that the number of subjects was 213. By default, we find a \emph{minres} solution and use the \pfun{oblimin} rotation. We also show how to specify other factor extraction techniques and other rotations. We show only the first solution.
\begin{Rinput}
fa(Thurstone,nfactors=3,n.obs=213)
fa(Thurstone,nfactors=3,n.obs=213,fm="mle") #use the maximum likelihood algorithm
fa(Thurstone,nfactors=3,n.obs=213, rotate="Varimax") #use an orthogonal rotation.
\end{Rinput}
\begin{Routput}
> fa(Thurstone,nfactors=3,n.obs=213)
Factor Analysis using method = minres
Call: fa(r = Thurstone, nfactors = 3, n.obs = 213)
Standardized loadings (pattern matrix) based upon correlation matrix
MR1 MR2 MR3 h2 u2 com
Sentences 0.90 -0.03 0.04 0.82 0.18 1.0
Vocabulary 0.89 0.06 -0.03 0.84 0.16 1.0
Sent.Completion 0.84 0.03 0.00 0.74 0.26 1.0
First.Letters 0.00 0.85 0.00 0.73 0.27 1.0
Four.Letter.Words -0.02 0.75 0.10 0.63 0.37 1.0
Suffixes 0.18 0.63 -0.08 0.50 0.50 1.2
Letter.Series 0.03 -0.01 0.84 0.73 0.27 1.0
Pedigrees 0.38 -0.05 0.46 0.51 0.49 2.0
Letter.Group -0.06 0.21 0.63 0.52 0.48 1.2
MR1 MR2 MR3
SS loadings 2.65 1.87 1.49
Proportion Var 0.29 0.21 0.17
Cumulative Var 0.29 0.50 0.67
Proportion Explained 0.44 0.31 0.25
Cumulative Proportion 0.44 0.75 1.00
With factor correlations of
MR1 MR2 MR3
MR1 1.00 0.59 0.53
MR2 0.59 1.00 0.52
MR3 0.53 0.52 1.00
Mean item complexity = 1.2
Test of the hypothesis that 3 factors are sufficient.
The degrees of freedom for the null model are 36 and the objective function was 5.2 with
Chi Square of 1081.97
The degrees of freedom for the model are 12 and the objective function was 0.01
The root mean square of the residuals (RMSR) is 0.01
The df corrected root mean square of the residuals is 0.01
The harmonic number of observations is 213 with the empirical chi square 0.52 with prob < 1
The total number of observations was 213 with Likelihood Chi Square = 2.98 with prob < 1
Tucker Lewis Index of factoring reliability = 1.026
RMSEA index = 0 and the 90 % confidence intervals are 0 0
BIC = -61.36
Fit based upon off diagonal values = 1
Measures of factor score adequacy
MR1 MR2 MR3
Correlation of (regression) scores with factors 0.96 0.92 0.90
Multiple R square of scores with factors 0.93 0.85 0.82
Minimum correlation of possible factor scores 0.86 0.71 0.63
\end{Routput}
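Because the rotation is oblique, the communalities ($h2$) are not simply the sums of squared pattern loadings; they are the diagonal of $P \Phi P'$, where $P$ is the pattern matrix and $\Phi$ the factor intercorrelation matrix. A quick Python check using the (rounded) printed loadings for Sentences and the printed $\Phi$:

```python
import numpy as np

# Rounded pattern loadings for Sentences (MR1, MR2, MR3) from the fa() output
p = np.array([0.90, -0.03, 0.04])

# Printed factor intercorrelation matrix
phi = np.array([[1.00, 0.59, 0.53],
                [0.59, 1.00, 0.52],
                [0.53, 0.52, 1.00]])

# Communality = the Sentences diagonal element of P Phi P'
h2 = p @ phi @ p
print(round(h2, 2))
```

This recovers the printed $h2$ of 0.82 within rounding; note that the naive sum of squared pattern loadings (0.81) differs slightly because the factors are correlated.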
\subsection{Reliability}
Here we find the reliability of the \pfun{msqR} items found in the first example. We select just the time 1 data. We show several different approaches. Because we have just 10 items and they represent two subfactors, we find $\omega_h$ using a two factor solution.
\begin{Rinput}
msq.items <- c("anxious" , "at.ease" , "calm" , "confident", "content",
"jittery", "nervous", "relaxed" , "tense" , "upset" ) #these overlap with the sai
msq1 <- subset(msqR,msqR$time==1)
alpha(msq1[msq.items], check.keys=TRUE)
omega(msq1[msq.items], nfactors=2)
\end{Rinput}
\begin{Routput}
> alpha(msq1[msq.items], check.keys=TRUE)
Reliability analysis
Call: alpha(x = msq1[msq.items], check.keys = TRUE)
raw_alpha std.alpha G6(smc) average_r S/N ase mean sd median_r
0.83 0.83 0.86 0.33 5 0.0046 2 0.54 0.32
lower alpha upper 95% confidence boundaries
0.82 0.83 0.84
Reliability if an item is dropped:
raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
anxious- 0.83 0.83 0.85 0.34 4.7 0.0047 0.026 0.34
at.ease 0.80 0.80 0.83 0.31 4.1 0.0055 0.028 0.32
calm 0.80 0.81 0.84 0.32 4.2 0.0054 0.030 0.32
confident 0.83 0.83 0.85 0.36 5.0 0.0046 0.022 0.32
content 0.82 0.82 0.84 0.34 4.6 0.0049 0.025 0.32
jittery- 0.83 0.83 0.85 0.35 4.8 0.0047 0.027 0.33
nervous- 0.82 0.82 0.84 0.33 4.4 0.0049 0.030 0.32
relaxed 0.80 0.81 0.84 0.31 4.1 0.0055 0.030 0.31
tense- 0.81 0.81 0.83 0.32 4.2 0.0051 0.029 0.32
upset- 0.82 0.82 0.85 0.34 4.7 0.0049 0.033 0.35
Item statistics
n raw.r std.r r.cor r.drop mean sd
anxious- 1871 0.54 0.56 0.51 0.42 2.3 0.86
at.ease 3018 0.77 0.74 0.72 0.67 1.6 0.94
calm 3020 0.74 0.71 0.68 0.63 1.6 0.92
confident 3021 0.54 0.50 0.43 0.38 1.5 0.93
content 3010 0.64 0.59 0.55 0.50 1.4 0.92
jittery- 3026 0.52 0.55 0.48 0.41 2.3 0.83
nervous- 3017 0.59 0.64 0.60 0.52 2.6 0.68
relaxed 3023 0.76 0.73 0.70 0.66 1.6 0.91
tense- 3017 0.67 0.71 0.69 0.60 2.4 0.78
upset- 3019 0.54 0.58 0.50 0.45 2.6 0.68
Non missing response frequency for each item
0 1 2 3 miss
anxious 0.53 0.29 0.13 0.04 0.38
at.ease 0.14 0.33 0.35 0.18 0.00
calm 0.14 0.34 0.36 0.17 0.00
confident 0.16 0.33 0.37 0.14 0.00
content 0.17 0.35 0.35 0.13 0.01
jittery 0.54 0.31 0.12 0.04 0.00
nervous 0.70 0.22 0.06 0.02 0.00
relaxed 0.12 0.30 0.40 0.18 0.00
tense 0.59 0.28 0.10 0.03 0.00
upset 0.74 0.18 0.05 0.02 0.00
Warning message:
In alpha(msq1[msq.items], check.keys = TRUE) :
Some items were negatively correlated with total scale and were automatically reversed.
This is indicated by a negative sign for the variable name.
> omega(msq1[msq.items], nfactors=2)
Three factors are required for identification -- general factor loadings set to be equal.
Proceed with caution.
Think about redoing the analysis with alternative values of the 'option' setting.
Omega
Call: omega(m = msq1[msq.items], nfactors = 2)
Alpha: 0.83
G.6: 0.86
Omega Hierarchical: 0.45
Omega H asymptotic: 0.51
Omega Total 0.87
Schmid Leiman Factor loadings greater than 0.2
g F1* F2* h2 u2 p2
anxious- 0.36 -0.57 0.46 0.54 0.28
at.ease 0.52 0.59 0.64 0.36 0.43
calm 0.49 0.47 -0.21 0.51 0.49 0.48
confident 0.31 0.58 0.46 0.54 0.21
content 0.40 0.65 0.59 0.41 0.26
jittery- 0.35 -0.52 0.40 0.60 0.31
nervous- 0.43 -0.57 0.51 0.49 0.36
relaxed 0.51 0.48 -0.22 0.53 0.47 0.48
tense- 0.50 -0.62 0.63 0.37 0.39
upset- 0.35 -0.29 0.25 0.75 0.50
With eigenvalues of:
g F1* F2*
1.8 1.6 1.5
general/max 1.13 max/min = 1.05
mean percent general = 0.37 with sd = 0.1 and cv of 0.28
Explained Common Variance of the general factor = 0.37
The degrees of freedom are 26 and the fit is 0.24
The number of observations was 3032 with Chi Square = 721.36 with prob < 2.4e-135
The root mean square of the residuals is 0.04
The df corrected root mean square of the residuals is 0.05
RMSEA index = 0.094 and the 10 % confidence intervals are 0.088 0.1
BIC = 512.92
Compare this with the adequacy of just a general factor and no group factors
The degrees of freedom for just the general factor are 35 and the fit is 1.67
The number of observations was 3032 with Chi Square = 5055.64 with prob < 0
The root mean square of the residuals is 0.21
The df corrected root mean square of the residuals is 0.24
RMSEA index = 0.218 and the 10 % confidence intervals are 0.213 0.223
BIC = 4775.04
Measures of factor score adequacy
g F1* F2*
Correlation of scores with factors 0.67 0.77 0.76
Multiple R square of scores with factors 0.45 0.60 0.59
Minimum correlation of factor score estimates -0.09 0.19 0.17
Total, General and Subset omega for each subset
g F1* F2*
Omega total for total scores and subscales 0.87 0.84 0.79
Omega general for total scores and subscales 0.45 0.33 0.30
Omega group for total scores and subscales 0.36 0.51 0.49
\end{Routput}
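The raw $\alpha$ reported by \pfun{alpha} is just $\frac{k}{k-1}\bigl(1 - \frac{\sum \sigma^2_i}{\sigma^2_{total}}\bigr)$, the ratio of summed item variances to total-score variance. A minimal Python sketch of that formula on a toy data matrix (not the msq items):

```python
import numpy as np

def cronbach_alpha(X):
    """Raw alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 3 roughly parallel items for 5 people
X = np.array([[1, 2, 1],
              [2, 2, 3],
              [3, 4, 3],
              [4, 4, 5],
              [5, 5, 5]], dtype=float)
print(round(cronbach_alpha(X), 2))

# Sanity check: k identical items yield alpha = 1 exactly
X_dup = np.column_stack([X[:, 0]] * 3)
print(cronbach_alpha(X_dup))
```

\pfun{alpha} adds the pieces this sketch omits: automatic reversal of negatively keyed items (the trailing minus signs above), standard errors, and the item-dropped statistics.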
%\printindex
\end{document}