Version history

The first iteration of semPower was developed as a Java program in 2015 and was ported, in slightly extended form, to R a year later. These versions provided support for a priori, post hoc, and compromise model-free power analyses based on common effect-size measures such as the RMSEA. The R package was subsequently extended to support covariance matrices as input as an alternative way to define the effect. The current, second version of semPower expands this approach by providing many convenience functions to define the effect in terms of model parameters, covering many commonly encountered model structures, and also supports simulated power estimation in addition to analytical power analyses.

Installation

The semPower package can be installed via CRAN. The latest development version is available from github at https://github.com/moshagen/semPower and can be installed as follows:

# install.packages("devtools")
devtools::install_github("moshagen/semPower")

(Very) basic functionality is also provided as a Shiny app, which can be used online at https://sempower.shinyapps.io/sempower.

1 Introduction

semPower provides a collection of functions to perform power analyses for structural equation models. Statistical power is a concept arising in the context of classical (frequentist) null-hypothesis significance testing and is defined as the probability of rejecting a hypothesis that is factually wrong. As a general rule, a hypothesis test is only meaningful to the extent that statistical power is reasonably high, because otherwise a non-significant test outcome carries little information regarding the veracity of the tested hypothesis.

For illustration, consider a simple two-factor CFA model and assume that the interest lies in detecting that the correlation between the factors differs from zero. To test the hypothesis that the factors are uncorrelated, one can compare a model that freely estimates the factor correlation with an otherwise identical model that restricts the correlation between these factors to zero. If the restricted model fits the data significantly worse, the hypothesis of a zero correlation between the factors is rejected, in turn, informing the conclusion that the correlation between the factors differs from zero.

Statistical power now gives the probability that the outcome of the model test associated with this hypothesized model turns out significant on a certain alpha-error level with a certain sample size. Suppose that the correlation between the factors is \(r = .20\) in the population, each factor is measured by 3 indicators, all (non-zero) loadings equal .5, \(\alpha = .05\), and the sample size is \(N = 125\). Then, the probability of obtaining a significant test outcome, i.e., of detecting a correlation of \(r \geq .20\), is just 20%. Stated differently, in 4 out of 5 random samples (each of size \(N = 125\)) one will not detect that the factors are correlated. Indeed, to obtain a more reasonable power of 80% in this scenario, a sample size of \(N = 783\) is required.
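These numbers can be reproduced with the convenience function semPower.powerCFA described later in this manual; the following is a minimal sketch of the corresponding post hoc and a priori analyses (the arguments are explained in the chapter on CFA models):

library(semPower)
# post hoc: power to detect r >= .20 with N = 125
powerPH <- semPower.powerCFA(type = 'post-hoc', alpha = .05, N = 125,
                             Phi = .2, nullEffect = 'cor = 0',
                             nIndicator = c(3, 3), loadM = .5)
summary(powerPH)   # power is about 20% in this scenario
# a priori: required N for a power of 80%
powerAP <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                             Phi = .2, nullEffect = 'cor = 0',
                             nIndicator = c(3, 3), loadM = .5)
summary(powerAP)   # about N = 783 in this scenario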

Correspondingly, statistical power is an integral part of planning the required sample size and of statistical hypothesis testing more generally. Of note, however, a sample that is sufficiently large to obtain a certain power is not always sufficiently large to enable model estimation. At times, the sample size required to yield a sufficiently high power might still be too small to estimate the model at hand, so that sample size considerations should also be based on factors beyond statistical power.

1.1 Types of power analyses

Generally, statistical power depends on

  • the extent to which the tested hypothesis is wrong (= the magnitude of effect)
  • the sample size (N)
  • the alpha error (alpha)
  • the degrees of freedom (df)

and will be higher for a larger effect, a larger sample size, a higher alpha error, and fewer degrees of freedom1. When performing a power analysis, one of these quantities is computed as function of the other quantities, giving rise to different types of power analyses:

  • A priori power analysis: Determines the required sample size to detect a certain effect with a desired power, given alpha and df.
  • Post-hoc power analysis: Determines the achieved power to detect a certain effect with a given sample size, alpha and df.
  • Compromise power analysis: Determines the alpha and the beta error, given the alpha/beta ratio, the sample size, a certain effect, and the df.

Each type of power analysis has different aims. A priori power analyses are performed prior to data collection to inform the required number of observations to detect an expected effect with the desired power. Post hoc power analyses are performed after data collection to judge whether the given sample size yields a power that is sufficiently high for a meaningful test of a certain hypothesis. Compromise power analyses are used to determine which decision rule to apply in evaluating whether the outcome of a hypothesis test favors the null or the alternative hypothesis (see Moshagen & Erdfelder, 2016, for details).

1.2 Types of hypotheses

Statistical power is always tied to a particular hypothesis, so power will typically differ even for the same (base-)model depending on which hypothesis is considered. A general statement such as “power was 80%” is meaningless, unless the hypothesis for which power was determined is also stated (such as “power to detect a correlation \(\geq .2\) between the first and the second factor was 80%.”). Moreover, power can be high for one hypothesis, but low for other hypotheses, so a power analysis should always be performed concerning (a) the focal hypotheses and (b) the hypothesis where the smallest effect is expected.

In SEM, the typical types of hypotheses that occur either implement equality constraints on two or more parameters (such as cross-group constraints) or assign a particular parameter a specific value (such as zero).2 For instance, consider a cross-lagged panel model (CLPM) with two constructs X and Y measured by three indicators each at three different waves (so the full model comprises six factors). Relevant (non-exhaustive) hypotheses could be

  • whether the model as a whole describes the data well.
  • whether measurement invariance over time concerning the indicators holds.
  • whether the autoregressive effects of X are constant across waves.
  • whether the autoregressive effects of X are different from zero.
  • whether the autoregressive effects of Y are constant across waves.
  • whether the autoregressive effects of Y are different from zero.
  • whether the autoregressive effects of X are equal to those of Y.
  • whether the cross-lagged effects of X on Y are constant across waves.
  • whether the cross-lagged effects of Y on X are constant across waves.
  • whether the cross-lagged effects of X on Y are different from zero.
  • whether the cross-lagged effects of Y on X are different from zero.
  • whether the cross-lagged effects of X on Y are equal to those of Y on X.
  • whether the synchronous (residual) correlations between X and Y differ from zero.
  • whether the synchronous (residual) correlations are equal across waves.

Any of these hypotheses will likely be associated with a different expectation of what reflects the relevant magnitude of effect and consequently with a different power to detect this effect. For example, concerning the hypothesis of whether an autoregressive effect differs from zero, one will usually be satisfied to obtain sufficient power to detect a large regression coefficient (of, say, \(b \geq .50\)). Concerning the cross-lagged effects, however, one rather wants sufficient power to detect a small regression coefficient (of, say, \(b \geq .10\)). Furthermore, a cross-lagged effect of X on Y of .10 will not always be associated with the same power as a cross-lagged effect of Y on X of the same magnitude, because power to detect a cross-lagged effect also depends on all other parameters of the model, such as autoregressive effects and loadings.

To give a number of examples, the following provides the required sample sizes to yield a power of 80% on \(\alpha = .05\), assuming that X and Y are measured by three indicators each at each of two waves, all loadings on X are .5 and those on Y are .7, and all autoregressive and cross-lagged effects are equal for X and Y:

  • N = 249 to detect that the model exhibits misfit corresponding to RMSEA \(\geq\) .05.
  • N = 138 to detect that the autoregressive effect of X is \(\geq\) .50
  • N = 56 to detect that the autoregressive effect of Y is \(\geq\) .50
  • N = 1836 to detect that the cross-lagged effect of X on Y is \(\geq\) .10
  • N = 2110 to detect that the cross-lagged effect of Y on X is \(\geq\) .10

Correspondingly, one should define the relevant hypothesis of interest carefully when performing a power analysis (and again note that the required \(N\) to achieve sufficient power is not necessarily sufficient to support model estimation).

1.3 Performing power analyses

Next to defining a relevant hypothesis, performing a power analysis (obviously) requires a decision on which type of power analysis to perform. The types of power analyses available in semPower are:

  • semPower.aPriori to perform an a priori power analysis (i.e., determine the required sample size).
  • semPower.postHoc to perform a post hoc power analysis (i.e., determine the achieved power).
  • semPower.compromise to perform a compromise power analysis (i.e., determine a reasonable decision rule).

Any power analysis requires specifying the magnitude of the to-be-detected effect in a certain metric. The functions stated above understand the following effect-size measures: F0, RMSEA, Mc, GFI, and AGFI. Because any of these effect measures applies equally regardless of the particular type of model considered, the functions above are also referred to as model-free power analyses. For example, the statement that power to reject a model with 100 df exhibiting misspecifications corresponding to RMSEA \(\geq .05\) on \(\alpha\) = .05 with a sample size of 250 is \(1 - \beta\) = 97% is always true, regardless of whether the model under scrutiny is a CFA model, a CLPM, a multigroup model, or any other SEM model. It is the very nature of effect sizes that they are agnostic with respect to what a particular model looks like.
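This particular claim can be checked with the model-free post hoc function described in a later chapter; a minimal sketch:

ph <- semPower.postHoc(effect = .05, effect.measure = 'RMSEA', 
                       alpha = .05, N = 250, df = 100)
summary(ph)   # power is about .97, as stated above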

However, a common problem in performing a power analysis is that it is often difficult to translate a specific hypothesis into a specific value for a specific effect size (such as a specific value for the RMSEA). Consider the situation that one is interested in determining whether two factors in a CFA model are correlated. A suitable model to test this hypothesis would constrain the correlation between these factors to zero. When this constrained model fits the data as well as the unconstrained model (freely estimating the correlation), both factors can be assumed to be orthogonal. Otherwise, one would conclude that the factors are correlated. Suppose that a correlation between these factors of \(r \geq .1\) is considered a meaningful deviation from orthogonality. In terms of power analyses, one thus wants sufficient power to identify whether the correlation between the factors is at least \(r = .1\). The misfit associated with a model assuming a correlation of 0 when, in reality, the true correlation is at least \(r = .1\) is supposed to define the magnitude of effect. The problem is now that one cannot immediately say how this difference in a certain model parameter (the correlation between the factors) translates to an effect size such as the RMSEA.

For this reason, semPower also provides various convenience functions that allow for a model-based definition of the effect of interest in terms of model parameters, as well as a more generic definition of the effect as a function of the population and model-implied means and covariance matrices. In the example above, the relevant convenience function (semPower.powerCFA) just requires the definition of the factor model and the specification of the to-be-detected correlation as input, and plugs the associated effect size (which is RMSEA = .05 in the scenario above when each factor is measured by three indicators loading by .5 each, and when df = 1) into one of the model-free power functions. Currently, semPower provides model-based power analyses for a range of commonly encountered model types, which are described in the chapter on model-based power analyses.

The remainder of this document provides some notes on the statistical background, a formal definition of decision errors in hypothesis testing, statistical power, and various effect sizes, and a detailed description of the functions contained in this package.

2 Statistical Background

This chapter provides a brief statistical background on hypothesis testing, model estimation, and effect sizes in SEM.

2.1 Hypothesis Testing and Statistical Power

The statistical evaluation of mathematical models often proceeds by considering a test statistic that expresses the discrepancy between the observed data and the data as implied by the fitted model. In SEM, the relevant test statistic for a sample of size \(N\) is given by \(T = \hat{F}(N-1)\). \(\hat{F}\) denotes the minimized sample value of the chosen discrepancy function (such as the Maximum Likelihood discrepancy function) and thereby indicates the lack of fit of the model to the sample data. Thus, \(T\) permits a likelihood-ratio test of the null hypothesis (H0) that the model is correct. If the hypothesized model holds in the population, \(T\) can be shown to follow asymptotically a central \(\chi^2\)(df) distribution with \(df = .5 \cdot p(p+1) - q\) degrees of freedom, where \(p\) is the number of manifest variables and \(q\) denotes the number of free parameters. This is why \(T\) is often referred to as the “chi-square model test statistic” – a convention which is followed here.

Based on the observed value for the chi-square test statistic, a null hypothesis significance test can be performed to evaluate whether this value is larger than what would be expected by chance alone. The usual test proceeds as follows: Given a certain specified alpha-error level (typically \(\alpha\) = .05), a critical chi-square value is obtained from the asymptotic central \(\chi^2\)(df) distribution. If the observed value for the chi-square test statistic exceeds the critical value, the null hypothesis that the model fits the data is rejected. Otherwise, H0 is retained. Finding that the observed test statistic exceeds the critical value (implying an upper-tail probability that falls below the specified alpha level) thus leads to the statistical decision that the discrepancy between the hypothesized and the actual population covariance matrix is too large to be attributable to sampling error only. Accordingly, a statistically significant chi-square test statistic provides evidence against the validity of the hypothesized model.

When testing statistical hypotheses using this framework, two types of decision errors can occur: The alpha error (or Type-I error) of incorrectly rejecting a true H0 (a correct model) and the beta error (or Type-II error) of incorrectly retaining a false H0 (an incorrect model). Statistical power is defined as the complement of the beta-error probability (\(1 - \beta\)) and thus gives the probability to reject an incorrect model.

If the H0 is false, the chi-square test statistic is no longer central \(\chi^2\)(df) distributed, but can be shown to follow a noncentral \(\chi^2\)(df, \(\lambda\)) distribution with a non-centrality parameter \(\lambda\) and an expected value of df + \(\lambda\) (MacCallum et al., 1996). The non-centrality parameter \(\lambda\) shifts the expected value of the non-central \(\chi^2\)(df, \(\lambda\)) distribution to the right of the corresponding central distribution. Having determined the critical value associated with the desired alpha probability from the central \(\chi^2\)(df) distribution, the beta-error probability can be computed by constructing the corresponding non-central \(\chi^2\)(df, \(\lambda\)) distribution with a certain non-centrality parameter \(\lambda\) and obtaining the area (i.e., the integral) of this distribution to the left of the critical value:

\[ \beta = \int_{0}^{\chi^2_{crit}} f_{\chi^2(df, \lambda)}(x) \, dx\] Correspondingly, statistical power is the area of the non-central \(\chi^2\)(df, \(\lambda\)) distribution to the right of the critical value, i.e., \(1 - \beta\). The general situation is illustrated in the following figure.

Figure 2.1: Central (red) and non-central (blue) chi-square distributions.

The figure depicts a central (solid) \(\chi^2\)(df = 100) and a non-central (dashed) \(\chi^2\)(df = 100, \(\lambda = 40.75\)) distribution. The area of the central \(\chi^2\)(df) distribution to the right of the critical value reflects the alpha error. The black vertical line indicates a critical value of 124, which corresponds to alpha = .05. The area of the \(\chi^2\)(df, \(\lambda\)) distribution to the left of the critical value is the beta-error probability, which takes a value of beta = .20 in this example. Statistical power is defined as 1 - beta, that is, the area under the noncentral \(\chi^2\)(df, \(\lambda\)) distribution to the right of the critical value.
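The quantities in this example can also be reproduced with the chi-square distribution functions in base R; a brief sketch:

df <- 100
lambda <- 40.75
critical <- qchisq(1 - .05, df = df)             # critical chi-square value, about 124.34
beta <- pchisq(critical, df = df, ncp = lambda)  # area to the left of the critical value, about .20
1 - beta                                         # statistical power, about .80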

2.2 Measures of Effect

As evident from the above, power depends on the applied critical value corresponding to a certain alpha error probability and on the distance between the central and the non-central \(\chi^2(df)\) distributions as quantified by the non-centrality parameter \(\lambda\). The non-centrality parameter \(\lambda\), in turn, depends on the number of observations \(N\) and on the degree to which the tested H0 is factually wrong, i.e., on the discrepancy between the H0 and the H1 model (the effect size).

To define the discrepancy between the H0 and the H1 model for power analysis, any non-centrality based measure of effect can be used. For model-free power analyses, semPower understands the measures detailed below. Any model-based power analysis is eventually converted into the population minimum of the fit function as effect size.

F0

\(F_0\) is the population minimum of the chosen fitting function, such as weighted least-squares (WLS) or Maximum Likelihood (ML). The ML fitting function is

\[F_0 = \log|\hat{\Sigma}| - \log|\Sigma| + tr(\Sigma\hat{\Sigma}^{-1}) - p + (\mu - \hat{\mu})' \hat{\Sigma}^{-1} (\mu - \hat{\mu}) \] where \(\Sigma\) is the \(p \times p\) population covariance matrix, \(\hat{\Sigma}\) the \(p \times p\) model-implied covariance matrix, \(p\) the number of observed variables, \(\mu\) the vector of population means, and \(\hat{\mu}\) the model-implied means. If means are not part of the model, the last term becomes zero.

The WLS fitting function is \[F_0 = (\sigma - s)' V (\sigma - s)\] where \(\sigma = vec(\mu, \Sigma)\), \(s = vec(\hat \mu, \hat \Sigma)\), and \(V\) is a weight matrix. In the (full) WLS fitting function, \(V\) is (often) the inverse of \(N\) times the asymptotic covariance matrix of the sample statistics. In the DWLS fitting function, \(V\) is a diagonal matrix containing the inverse of the diagonal elements of \(N\) times the asymptotic covariance matrix of the sample statistics. The ULS fitting function is a special case with \(V = I\).

If the model is correct, \(\hat{\Sigma} = \Sigma\), \(\hat{\mu} = \mu\), and thus \(F_0 = 0\). Otherwise, \(F_0 > 0\) with higher values expressing a larger discrepancy (misfit) of the model to the data.

If fitting a model to some sample data of size \(N\), the estimated minimum of the fit function, \(\hat{F}\), is used to construct an asymptotically \(\chi^2\)-distributed model test statistic, commonly simply called the chi-square model test:

\[\chi^2 = \hat{F}(N-1)\] Note that \(\hat{F}\) is a biased estimate of \(F_0\): If \(F_0 = 0\), the expected value of \(\hat{F}\) is \(df/(N-1)\), so that the expected value of the chi-square statistic equals df, i.e., the model degrees of freedom. For a model with \(q\) free parameters, the df are given by \[ df = \dfrac{p(p+1)}{2} - q \]

Whereas \(F_0\) is the genuine measure of effect in SEM, its main disadvantage is that specific values are difficult to interpret because of its logarithmic scaling and because specific values also depend on features unrelated to model fit, such as the number of manifest variables comprised in the model. For these reasons, various transformations of \(F_0\) exist that are described in the following.

RMSEA

The Root-Mean-Squared Error of Approximation (RMSEA; Browne & Cudeck, 1992; Steiger & Lind, 1980) scales \(F_0\) by the model degrees of freedom:

\[RMSEA = \sqrt{F_0/df}\] so that the RMSEA is bounded below by zero, with lower values indicating better fit. The implied \(F_0\) is: \[F_0 = df \cdot RMSEA^2\] Given that \(F_0\) is scaled by the df, defining an effect in terms of the RMSEA requires specification of the degrees of freedom.

Mc

McDonald’s (1989) measure of non-centrality (Mc) is a transformation of \(F_0\) onto the interval 0-1, with higher values indicating better fit:

\[Mc = e^{-.5F_0}\] so that \[F_0 = -2\ln{Mc}\]

GFI

The Goodness-of-Fit Index (GFI; Jöreskog & Sörbom, 1984; Steiger, 1990) scales \(F_0\) on the interval 0-1, with higher values indicating better fit:

\[GFI = \dfrac{p}{p+2F_0}\]

\[F_0 = \dfrac{p (1-GFI)}{2GFI}\] As the GFI depends on the number of observed variables (\(p\)), this number needs to be provided when defining an effect in terms of the GFI.

AGFI

The Adjusted Goodness-of-Fit Index (AGFI; Jöreskog & Sörbom, 1984; Steiger, 1990) modifies the GFI by including a penalty for the number of free parameters, as measured by the model degrees of freedom:

\[AGFI = 1 - \dfrac{p(p+1)}{2df} \left(1 - \dfrac{p}{p+2F_0} \right)\] \[F_0 = \dfrac{p (1-AGFI) df}{p(p+1) -2df(1-AGFI)}\]

Specifying an effect in terms of the AGFI requires specification of both the number of observed variables (\(p\)) and the model degrees of freedom (\(df\)).
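For illustration, these conversions can be carried out directly in R. The following sketch assumes RMSEA = .05, df = 50, and p = 20 (the same values as used in a later example):

rmsea <- .05
df <- 50
p <- 20
F0 <- df * rmsea^2                                # 0.125
Mc <- exp(-.5 * F0)                               # about .939
GFI <- p / (p + 2 * F0)                           # GFI implied by F0 and p
AGFI <- 1 - (p * (p + 1)) / (2 * df) * (1 - GFI)  # AGFI implied by F0, p, and df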

Measures not based on non-centrality

Fit-indices that are not based on non-centrality have no straightforward relation to \(F_0\) and are thus not well suited for power analyses. However, when the input parameters include covariance matrices, semPower also reports the following measures.

SRMR

The Standardized Root-Mean-Square Residual (SRMR) is a measure of the (root of the) average (squared) difference between the (standardized) model-implied and population covariance matrices. It ranges from 0 to 1, with lower values indicating better fit. Let \(E_0\) be the difference between the model-implied and the population covariance matrix, \(E_0 = \Sigma - \hat{\Sigma}\), \(vech\) denote the vectorization transformation, and \(Q\) be a diagonal matrix of dimension \(.5p(p+1)\) containing the inverse of the product of standard deviations of observed variables \(i\) and \(j\). Then, the SRMR can be defined as

\[SRMR = \sqrt{\dfrac{1}{.5p(p+1)}vech(E_0) \cdot Q \cdot vech(E_0)'}\]

The relation of the residual matrix \(E_0\) to \(F_0\) is complex and depends on the model-implied covariance matrix, so the SRMR is not well suited to define an effect in terms of \(F_0\) (based on ML estimation): \[F_0 = -\ln|I + \hat{\Sigma}^{-.5} E_0 \hat{\Sigma}^{-.5}|\]

CFI

The Comparative Fit Index (CFI) is an incremental index expressing the proportionate reduction of misfit associated with the hypothesized model (\(F_{0H}\)) in relation to the null model (\(F_{0N}\)), defined as a model that constrains all covariances to zero. In the population, the CFI ranges from 0 to 1, with higher values indicating better fit.

\[CFI = \dfrac{F_{0N}-F_{0H}}{F_{0N}}\]

Although it is simple to obtain \(F_0\) from the CFI, this requires knowledge of \(F_{0N}\), which is rather difficult to determine a priori:

\[F_0 = F_{0N} - CFI \cdot F_{0N} \]

3 Model-free power analyses

Performing a power analysis generally requires the specification of the measure and magnitude of the effect that is to be detected, as well as the model degrees of freedom (df). Further arguments are required depending on the type of power analysis.

The functions described in this chapter are “model-free” in the sense that the results depend on the df of a model, but are otherwise agnostic with respect to what a particular model looks like. For instance, the power to reject a model with df = 100 exhibiting an RMSEA \(\geq\) .05 with N = 500 on \(\alpha\) = .05 is always the same, regardless of whether the model is a CFA model, a mediation model, a CLPM, or a multigroup model.

By contrast, model-based power analyses define the effect of interest in terms of particular model parameters, so different functions are required for different types of models. However, the functions performing model-based power analysis are actually only a high-level interface and eventually transform a particular hypothesis concerning the model parameters into an effect size understood by model-free power analyses.

Thus, regardless of whether the effect of interest is directly defined in terms of an effect size understood by semPower or indirectly via constraints on particular model parameters, the actual power analysis is always performed by one of the following functions.

3.1 A priori power analysis: Determine N

The purpose of a priori power analyses is to determine the required sample size to detect an effect with a certain probability on a specified alpha error. In the language of structural equation modeling, an a priori power analysis asks: How many observations do I need to detect the effect of interest (i.e., falsify the model under scrutiny) with a certain probability (statistical power)?

Performing an a priori power analysis requires the specification of:

  • the alpha error (alpha)
  • the desired power (power; or, equivalently, the acceptable beta error, beta)
  • the type of effect (effect.measure)
  • the magnitude of effect (effect)
  • the degrees of freedom of the model (df). See how to obtain the df if you are unsure.

Depending on the chosen effect size measure, it may also be required to define the number of observed variables (p).

Suppose one wants to determine the required sample size to detect misspecifications of a model (involving df = 50 degrees of freedom) with a power of 80% on an alpha error level of .05, where the amount of misfit corresponds to an RMSEA of at least .05. To achieve this, the function semPower.aPriori is called with arguments effect = .05, effect.measure = 'RMSEA', alpha = .05, power = .80, and df = 50. The results are stored in a list called ap.

ap <- semPower.aPriori(effect = .05, effect.measure = 'RMSEA', 
                        alpha = .05, power = .80, df = 50)

Equivalently, instead of calling the semPower.aPriori function, one may also use the generic semPower function with the additional type = 'a-priori' argument:

ap <- semPower(type = 'a-priori', 
               effect = .05, effect.measure = 'RMSEA', 
               alpha = .05, power = .80, df = 50)

Calling the summary method on ap prints the results and a figure of the associated central and non-central \(\chi^2\) distributions.

summary(ap)
## 
##  semPower: A priori power analysis
##                                    
##  F0                        0.125000
##  RMSEA                     0.050000
##  Mc                        0.939413
##                                    
##  df                        50      
##  Required Num Observations 243     
##                                    
##  Critical Chi-Square       67.50480
##  NCP                       30.25000
##  Alpha                     0.050000
##  Beta                      0.199142
##  Power (1 - Beta)          0.800858
##  Implied Alpha/Beta Ratio  0.251077

This shows that N = 243 yields a power of approximately 80% to detect the specified effect.

Note that whereas \(N \geq 243\) is sufficient to yield the desired power, larger samples might be required to enable successful model estimation, for instance, when the model under scrutiny comprises a large number of free parameters. The required sample size determined in an a priori power analysis merely gives the lower bound concerning the desired power, but does not consider aspects such as proper convergence and accuracy of parameter recovery when estimating the model under scrutiny. As a more extreme example, the following yields a required N of 14:

ap <- semPower(type = 'a-priori', 
               effect = .08, effect.measure = 'RMSEA', 
               alpha = .05, power = .80, df = 2000)

N = 14 is entirely correct in terms of the desired power (of 80%) for the stated hypothesis (reject a model exhibiting an RMSEA \(\geq\) .08 on 2000 df), but will generally be far from sufficient to support the estimation of a model involving 2000 df. As another extreme example, the following yields a required N of 78,516:

ap <- semPower(type = 'a-priori', 
               effect = .01, effect.measure = 'RMSEA', 
               alpha = .05, power = .80, df = 1)

N = 78,516 is also entirely correct when the aim is to yield a power of 80% to reject a model exhibiting an RMSEA \(\geq\) .01 on 1 df. However, parameter estimation will likely succeed when the sample is much smaller. Correspondingly, the required sample size to obtain a certain power and the required sample size to obtain trustworthy parameter estimates are different issues. Power is just one aspect in sample size planning.

The output printed above further shows the Critical Chi-Square, the non-centrality parameter (NCP), and the ratio between the error probabilities (Implied Alpha/Beta ratio). In this example, the ratio between alpha and beta is approximately 0.25, showing that committing a beta error is four times as likely as committing an alpha error. This is obviously a consequence of the chosen input parameters, since a power (1 - beta) of .80 implies a beta error of .20, which is four times the chosen alpha error of .05.

semPower also converts the chosen effect into other effect size measures: An RMSEA of .05 (based on df = 50) corresponds to \(F_0\) = 0.125 and Mc = .939. If one is also interested in obtaining the associated GFI and AGFI, the number of variables needs to be provided. When the model involves 20 observed variables, the call above can be modified by including the argument p = 20:

ap <- semPower.aPriori(effect = .05, effect.measure = 'RMSEA', 
                       alpha = .05, power = .80, df = 50, p = 20)

Now the GFI and AGFI values corresponding to RMSEA = .05, assuming df = 50 and p = 20, are also provided.

If one is interested in how power changes for a range of sample sizes, it is useful to request a power plot.
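For example, the following sketch requests such a plot for the scenario above, assuming that semPower.powerPlot.byN (shown later with covariance-matrix input) also accepts the effect and effect.measure arguments:

semPower.powerPlot.byN(effect = .05, effect.measure = 'RMSEA', 
                       alpha = .05, df = 50, 
                       power.min = .05, power.max = .99)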

3.2 Post hoc power analysis: Determine power

The purpose of post hoc power analyses is to determine the actually achieved power to detect a specified effect with given sample size on a certain alpha-error level. In the language of structural equation modeling, a post hoc power analysis asks: With my sample size at hand, how large is the probability (power) to detect the effect of interest?

Performing a post hoc power analysis requires the specification of:

  • the alpha error (alpha)
  • the sample size (N)
  • the type of effect (effect.measure)
  • the magnitude of effect (effect)
  • the degrees of freedom of the model (df). See how to obtain the df if you are unsure.

Depending on the chosen effect-size measure, it may also be required to define the number of observed variables (p).

Suppose one wants to determine the actually achieved power with a sample size of N = 1000 to detect misspecifications of a model (involving df = 100 degrees of freedom) corresponding to RMSEA \(\geq\) .05 on an alpha error level of .05. To achieve this, the function semPower.postHoc is called with arguments effect = .05, effect.measure = 'RMSEA', alpha = .05, N = 1000, and df = 100, and the results are stored in a list called ph.

ph <- semPower.postHoc(effect = .05, effect.measure = 'RMSEA', 
                      alpha = .05, N = 1000, df = 100)

Equivalently, instead of calling semPower.postHoc, one may also use the generic semPower function with the additional type = 'post-hoc' argument:

ph <- semPower(type = 'post-hoc', 
               effect = .05, effect.measure = 'RMSEA',
               alpha = .05, N = 1000, df = 100)
summary(ph)
## 
##  semPower: Post hoc power analysis
##                                       
##  F0                       0.250000    
##  RMSEA                    0.050000    
##  Mc                       0.882497    
##                                       
##  df                       100         
##  Num Observations         1000        
##  NCP                      249.7500    
##                                       
##  Critical Chi-Square      124.3421    
##  Alpha                    0.050000    
##  Beta                     2.903302e-17
##  Power (1 - Beta)         > 0.9999    
##  Implied Alpha/Beta Ratio 1.722177e+15

Calling the summary method on ph provides an output structured identically to the one produced by semPower.aPriori and shows that the power is very high (power > .9999). The associated error probabilities are provided in higher precision. Specifically, the beta error is beta = 2.903302e-17 which translates into \(2.9 \cdot 10^{-17} = 0.000000000000000029\). In practice, one would almost never miss a model with an actual RMSEA \(\geq\) .05 (or F0 \(\geq\) 0.25 or Mc \(\leq\) .882) under these conditions. The implied alpha/beta ratio is 1.722177e+15, showing that committing an alpha error is about two quadrillion (\(10^{15}\)) times as likely as committing a beta error.

If one is interested in how power changes for a range of different magnitudes of effect (say, for RMSEAs ranging from .01 to .15), it is useful to request a power plot.
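Assuming a companion function semPower.powerPlot.byEffect that parallels semPower.powerPlot.byN (shown later), such a request could look as follows:

semPower.powerPlot.byEffect(effect.measure = 'RMSEA', 
                            alpha = .05, N = 1000, df = 100, 
                            effect.min = .01, effect.max = .15)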

3.3 Compromise power analysis: Determine alpha and beta

The purpose of compromise power analyses is to determine alpha and beta (and the associated critical value of the chi-square test statistic) given a specified effect, a certain sample size, and a desired ratio between alpha and beta (Moshagen & Erdfelder, 2016). In the language of structural equation modeling, a compromise power analysis asks: With my sample size at hand, how should the critical value for the chi-square model test be defined to obtain proportionate alpha- and beta-error levels in deciding whether my model is rather aligned with the hypothesis of perfect fit or with the hypothesis of an unacceptable degree of misfit (as defined by the chosen effect)?

Performing a compromise power analysis requires the specification of:

  • desired ratio between alpha and beta (abratio; defaults to 1)
  • the sample size (N)
  • the type of effect (effect.measure)
  • the magnitude of effect (effect)
  • the degrees of freedom of the model (df). See how to obtain the df if you are unsure.

Depending on the chosen effect-size measure, it may also be required to define the number of observed variables (p).

Suppose one wants to determine the critical chi-square value and the associated alpha and beta errors, forcing them to be equal (i.e., a ratio of 1). The model involves 100 df, the sample size is N = 1000, and the H1 model representing an unacceptable degree of misfit is defined as a model associated with an RMSEA of at least .05. Thus, the function semPower.compromise is called with arguments effect = .05, effect.measure = 'RMSEA', abratio = 1, N = 1000, and df = 100, the results are stored in a list called cp, and the summary method is called to obtain formatted results.

cp <- semPower.compromise(effect = .05, effect.measure = 'RMSEA', 
                           abratio = 1, N = 1000, df = 100)

Equivalently, instead of calling semPower.compromise, one may also use the generic semPower function with the additional type = 'compromise' argument:

cp <- semPower(type = 'compromise', 
               effect = .05, effect.measure = 'RMSEA', 
               abratio = 1, N = 1000, df = 100)
summary(cp)
## 
##  semPower: Compromise power analysis
##                                       
##  F0                       0.250000    
##  RMSEA                    0.050000    
##  Mc                       0.882497    
##                                       
##  df                       100         
##  Num Observations         1000        
##  Desired Alpha/Beta Ratio 1.000000    
##                                       
##  Critical Chi-Square      192.8233    
##  Implied Alpha            7.357816e-08
##  Implied Beta             7.357816e-08
##  Implied Power (1 - Beta) > 0.9999    
##  Actual Alpha/Beta Ratio  1.000000

The output is structured identically to the one produced by semPower.aPriori and shows that choosing a Critical Chi-Square of 192.82 is associated with balanced error probabilities, alpha = 7.357816e-08 and beta = 7.357816e-08. As requested, both error probabilities are equally large. In addition, committing either error is highly unlikely: an error probability of 7.357816e-08 translates into roughly \(7.4 \cdot 10^{-8} = 0.000000074\). In practice, one would thus almost never make a wrong decision.

If one rather prefers the error probabilities to differ (for example, because one considers falsely accepting an incorrect model to be 100 times as bad as falsely rejecting a correct model), this can be achieved by changing the abratio argument accordingly. For example, requesting the alpha error to be 100 times as large as the beta error proceeds by setting abratio = 100.

cp <- semPower.compromise(effect = .05, effect.measure = 'RMSEA', 
                           abratio = 100, N = 1000, df = 100)

3.4 Power analysis to detect an overall difference between two models

A common scenario is to test two competing models against each other, where a more restrictive model (involving more df) is compared against a less restrictive model (involving fewer df). When the difference between these models lies just in a single (or a few) particular parameter(s), the effect should be determined in terms of the model parameters. If, however, the difference between the models potentially spreads across multiple parameters (as, say, when comparing a 3-factor with a 5-factor model), one approach to power analysis is to define the models in terms of overall fit.

For example, to obtain the required sample size to yield a power of 80% to discriminate a model with an associated RMSEA of .04 on 44 df from a model with an associated RMSEA of .05 on 41 df, define both the effect and df arguments as vectors (do not define lists!) comprising two elements:

ap <- semPower.aPriori(effect = c(.04, .05), effect.measure = 'RMSEA', 
                       alpha = .05, power = .80, df = c(44, 41))
summary(ap)

which shows that 340 observations are required to discriminate these models. Post hoc and compromise power analyses are performed accordingly.
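For instance, the post hoc analogue of the example above could look as follows (N = 340 is used here purely for illustration):

ph <- semPower.postHoc(effect = c(.04, .05), effect.measure = 'RMSEA', 
                       alpha = .05, N = 340, df = c(44, 41))
summary(ph)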

A similar situation often occurs in tests of measurement invariance, where one wants sufficient power to detect whether certain cross-group or cross-time constraints on the model parameters (such as equal loadings across groups) are violated. Again, the difference between such models can be defined through particular parameters, for instance, by assuming that a single loading differs by .1 across groups. However, it is also reasonable to assume that non-invariance spreads across multiple parameters (say, across all loadings), so that one approach to power analysis is to define the models in terms of overall fit.

The general syntax is the same as above, but now the N argument also needs to be set, which gives the number of observations by group in compromise and post hoc power analysis, and the group weights in a priori power analysis. For example, the following asks for the required sample size to detect a change in the Mc of .01 in a three-group model, where all groups are equal-sized (N = c(1, 1, 1)):

ap <- semPower.aPriori(effect = c(.99, .98), effect.measure = 'Mc', 
                        alpha = .05, power = .80, df = c(69, 57), N = c(1, 1, 1))
summary(ap)

This shows that 858 observations (286 by group) are required for a power of 80%.

3.5 Define the effect through covariance matrices

The previous sections assumed that the magnitude of effect is determined by defining a certain effect-size metric (such as \(F_0\) or RMSEA) and a certain magnitude of effect. Alternatively, the effect can also be determined by specifying the population (\(\Sigma\)) and the model-implied (\(\hat{\Sigma}\)) covariance matrices (and, if means are part of the model, \(\mu\) and \(\hat{\mu}\)) directly. To determine the associated effect in terms of F0, semPower just plugs these matrices into the ML fitting function: \[F_0 = \log|\hat{\Sigma}| - \log|\Sigma| + tr(\Sigma\hat{\Sigma}^{-1}) - p + (\mu - \hat{\mu})' \hat{\Sigma}^{-1} (\mu - \hat{\mu}) \]

Suppose \(\Sigma\) and \(\hat{\Sigma}\) have been defined previously and are referred to by the variables Sigma and SigmaHat. Then, any of the power-analysis functions is called by setting the Sigma and SigmaHat arguments accordingly (and omitting the effect and effect.measure arguments). This could look as follows:

semPower.aPriori(alpha = .05, power = .80, df = 100, 
                 Sigma = Sigma, SigmaHat = SigmaHat)
semPower.postHoc(alpha = .05, N = 1000, df = 100, 
                 Sigma = Sigma, SigmaHat = SigmaHat)
semPower.compromise(abratio = 1, N = 1000, df = 100, 
                    Sigma = Sigma, SigmaHat = SigmaHat)
semPower.powerPlot.byN(alpha = .05, df = 100, power.min = .05, power.max = .99, 
                       Sigma = Sigma, SigmaHat = SigmaHat)
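As a minimal hypothetical illustration, the following defines a population in which two standardized variables correlate at .2, together with the model-implied matrix of a model restricting this correlation to zero (yielding df = 1):

Sigma <- matrix(c(1, .2, 
                  .2, 1), ncol = 2)
SigmaHat <- diag(2)   # implied by a model with free variances and a covariance fixed to zero
ph <- semPower.postHoc(alpha = .05, N = 1000, df = 1, 
                       Sigma = Sigma, SigmaHat = SigmaHat)
summary(ph)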

This feature is particularly useful when used in conjunction with other functions provided by semPower and is indeed internally used by all functions performing model-based power analyses. An example of how to obtain the relevant covariance matrices is provided in a later chapter.

4 Model-based power analyses

A general difficulty in model-free power analysis is that the relation between constraints on a particular model parameter and the resulting effect size is often not clear. For instance, obtaining the required N to detect a cross-lagged effect \(\geq\) .10 in a CLPM with a certain power requires translating the hypothesized cross-lagged effect into a non-centrality-based effect size such as the RMSEA, which is no straightforward endeavour. semPower therefore provides various convenience functions to simplify this process, which are described in this chapter.

The purpose of all convenience functions is to provide high-level interfaces that allow specifying the parameters of a certain model type (such as a CLPM) and a certain effect of interest in terms of the model parameters (such as a cross-lagged effect). The convenience functions then obtain the relevant population and model-implied covariance matrices and perform the desired power analysis based on these matrices. All convenience functions therefore require that the lavaan package (Rosseel, 2012) is installed, and always require the specification of the relevant parameters for the desired power analysis.

More precisely, all convenience functions internally obtain the population covariance matrix (and mean vector, if necessary) via semPower.genSigma, define the H0 model (and optionally the H1 model), call semPower.powerLav to fit the models to the population values, which, in turn, obtains the model-implied covariance matrix (and mean vector), and plugs the population and model-implied matrices into one of the model-free power-analysis functions. Since all of these lower-level functions are also exposed, an even higher level of flexibility can be achieved by calling these functions directly (see later chapters for illustrations).

Given that the functions performing a model-based power analysis typically operate on a factor model, it is generally required to specify the factor model in terms of the number of factors, number of indicators, and loadings. For this reason, the chapter begins with a primer on how to define the factor model.

4.1 Definition of the factor model

Although all convenience functions described in this chapter implement different model structures, these structures typically (but not necessarily) involve latent factors, so the factor model always needs to be defined in terms of the number of factors, the number of indicators for each factor, and the loadings of each indicator on the factors. Indeed, the “factor” model also needs to be specified if the model does not include any factor, but operates on observed variables only (such as a CLPM based on observed scores rather than on latent factors). There are several ways to achieve this, which are documented further below.

The magnitude of the factor loadings and the number of indicators per factor have a very large effect on statistical power (because both quantities increase factor determinacy and thus reduce random noise). For example, power to detect a factor correlation of \(r \geq .2\) with N = 250 is 88% when both factors are measured by 10 indicators each and all loadings are .90, but only 11% when both factors are measured by 3 indicators each and all loadings are .30. Likewise, the dispersion of factor loadings also affects power, although to a (considerably) lesser degree. It is thus crucial to be careful in defining appropriate factor loadings, which should generally be based on previous empirical results.
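These two scenarios can be verified with semPower.powerCFA (described below); a sketch:

# 10 indicators per factor, all loadings .9: power is about 88%
strong <- semPower.powerCFA(type = 'post-hoc', alpha = .05, N = 250,
                            Phi = .2, nullEffect = 'cor = 0',
                            nIndicator = c(10, 10), loadM = .9)
# 3 indicators per factor, all loadings .3: power is about 11%
weak <- semPower.powerCFA(type = 'post-hoc', alpha = .05, N = 250,
                          Phi = .2, nullEffect = 'cor = 0',
                          nIndicator = c(3, 3), loadM = .3)
summary(strong)
summary(weak)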

Any semPower convenience function expects one of the following arguments to define the factor model:

  • Lambda to define the loading matrix.
  • loadings to define a reduced loading matrix that only contains the primary loadings.
  • nIndicator to define the number of indicators per factor, in conjunction with loadM to define either a single loading magnitude for the indicators of each factor or a single loading magnitude applying to all indicators.

The nIndicator and loadM arguments are usually the simplest approach, but do not allow for any dispersion of the loading magnitudes within a factor. This flexibility is offered by the loadings argument, which in most cases should suit all needs. Providing the complete loading matrix via the Lambda argument is only required for more complex loading patterns where a single indicator is supposed to load on more than one factor.

Provide the full loading matrix (Lambda)

Suppose there are two factors measured by 3 and 4 indicators, respectively, and all loadings are equal to .5. The first option to define this factor model is to provide a loading matrix for the Lambda argument:

Lambda <- matrix(c(
  c(.5, 0),
  c(.5, 0),
  c(.5, 0),
  c(0, .5),
  c(0, .5),
  c(0, .5),
  c(0, .5)
  ), ncol = 2, byrow = TRUE)
Provide the primary factor loadings only (loadings)

A second way to define the model above is to provide a list comprising two vectors for the loadings argument:

loadings <- list(
  c(.5, .5, .5),
  c(.5, .5, .5, .5)
  )

loadings must be a list of vectors, where each vector defines the loading of the indicators on the respective factor. One vector is needed for each factor, so in the example above the first vector comprises 3 and the second 4 elements, reflecting the desired loading matrix above. Note that the loadings argument assumes the absence of any secondary loading, so that each loading defined in the vectors refers to a single indicator. As another example, the following are two equivalent ways to define three factors with 3, 4, and 5 indicators loading by the specified values:

Lambda <- matrix(c(
  c(.7, 0, 0),
  c(.6, 0, 0),
  c(.5, 0, 0),
  c(0, .5, 0),
  c(0, .8, 0),
  c(0, .6, 0),
  c(0, .3, 0),
  c(0, 0, .9),
  c(0, 0, .5),
  c(0, 0, .7),
  c(0, 0, .4),
  c(0, 0, .6)
  ), ncol = 3, byrow = TRUE)

loadings <- list(
  c(.7, .6, .5),
  c(.5, .8, .6, .3),
  c(.9, .5, .7, .4, .6)
  )
Provide a single loading magnitude to apply to all indicators of a specific factor (nIndicator and loadM)

A third way to define two factors measured by 3 and 4 indicators, respectively, and all loadings equal to .5, is to provide both the nIndicator and the loadM arguments:

nIndicator <- c(3, 4)
loadM <- .5

nIndicator is a vector providing the number of indicators separately for each factor. loadM can be a single number (as in the example above) to say that all loadings have the same value. Alternatively, loadM can be a vector specifying the loadings separately for each factor, where each indicator of a specific factor takes the defined value as loading. Thus, specifying loadM <- c(.5, .5) would achieve the same result as the code above, whereas loadM <- c(.5, .6) would assign all indicators of the second factor a loading of .6.

Factor models involving observed covariates

To include additional observed variables (that do not act as a factor indicator) in a model, a dummy factor with a single indicator loading by 1 can be defined. For instance, each of the following options defines an observed variable and a factor with 4 indicators loading by .5 each:

nIndicator <- c(1, 4)
loadM <- c(1, .5)

loadings <- list(
  c(1),
  c(.5, .5, .5, .5)
  )

Lambda <- matrix(c(
  c(1, 0),
  c(0, .5),
  c(0, .5),
  c(0, .5),
  c(0, .5)
  ), ncol = 2, byrow = TRUE)
Models including observed variables only

The “factor” model also needs to be defined when the model does not include any factor, but only contains observed variables. Consider a CLPM with two waves based on observed variables only, so there are 4 observed variables in total. Requesting such an observed-variables-only model can be achieved by using any of the following:

  • Lambda = diag(4)
  • loadings = as.list(rep(1, 4))
  • nIndicator = rep(1, 4) and loadM = 1
Ordering of factors

For many (but not all) convenience functions, it is important to define the factors in the expected order. For instance, semPower.powerRegression treats the first factor as criterion (Y) and the remaining factors as predictors (X). Thus, nIndicator <- c(10, 5, 5) says that the criterion is measured by 10 indicators, whereas both predictors are measured by 5 indicators. Using nIndicator <- c(5, 10, 5) instead would imply a criterion measured by 5 indicators. Details on the expected order of factors are provided in each specific convenience function.

Multiple group models

The definition of the factor model in the case of multiple groups proceeds in the same way as described above, with the sole exception that the relevant arguments must be provided as a list, where each component corresponds to a specific group. For instance, below are two equivalent ways to define a two-factor model with 3 and 4 indicators, respectively, for two groups. In the first group, all loadings on the first and second factor are .5 and .6, respectively; in the second group, all loadings on the first and second factor are .4 and .7, respectively.

# using the loadings argument
loadings <- list(
  # loadings for group 1
  list(
    c(.5, .5, .5),      # factor 1
    c(.6, .6, .6, .6)   # factor 2
  ),
  # loadings for group 2
  list(
    c(.4, .4, .4),      # factor 1
    c(.7, .7, .7, .7)   # factor 2
  )
)

# using nIndicator and loadM
nIndicator <- list(
  # nIndicators for group 1
  c(3, 4),
  # nIndicators for group 2
  c(3, 4)
)
loadM <- list(
  # loadings for group 1
  c(.5, .6),
  # loadings for group 2
  c(.4, .7)
)

Because the number of indicators per factor is usually assumed to be identical across groups, the list structure for nIndicator can also be omitted (e.g., nIndicator = c(3, 4)). Similarly, when also assuming that all loadings are equal across groups, the list structure for loadM may be omitted as well, provided that at least one additional argument referring to the factor model (such as Phi, Alpha, or tau) is a list.

4.2 Arguments common to all convenience functions

Whereas all convenience functions expect certain arguments (or values provided as arguments) that are unique to the specific function, a number of arguments are expected by all functions:

  • The type of power analysis requested (type):
    • Use type = 'a-priori' (or 'ap') to request an a priori power analysis and provide the alpha error (e.g., alpha = .05) and the desired beta error (e.g., beta = .20; or equivalently, the desired power, power = .80).
    • Use type = 'post-hoc' (or 'ph') to request a post hoc power analysis and provide the alpha error (e.g., alpha = .05) and the number of observations (e.g., N = 250).
    • Use type = 'compromise' (or 'cp') to request a compromise power analysis and provide the desired ratio between alpha and beta error (e.g., abratio = 1) and the number of observations (e.g., N = 250).
  • fittingFunction: Defines the fitting function used to obtain \(\hat \Sigma\) and \(F_0\). Must be one of 'ML' (the default), 'WLS', 'DWLS', or 'ULS'. This should not be confused with derived robust test statistics (such as 'MLM', 'MLR', 'WLSMV', etc.) that are only relevant in simulated power analysis.
  • Arguments defining the factor model, one of:
    • Lambda to provide a loading matrix.
    • loadings to provide a list of vectors defining the primary factor loadings.
    • nIndicator and loadM to define the number of indicators by factor and a single loading to apply for all indicators of a specific factor.
  • comparison: The relevant comparison model; one of 'saturated' or 'restricted' (the default). See the chapter on the definition of a comparison model for details.
  • nullEffect: Defines the relevant hypothesis depending on the specific convenience function.
  • nullWhich: Defines which parameters are targeted by the hypothesis specified in nullEffect.
  • nullWhichGroups: Defines which groups are targeted when nullEffect refers to cross-group constraints.
  • simulatedPower: Whether to perform a simulated (TRUE) rather than an analytical (FALSE; the default) power analysis. See the chapter on simulated power for details.

Thus, a typical call could look as follows (here taking a CFA model as an example):

powerCFA <- semPower.powerCFA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, beta = .20,
  # set comparison model
  comparison = 'restricted',
  # define fitting function
  fittingFunction = 'ML',
  # arguments (and values) specific to semPower.powerCFA
  Phi = .25,
  nullEffect = 'cor = 0',
  nullWhich = c(1, 2),
  # define factor model
  nIndicator = c(4, 3), loadM = c(.5, .6)
  )

4.3 CFA models

semPower.powerCFA is used to perform power analyses to reject hypotheses arising in a standard CFA model involving several factors, or one or more factor(s) plus one or more additional observed covariates. semPower.powerCFA provides interfaces to perform power analyses concerning the following hypotheses:

  • whether a correlation differs from zero (nullEffect = 'cor = 0').
  • whether two correlations differ from each other (nullEffect = 'corX = corZ').
  • whether a correlation differs across two or more groups (nullEffect = 'corA = corB').
  • whether a loading differs from zero (nullEffect = 'loading = 0').

semPower.powerCFA only addresses hypotheses concerning correlation(s) involving one or more factors. semPower provides other convenience functions for hypotheses arising in latent regression models, models involving a bifactor structure, mediation models, generic path models, and multigroup measurement invariance. For hypotheses regarding global model fit, a model-free power analysis should be performed.

semPower.powerCFA expects the following arguments:

  • Phi: Either a single number defining the correlation between exactly two factors or the factor correlation matrix.
  • nullEffect: Defines the hypothesis of interest; one of 'cor = 0', 'corX = corZ', or 'corA = corB'.
  • nullWhich: Defines which correlation(s) is targeted by the hypothesis defined in nullEffect.
  • nullWhichGroups: Defines which groups are targeted when nullEffect = 'corA = corB'.
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model.

semPower.powerCFA provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a priori power analysis, post hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Detect whether a correlation differs from zero

To perform a power analysis to detect whether a correlation between factors differs from zero, use nullEffect = 'cor = 0' (which is also the default hypothesis and could thus be omitted).

In the simplest case, the model contains exactly two factors, so only the to-be-detected factor correlation needs to be specified (along with the factor model itself). For instance, the following requests the required sample (type = 'a-priori') to detect that a factor correlation of at least .25 (Phi = .25) differs from zero (nullEffect = 'cor = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The first factor is measured by 4 indicators and the second factor by 3 indicators (nIndicator = c(4, 3)). All indicators of the first factor load by .5, whereas all indicators of the second factor load by .6 (loadM = c(.5, .6)). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings.

powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'a-priori', alpha = .05, power = .80,
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(4, 3), loadM = c(.5, .6))
summary(powerCFA)

The results of the power analysis are printed by calling the summary method on powerCFA, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300,
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(4, 3), loadM = c(.5, .6))

Now, summary(powerCFA) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.
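
For instance, a compromise power analysis for the same model could look as follows. This is a sketch assuming that, as in the model-free compromise analysis, the desired ratio of alpha to beta errors is supplied via abratio along with the sample size N:

powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'compromise', abratio = 1, N = 300,
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(4, 3), loadM = c(.5, .6))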

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.

If one is interested in detecting the correlation between a factor and an observed covariate, the only change refers to the definition of the factor model. Below, nIndicator = c(4, 1) and loadM = c(.5, 1) define a factor with 4 indicators loading by .5 each and another dummy factor with a single indicator loading by 1 (which is then simply an observed variable):

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              nIndicator = c(4, 1), loadM = c(.5, 1))

Alternatively, Phi can also be a factor correlation matrix. The following defines three factors correlated according to Phi and uses the nullWhich = c(1, 3) argument to determine the required sample size to detect a correlation of at least .30 between the first and the third factor:

Phi <- matrix(c(
   c(1.00, 0.20, 0.30),
   c(0.20, 1.00, 0.10),
   c(0.30, 0.10, 1.00)
 ), ncol = 3, byrow = TRUE)

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = Phi,
                              nullEffect = 'cor = 0',
                              nullWhich = c(1, 3),
                              nIndicator = c(3, 3, 3), loadM = c(.5, .7, .6))
Detect whether a loading differs from zero

To perform a power analysis to detect whether a loading differs from zero, use nullEffect = 'loading = 0'.

For instance, the following requests the required sample (type = 'a-priori') to detect that the loading of the third indicator on the first factor (of .50; nullWhich = c(3, 1)) differs from zero (nullEffect = 'loading = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80), where both factors are measured by 3 indicators (nIndicator = c(3, 3)), all non-zero loadings on the first and second factor equal .5 and .7 (loadM = c(.5, .7)), respectively, and the correlation between the factors is .30 (Phi = .3; see Definition of the factor model).

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = .3,
                              nullEffect = 'loading = 0',
                              nullWhich = c(3, 1),
                              nIndicator = c(3, 3), loadM = c(.5, .7))

In the example above, the indicator associated with the loading targeted by the null hypothesis only exhibited a single non-zero loading, because the population model was defined such that each indicator loads on a single factor. If the loading of interest should refer to a secondary loading (cross-loading), the factor model needs to be defined by providing the Lambda matrix. For instance, the following defines two factors with loading matrix Lambda and requests the required sample to detect that the loading of the fourth indicator on the second factor (nullWhich = c(4, 2)) differs from zero.

Lambda <- matrix(c(
  c(.8, 0),
  c(.7, 0),
  c(.6, 0),
  c(.5, .1),  # 4th indicator loads on both factors
  c(0, .5),
  c(0, .6),
  c(0, .7),
  c(0, .8)
), ncol = 2, byrow = TRUE)
powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = .3,
                              nullEffect = 'loading = 0',
                              nullWhich = c(4, 2),
                              Lambda = Lambda)
Detect whether two correlations differ from each other

To perform a power analysis to detect whether two correlations differ from each other, use nullEffect = 'corX = corZ'.

For instance, the following requests the required sample (type = 'a-priori') to detect that the correlation between the first and second factor (of .20) differs from the correlation between the first and the third factor (of .30; nullEffect = 'corX = corZ') on alpha = .05 (alpha = .05) with a power of 80% (power = .80), where all factors are measured by 3 indicators (nIndicator = c(3, 3, 3)) and all non-zero loadings on the first, second, and third factor are equal to .5, .7, and .6 (loadM = c(.5, .7, .6)), respectively (see Definition of the factor model).

Phi <- matrix(c(
   c(1.00, 0.20, 0.30),
   c(0.20, 1.00, 0.10),
   c(0.30, 0.10, 1.00)
 ), ncol = 3, byrow = TRUE)

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = Phi,
                              nullEffect = 'corX = corZ',
                              nullWhich = list(c(1, 2), c(1, 3)),
                              nIndicator = c(3, 3, 3), loadM = c(.5, .7, .6))

Note that nullWhich is now a list comprising two vectors, jointly defining which correlations to set to equality. nullWhich = list(c(1, 2), c(1, 3)) says that the correlation between the first and the second factor (c(1, 2)) and the correlation between the first and the third (c(1, 3)) factor are restricted to be equal.

nullWhich can also comprise more than two elements to test for the equality of more than two correlations. For instance, using nullWhich = list(c(1, 2), c(1, 3), c(2, 3)) in the scenario above constrains all factor correlations to be equal.
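
For instance, a sketch of the corresponding call, reusing Phi and the measurement model from above:

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = Phi,
                              nullEffect = 'corX = corZ',
                              # constrain all three factor correlations to equality
                              nullWhich = list(c(1, 2), c(1, 3), c(2, 3)),
                              nIndicator = c(3, 3, 3), loadM = c(.5, .7, .6))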

As before, it is also possible to include observed covariates instead of latent factors by simply defining a dummy factor with a single indicator loading by 1. For example, to replace the first factor in the example above by an observed variable (thus asking for the required N to detect that two factors correlate differently with an observed variable), the factor model is changed by altering nIndicator and loadM:

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = Phi,
                              nullEffect = 'corX = corZ',
                              nullWhich = list(c(1, 2), c(1, 3)),
                              nIndicator = c(1, 3, 3), loadM = c(1, .7, .6))
Detect whether a correlation differs across two or more groups

To perform a power analysis to detect whether a correlation differs across two or more groups, use nullEffect = 'corA = corB'.

For instance, the following requests the required sample (type = 'a-priori') to detect that the correlation between two factors in group 1 (of .20) differs from the one in group 2 (of .40; nullEffect = 'corA = corB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The measurement model is identical in both groups: Both factors are measured by 5 indicators each (nIndicator = c(5, 5)), and all non-zero loadings on the first and second factor are equal to .7 and .5 (loadM = c(.7, .5)), respectively, in both groups (see Definition of the factor model). Phi = list(.2, .4) is now a list comprising two elements, the first defining the correlation between the factors in the first group to be .2, the second defining the correlation between the factors in the second group to be .4. In addition, N must also be a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups.

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                              nullEffect = 'corA = corB',
                              Phi = list(.2, .4), 
                              loadM = c(.7, .5), 
                              nIndicator = c(5, 5))

If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.
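
For instance, the following sketch repeats the a priori analysis from above, but assumes that the first group is twice as large as the second group:

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80, N = list(2, 1),
                              nullEffect = 'corA = corB',
                              Phi = list(.2, .4),
                              loadM = c(.7, .5),
                              nIndicator = c(5, 5))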

Phi can also be a list of factor correlation matrices (instead of a list of single numbers). For instance, the following defines different factor correlation matrices for two groups (Phi1 and Phi2) with 300 and 400 observations (N = list(300, 400)), respectively, and requests the achieved power (type = 'post-hoc') on alpha = .05 (alpha = .05) to detect that the correlation between factors 1 and 3 (nullWhich = c(1, 3)) differs across groups.

Phi1 <- matrix(c(
    c(1.00, 0.20, 0.50),
    c(0.20, 1.00, 0.10),
    c(0.50, 0.10, 1.00)
 ), ncol = 3, byrow = TRUE)
Phi2 <- matrix(c(
    c(1.00, 0.20, 0.30),
    c(0.20, 1.00, 0.10),
    c(0.30, 0.10, 1.00)
 ), ncol = 3, byrow = TRUE)

powerCFA <- semPower.powerCFA(type = 'post-hoc', alpha = .05, N = list(300, 400),
                              Phi = list(Phi1, Phi2),
                              nullEffect = 'corA = corB',
                              nullWhich = c(1, 3),
                              nIndicator = c(3, 3, 3), loadM = c(.5, .5, .5))

If there are more than two groups, the targeted correlation is held equal across all groups by default. If the correlation should only be constrained to equality in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, the following defines three equally sized groups with a distinct correlation between the two factors, but only asks for the required sample to detect that the correlation in group 1 (of .2) differs from the one in group 3 (of .3; nullWhichGroups = c(1, 3)).

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
                              nullEffect = 'corA = corB',
                              Phi = list(.2, .4, .3), 
                              nullWhichGroups = c(1, 3),
                              loadM = c(.7, .5), 
                              nIndicator = c(5, 5))
Detect whether a loading differs from zero

To perform a power analysis to detect whether a loading differs from zero, use nullEffect = 'loading = 0'.

For this type of hypothesis, it is often useful to define the loadings using the full loading matrix (Lambda), as the interest often lies in the power to detect secondary loadings. The nullWhich argument then is a vector defining which element in Lambda is hypothesized to equal zero.

For instance, the following requests the required sample (type = 'a-priori') to detect that the loading of indicator 4 on factor 1 (of .1; nullWhich = c(4, 1)) differs from zero (nullEffect = 'loading = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). A two factor model is defined with the loadings corresponding to the ones defined in Lambda and a factor correlation of .2 (Phi = .2).

Lambda <- matrix(c(
   c(0.80, 0.00),
   c(0.70, 0.00),
   c(0.60, 0.00),
   c(0.10, 0.50),
   c(0.00, 0.50),
   c(0.00, 0.80)
 ), ncol = 2, byrow = TRUE)

powerCFA <- semPower.powerCFA(type = 'a-priori', alpha = .05, power = .80,
                              Phi = .2,
                              nullEffect = 'loading = 0',
                              nullWhich = c(4, 1),
                              Lambda = Lambda)

4.4 Models involving a bifactor structure

semPower.powerBifactor is used to perform power analyses to reject correlational hypotheses arising in a model involving one general factor in a bifactor structure (which is referred to as the “bifactor”) and at least one additional variable, which can also be a bifactor, a standard factor, or an observed covariate. semPower.powerBifactor provides interfaces to perform power analyses concerning the following hypotheses:

  • whether a correlation differs from zero (nullEffect = 'cor = 0').
  • whether two correlations differ from each other (nullEffect = 'corX = corZ').
  • whether a correlation differs across two or more groups (nullEffect = 'corA = corB').

semPower.powerBifactor only addresses hypotheses concerning correlation(s) involving one or more bifactors. semPower provides other convenience functions for hypotheses arising in standard CFA models. For hypotheses regarding global model fit, a model-free power analysis should be performed.

semPower.powerBifactor expects the following arguments:

  • bfLoadings: A single vector or a list containing one or more vectors giving the loadings on each bifactor.
  • bfWhichFactors: A list containing one or more vectors defining which (specific) factors are part of the bifactor structure.
  • Phi: Either a single number defining the correlation between exactly two factors or the factor correlation matrix. Must only contain the bifactor(s) and covariate(s), but not any specific factor. Phi assumes the following order \((BF_1, \cdots, BF_k, COV_1, \cdots, COV_k)\).
  • nullEffect: Defines the hypothesis of interest; one of 'cor = 0', 'corX = corZ', or 'corA = corB'.
  • nullWhich: Defines which correlation(s) is targeted by the hypothesis defined in nullEffect.
  • nullWhichGroups: Defines which groups are targeted when nullEffect = 'corA = corB'.
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, which may only refer to specific factors (that are part of the bifactor) and additional covariate(s). The loadings on the bifactor are defined in bfLoadings.

semPower.powerBifactor provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Definition of the bifactor structure

A model involving a bifactor structure implies that each indicator that is part of the structure loads on both a general factor (the bifactor) and on a specific factor. In orthogonal bifactor models, the general and the specific factors are independent. A bifactor is defined by specifying the number of indicators and the respective loadings on the bifactor in the argument bfLoadings. The specific factors comprised in the bifactor structure are defined in bfWhichFactors, and the number of indicators and loadings on the specific factors in either Lambda, loadings, or nIndicator and loadM (see the chapter on specifying a factor model). The latter arguments also include the number of indicators and loadings defining the covariate(s).

Note that identifying and estimating models involving a bifactor can be rather tricky, in particular when there is more than a single bifactor structure. Be prepared to receive warnings about non-convergence and adapt your model accordingly. Indeed, a common warning is lavaan complaining about a non-positive definite covariance matrix between the latent variables, which can be safely ignored. Concerning identification, it is generally recommended to define at least one indicator that only loads on the bifactor, but not on any of the specific factors.

For example, the following defines a single bifactor with 10 indicators loading by .5 each (bfLoadings <- rep(.5, 10)) on the bifactor. The bifactor structure involves 3 specific factors, namely the factors 1, 2, and 3 (bfWhichFactors <- c(1, 2, 3)) defined in the argument defining the measurement model (loadings). The loadings on the first specific factor are .30, .20, and .10 (c(.30, .20, .10)), the loadings on the second .05, .10, and .15 (c(.05, .10, .15)), and the loadings on the third are .20, .05, and .15 (c(.20, .05, .15)). Furthermore, the fourth factor defined in the loadings argument acts as an additional covariate (which is not part of the bifactor structure). The covariate is a factor with 4 indicators, loading by .70, .75, .80, and .85 (c(.70, .75, .80, .85)).

bfLoadings <- rep(.5, 10)
bfWhichFactors <- c(1, 2, 3)
loadings <- list(
  c(.30, .20, .10),       # specific factor 1
  c(.05, .10, .15),       # specific factor 2
  c(.20, .05, .15),       # specific factor 3
  c(.70, .75, .80, .85)   # covariate
)

The above implies the following loading matrix: \[\Lambda = \begin{pmatrix} BF & S_1 & S_2 & S_3 & COV\\ .50 & 0 & 0 & 0 & 0\\ .50 & .30 & 0 & 0 & 0\\ .50 & .20 & 0 & 0 & 0\\ .50 & .10 & 0 & 0 & 0\\ .50 & 0 & .05 & 0 & 0\\ .50 & 0 & .10 & 0 & 0\\ .50 & 0 & .15 & 0 & 0\\ .50 & 0 & 0 & .20 & 0\\ .50 & 0 & 0 & .05 & 0\\ .50 & 0 & 0 & .15 & 0\\ 0 & 0 & 0 & 0 & .70\\ 0 & 0 & 0 & 0 & .75\\ 0 & 0 & 0 & 0 & .80\\ 0 & 0 & 0 & 0 & .85 \end{pmatrix}\]

Note that the bifactor comprises 10 indicators, whereas the specific factors jointly comprise only 9 indicators, so that one indicator solely loads on the bifactor. This is in fact desired and recommended to ensure that the model is identified. Any indicator that is exclusive for the bifactor is not part of the loadings argument, so that, in the example above, the first indicator loads on the bifactor only and the second indicator is the first indicator of the first specific factor.

If a certain indicator should only load on the specific factor, but not on the bifactor, define the respective loading on the bifactor to equal zero. In the following example, the first indicator of the specific factor (and thus the second indicator of the bifactor) only loads on the specific factor:

bfLoadings <- c(.5, 0, .5, .5, .5, .5, .5, .5, .5, .5)
bfWhichFactors <- c(1, 2, 3)
loadings <- list(
  c(.30, .20, .10),       # specific factor 1
  c(.05, .10, .15),       # specific factor 2
  c(.20, .05, .15),       # specific factor 3
  c(.70, .75, .80, .85)   # covariate
)

The correlations between the bifactor(s) and the covariate(s) are defined in Phi. However, given that orthogonal bifactor models require all correlations between the bifactor and the specific factors to be zero, Phi must omit the specific factors and only include the bifactor(s) and the covariate(s) assuming the following order: \((BF_1, \cdots, BF_k, COV_1, \cdots, COV_k)\). For instance, to define the correlation between the bifactor and the covariate in the example above to be .30, Phi becomes a matrix with 2 rows and 2 columns:

Phi <- matrix(c(
  c(1, .3),   # bifactor
  c(.3, 1)    # covariate
), ncol = 2, byrow = TRUE)

The same approach is used for more complex structures involving more than one bifactor and more than one covariate. If more than one bifactor is to be defined, bfLoadings and bfWhichFactors become lists, where each component refers to a particular bifactor. For instance, the following defines two bifactors. The first involves 10 indicators that all load by .6 (rep(.6, 10)), the second involves 11 indicators that all load by .6 (rep(.6, 11)). Both bifactors comprise 3 specific factors (bfWhichFactors), the first factors 1-3 (c(1, 2, 3)) and the second factors 4-6 (c(4, 5, 6)). The loadings argument shows that the specific factors 1-3 (that are part of the first bifactor) all involve three indicators, whereas the specific factors 4-6 (that are part of the second bifactor) involve 4, 3, and 3 indicators, respectively. Further, loadings also defines a factor that is not part of any of the bifactors and thus acts as a covariate. The covariate is a factor with 4 indicators that all load by .6 (c(.6, .6, .6, .6)). As there are now two bifactors and one additional covariate, Phi, defining their intercorrelations, must be a \(3 \times 3\) matrix, where the columns 1-2 refer to the bifactors, and the third column to the covariate. The correlations in Phi imply that the correlation between the bifactors is .3. The covariate correlates at .5 and .1 with the first and second bifactor, respectively.

bfLoadings <- list(rep(.6, 10),
                   rep(.6, 11))
bfWhichFactors <- list(c(1, 2, 3),
                       c(4, 5, 6))
loadings <- list(
  # specific factors for bf1
  c(.2, .2, .2),
  c(.15, .15, .15),
  c(.25, .25, .25),
  # specific factors bf2
  c(.10, .15, .15, .20),
  c(.15, .10, .20),
  c(.20, .15, .25),
  # covariate
  c(.6, .6, .6, .6)
)

Phi <- matrix(c(
  c(1.0, 0.3, 0.5),  # bifactor 1
  c(0.3, 1.0, 0.1),  # bifactor 2
  c(0.5, 0.1, 1.0)   # covariate 1
), ncol = 3, byrow = TRUE)
Detect whether a correlation differs from zero

To perform a power analysis to detect whether a correlation between a bifactor and another factor or observed covariate differs from zero, use nullEffect = 'cor = 0' (which is also the default hypothesis and could thus be omitted).

For instance, the following requests the required sample (type = 'a-priori') to detect that a correlation between the bifactor and the covariate of at least .30 (Phi) differs from zero (nullEffect = 'cor = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). nullWhich = c(1, 2) defines which correlation between the factors should be restricted to zero. In this example, there is only a single correlation in Phi, so nullWhich must be c(1, 2). The arguments related to the definition of the bifactor structure were described in detail in the previous section. Here, a bifactor spanning 10 indicators, three specific factors with 3 indicators each, and a covariate not part of the bifactor structure with 4 indicators are defined.

bfLoadings <- rep(.5, 10)
bfWhichFactors <- c(1, 2, 3)
loadings <- list(
  c(.30, .20, .10),       # specific factor 1
  c(.05, .10, .15),       # specific factor 2
  c(.20, .05, .15),       # specific factor 3
  c(.70, .75, .80, .85)   # covariate
)
Phi <- matrix(c(
  c(1, .3),   # bifactor
  c(.3, 1)    # covariate
), ncol = 2, byrow = TRUE)
powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'a-priori', alpha = .05, power = .80,
                                  # define hypothesis
                                  nullEffect = 'cor = 0',
                                  nullWhich = c(1, 2),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = Phi,
                                  loadings = loadings
                                  )
summary(powerBF)

The results of the power analysis are printed by calling the summary method on powerBF, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'post-hoc', alpha = .05, N = 350,
                                  # define hypothesis
                                  nullEffect = 'cor = 0',
                                  nullWhich = c(1, 2),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = Phi,
                                  loadings = loadings
                                  )

Now, summary(powerBF) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.

If one is interested in detecting the correlation between a factor and an observed covariate, the only change refers to the definition of the factor model. Below, the fourth factor defined in loadings comprises a single indicator loading by 1 (c(1)), which is then simply an observed variable:

bfLoadings <- rep(.5, 10)
bfWhichFactors <- c(1, 2, 3)
loadings <- list(
  c(.30, .20, .10),       # specific factor 1
  c(.05, .10, .15),       # specific factor 2
  c(.20, .05, .15),       # specific factor 3
  c(1)                    # observed covariate
)
Phi <- matrix(c(
  c(1, .2),   # bifactor
  c(.2, 1)    # covariate
), ncol = 2, byrow = TRUE)
powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'a-priori', alpha = .05, power = .80,
                                  # define hypothesis
                                  nullEffect = 'cor = 0',
                                  nullWhich = c(1, 2),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = Phi,
                                  loadings = loadings
                                  )

Similarly, the model can also define two bifactors. In the following, two bifactors comprising 10 and 11 indicators (bfLoadings), respectively, and spanning the specific factors 1-3 and 4-6, respectively, (bfWhichFactors) are defined, and the required sample size is requested to detect that a correlation between the bifactors of at least .3 differs from zero with a power of 80% on alpha = .05:

bfLoadings <- list(rep(.6, 10),
                   rep(.6, 11))
bfWhichFactors <- list(c(1, 2, 3),
                       c(4, 5, 6))
loadings <- list(
  # specific factors for bf1
  c(.2, .2, .2),
  c(.15, .15, .15),
  c(.25, .25, .25),
  # specific factors bf2
  c(.10, .15, .15, .20),
  c(.15, .10, .20),
  c(.20, .15, .25)
)
Phi <- matrix(c(
  c(1.0, 0.3),  # bifactor 1
  c(0.3, 1.0)   # bifactor 2
), ncol = 2, byrow = TRUE)
powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'a-priori', alpha = .05, power = .80,
                                  # define hypothesis
                                  nullEffect = 'cor = 0',
                                  nullWhich = c(1, 2),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = Phi,
                                  loadings = loadings
                                  )

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.

Detect whether two correlations differ from each other

To perform a power analysis to detect whether two correlations differ from each other, use nullEffect = 'corX = corZ'.

For instance, the following requests the required sample (type = 'a-priori') to detect that a correlation between the bifactor and the first covariate (of .30) differs from the correlation between the bifactor and the second covariate (of .50; nullEffect = 'corX = corZ') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The arguments related to the definition of the bifactor structure were described in detail in one of the previous sections. Here, a bifactor spanning 10 indicators, three specific factors with 3 indicators each, and two covariates that are not part of the bifactor structure with 4 and 5 indicators, respectively, are defined.

bfLoadings <- rep(.5, 10)
bfWhichFactors <- c(1, 2, 3)
loadings <- list(
  c(.30, .20, .10),             # specific factor 1
  c(.05, .10, .15),             # specific factor 2
  c(.20, .05, .15),             # specific factor 3
  c(.70, .75, .80, .85),        # covariate 1
  c(.80, .50, .40, .55, .60)    # covariate 2
)
Phi <- matrix(c(
  c(1, .3, .5),   # bifactor
  c(.3, 1, .2),   # covariate 1
  c(.5, .2, 1)    # covariate 2
), ncol = 3, byrow = TRUE)
powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'a-priori', alpha = .05, power = .80,
                                  # define hypothesis
                                  nullEffect = 'corX = corZ',
                                  nullWhich = list(c(1, 2), c(1, 3)),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = Phi,
                                  loadings = loadings
                                  )

Note that nullWhich is now a list comprising two vectors, jointly defining which correlations defined in Phi to set to equality. nullWhich = list(c(1, 2), c(1, 3)) says that the correlation between the first and the second factor (c(1, 2)) and the correlation between the first and the third factor (c(1, 3)) are restricted to be equal. Since the first factor in Phi refers to the bifactor, this means that the correlations of the bifactor with the two covariates are targeted by the null hypothesis.

Detect whether a correlation differs across two or more groups

To perform a power analysis to detect whether a correlation differs across two or more groups, use nullEffect = 'corA = corB'.

For instance, the following requests the required sample (type = 'a-priori') to detect that the correlation between the bifactor and the covariate in group 1 (of .30) differs from the one in group 2 (of .10; nullEffect = 'corA = corB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The measurement model is identical in both groups and implies a bifactor measured by 10 indicators (bfLoadings) spanning three specific factors (bfWhichFactors) measured by 3 indicators each, and an additional covariate (that is not part of the bifactor structure) measured by 4 indicators (loadings). See above for details. Phi is now a list comprising two elements, the first defining the correlation between the bifactor and the covariate in the first group (Phi1), the second defining the corresponding correlation in the second group (Phi2). In addition, N must also be a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

bfLoadings <- rep(.6, 10)
bfWhichFactors <- c(1, 2, 3)
loadings <- list(
  # specific factors
  c(.20, .20, .20),
  c(.15, .15, .15),
  c(.25, .25, .25),
  # covariate
  c(.70, .60, .80, .70)
)
# correlations in group 1
Phi1 <- matrix(c(
  c(1.0, 0.3),  # bifactor
  c(0.3, 1.0)   # covariate
), ncol = 2, byrow = TRUE)
# correlations in group 2
Phi2 <- matrix(c(
  c(1.0, 0.1),  # bifactor
  c(0.1, 1.0)   # covariate
), ncol = 2, byrow = TRUE)
powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                  # define hypothesis
                                  nullEffect = 'corA = corB',
                                  nullWhich = c(1, 2),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = list(Phi1, Phi2),
                                  loadings = loadings
                                  )

If there are more than two groups, the targeted correlation is held equal across all groups by default. If the correlation should only be constrained to equality in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, the following expands the previous example by defining another Phi for a third group and then asks for the required sample size to detect that the correlation in group 1 differs from the one in group 3 (nullWhichGroups = c(1, 3)).

# correlations in group 3
Phi3 <- matrix(c(
  c(1.0, 0.5),  # bifactor
  c(0.5, 1.0)   # covariate
), ncol = 2, byrow = TRUE)
powerBF <- semPower.powerBifactor(
                                  # define type of power analysis
                                  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
                                  # define hypothesis
                                  nullEffect = 'corA = corB',
                                  nullWhich = c(1, 2),
                                  nullWhichGroups = c(1, 3),
                                  # define factor model
                                  bfLoadings = bfLoadings,
                                  bfWhichFactors = bfWhichFactors,
                                  Phi = list(Phi1, Phi2, Phi3),
                                  loadings = loadings
                                  )

4.5 Latent regression models

semPower.powerRegression is used to perform power analyses for SEM models involving a simple linear regression relation of the form \(\hat{Y} = \beta_1 \cdot X_1 + ... + \beta_k \cdot X_k\), where \(Y\) and \(X_i\) can be factors or observed variables. semPower.powerRegression provides interfaces to perform power analyses concerning the following hypotheses:

  • whether a slope (\(\beta_i\)) differs from zero (nullEffect = 'slope = 0').
  • whether two slopes (\(\beta_i\), \(\beta_j\)) differ from each other (nullEffect = 'slopeX = slopeZ').
  • whether a slope (\(\beta_{i,m}\), \(\beta_{i,n}\)) differs across two or more groups (nullEffect = 'slopeA = slopeB').

semPower.powerRegression only addresses hypotheses concerning slope(s) in a regression. semPower provides other convenience functions for hypotheses arising in mediation models, generic path models, and cross-lagged panel models. For hypotheses regarding global model fit, a model-free power analysis should be performed.

semPower.powerRegression expects the following arguments:

  • slopes: Vector of slopes (or a single number for a single slope) for the predictors \(X_1\) to \(X_k\) in the prediction of \(Y\).
  • corXX: Correlation(s) (or covariances) between the predictors. Either a single number defining the correlation between exactly two predictors, or a \(k \times k\) correlation matrix, or NULL for uncorrelated predictors (the default).
  • nullEffect: Defines the hypothesis of interest; one of 'slope = 0', 'slopeX = slopeZ', or 'slopeA = slopeB'.
  • nullWhich: Defines which slope(s) is targeted by the hypothesis defined in nullEffect.
  • nullWhichGroups: Defines which groups are targeted when nullEffect = 'slopeA = slopeB'.
  • standardized: Whether the arguments provided to slopes and corXX are standardized (TRUE, the default) or unstandardized (FALSE).
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, where the first factor represents the criterion \(Y\) and the remaining factors the predictors \(X_1\) to \(X_k\).

semPower.powerRegression provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Detect whether a slope differs from zero

To perform a power analysis to detect whether a slope differs from zero, use nullEffect = 'slope = 0' (which is also the default and could thus be omitted).

For instance, the following sets up three factors measured by 3, 5, and 4 indicators (nIndicator = c(3, 5, 4)). All indicators of the first factor load by .5, all indicators of the second factor by .6, and all of the third factor by .7 (loadM = c(.5, .6, .7)). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. semPower.powerRegression treats the first defined factor as the criterion (\(Y\)) and the remaining factors as predictors (here: \(X_1\) and \(X_2\)), so, in the present example, the criterion is measured by 3 indicators loading by .5 each. The slopes of \(X_1\) and \(X_2\) in the prediction of \(Y\) are defined to be .2 and .3, respectively (slopes = c(.2, .3)) and the correlation between the predictors is defined to be .4 (corXX = .4). Finally, the required sample (type = 'a-priori') is requested to detect that the first slope (nullWhich = 1) differs from zero (nullEffect = 'slope = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80).

powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     slopes = c(.2, .3), 
                                     corXX = .4, 
                                     nullEffect = 'slope = 0',
                                     nullWhich = 1,
                                     # define measurement model
                                     nIndicator = c(3, 5, 4), 
                                     loadM = c(.5, .6, .7))
summary(powerReg)

The results of the power analysis are printed by calling the summary method on powerReg, which, in this example, provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'post-hoc', alpha = .05, N = 300,
                                     # define hypothesis
                                     slopes = c(.2, .3), 
                                     corXX = .4, 
                                     nullEffect = 'slope = 0',
                                     nullWhich = 1,
                                     # define measurement model
                                     nIndicator = c(3, 5, 4), 
                                     loadM = c(.5, .6, .7))

Now, summary(powerReg) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.

Also, all slopes were treated as completely standardized parameters (by omitting the standardized argument, which defaults to TRUE). This implies that semPower defines the residual variances (in \(\Psi\)) such that all variances are 1. If the slopes should rather be treated as unstandardized, set standardized = FALSE, which implies an identity matrix for \(\Psi\).
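
For instance, a sketch of the first example treating the slopes (and corXX) as unstandardized parameters:

powerReg <- semPower.powerRegression(type = 'a-priori', alpha = .05, power = .80,
                                     slopes = c(.2, .3),
                                     corXX = .4,
                                     # treat slopes and corXX as unstandardized
                                     standardized = FALSE,
                                     nullEffect = 'slope = 0',
                                     nullWhich = 1,
                                     nIndicator = c(3, 5, 4),
                                     loadM = c(.5, .6, .7))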

If there are more than two predictors, the predictor intercorrelation matrix must be provided as argument to corXX. For instance, consider the regression \(\hat{Y} = .3 \cdot X_1 + .2 \cdot X_2 + .1 \cdot X_3\), where all factors are measured by 3 indicators, all loadings equal .5, the interest lies in detecting that the slope for \(X_2\) differs from zero (nullWhich = 2), and the predictors are correlated according to corXX:

corXX <- matrix(c(
 #   X1    X2    X3
 c(1.00, 0.20, 0.30),  # X1
 c(0.20, 1.00, 0.10),  # X2
 c(0.30, 0.10, 1.00)   # X3
), ncol = 3, byrow = TRUE)
powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     slopes = c(.3, .2, .1), 
                                     corXX = corXX, 
                                     nullEffect = 'slope = 0',
                                     nullWhich = 2,
                                     # define measurement model
                                     nIndicator = c(3, 3, 3, 3), 
                                     loadM = c(.5, .5, .5, .5))

If corXX is omitted or NULL, all predictors are assumed to be uncorrelated.

If the criterion or (one of) the predictors is an observed variable (instead of a factor), the only change refers to the definition of the factor model. For instance, the following defines the first “factor”, which is always treated as the criterion \(Y\), as a dummy factor with a single indicator (nIndicator = c(1, 5, 4)) and a loading of 1 (loadM = c(1, .6, .7)), so it becomes equivalent to an observed variable:

powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     slopes = c(.2, .3), 
                                     corXX = .4, 
                                     nullWhich = 1,
                                     # define measurement model
                                     nIndicator = c(1, 5, 4), 
                                     loadM = c(1, .6, .7))

Similarly, using nIndicator = c(6, 1, 4) and loadM = c(.5, 1, .7) would make the first predictor (\(X_1\)) an observed variable, and a regression involving observed variables only could be defined using nIndicator = c(1, 1, 1) and loadM = c(1, 1, 1) or, more simply, by just providing Lambda = diag(3) instead of nIndicator and loadM. A MIMIC model with a factor measured by 5 indicators all loading by .7 would thus correspond to nIndicator = c(5, 1, 1) and loadM = c(.7, 1, 1).
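
For instance, a sketch of the MIMIC variant just described, carrying over the slopes and the predictor correlation from the first example:

powerReg <- semPower.powerRegression(type = 'a-priori', alpha = .05, power = .80,
                                     slopes = c(.2, .3),
                                     corXX = .4,
                                     nullEffect = 'slope = 0',
                                     nullWhich = 1,
                                     # latent criterion with 5 indicators, two observed predictors
                                     nIndicator = c(5, 1, 1),
                                     loadM = c(.7, 1, 1))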

Detect whether two slopes differ from each other

To perform a power analysis to detect whether two slopes differ from each other, use nullEffect = 'slopeX = slopeZ'.

For instance, the following sets up three factors measured by 3, 5, and 4 indicators (nIndicator = c(3, 5, 4)). All indicators of the first factor load by .5, all indicators of the second factor by .6, and all of the third factor by .7 (loadM = c(.5, .6, .7); see definition of a factor model). Recall that the first factor is treated as criterion (\(Y\)). Having defined the criterion and the predictors, the slopes of \(X_1\) and \(X_2\) in the prediction of \(Y\) are defined to be .1 and .4, respectively (slopes = c(.1, .4)) and the correlation between the predictors is defined to be .25 (corXX = .25). Finally, the required sample (type = 'a-priori', nullWhich = c(1, 2)) is requested to detect that the first slope differs from the second slope (nullEffect = 'slopeX = slopeZ') on alpha = .05 (alpha = .05) with a power of 80% (power = .80).

powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     slopes = c(.1, .4), 
                                     corXX = .25, 
                                     nullEffect = 'slopeX = slopeZ',
                                     nullWhich = c(1, 2),
                                     # define measurement model
                                     nIndicator = c(3, 5, 4), 
                                     loadM = c(.5, .6, .7))

Note that nullWhich is now a vector defining which slopes should be set to equality. nullWhich = c(1, 2) says that the first and the second slope shall be equal.

nullWhich can also comprise more than two elements to test for the equality of more than two slopes. For instance, when there are 4 predictors, using nullWhich = c(1, 2, 4) would constrain the first, second, and fourth (but not the third) slope to equality.
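
For instance, a sketch with four predictors (the slopes and loadings below are illustrative values, not taken from the preceding examples):

powerReg <- semPower.powerRegression(type = 'a-priori', alpha = .05, power = .80,
                                     slopes = c(.2, .3, .1, .2),
                                     corXX = NULL,   # uncorrelated predictors
                                     nullEffect = 'slopeX = slopeZ',
                                     # constrain the first, second, and fourth slope to equality
                                     nullWhich = c(1, 2, 4),
                                     # criterion plus four predictors
                                     nIndicator = c(3, 3, 3, 3, 3),
                                     loadM = c(.5, .5, .5, .5, .5))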

Detect whether a slope differs across two or more groups

To perform a power analysis to detect whether a slope differs across two or more groups, use nullEffect = 'slopeA = slopeB'.

For instance, the following sets up four factors measured by 4, 5, 3, and 6 indicators (nIndicator = c(4, 5, 3, 6)). All indicators of the first factor load by .5, all indicators of the second factor by .6, all of the third factor by .7, and all of the fourth factor by .6 (loadM = c(.5, .6, .7, .6); see definition of a factor model). Recall that the first factor is treated as criterion (\(Y\)), and the remaining factors as predictors. The predictors are correlated according to the matrix defined in corXX. Note that this part of the factor model is identical across groups, which is indeed a prerequisite to allow for meaningful cross-group comparisons of slopes. However, separate regression relationships are defined for each group by using a list structure for the slopes argument: In the first group, the regression equation is \(\hat{Y} = .2 \cdot X_1 + .3 \cdot X_2 + .4 \cdot X_3\) (c(.2, .3, .4)), whereas in the second group it is \(\hat{Y} = .2 \cdot X_1 + .05 \cdot X_2 + .4 \cdot X_3\) (c(.2, .05, .4)). Finally, the required sample (type = 'a-priori') is requested to detect that the second slope (\(\beta_2\), nullWhich = 2) differs across groups (nullEffect = 'slopeA = slopeB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Furthermore, in multiple group models the N argument also needs to be provided as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

corXX <- matrix(c(
  #   X1    X2    X3
  c(1.00, 0.20, 0.30),  # X1
  c(0.20, 1.00, 0.10),  # X2
  c(0.30, 0.10, 1.00)   # X3
), ncol = 3, byrow = TRUE)
powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                     # define hypothesis
                                     slopes = list(
                                       # slopes in group 1 
                                       c(.2, .3, .4), 
                                       # slopes in group 2 
                                       c(.2, .05, .4)
                                       ),
                                     corXX = corXX,
                                     nullEffect = 'slopeA = slopeB',
                                     nullWhich = 2,
                                     # define measurement model
                                     nIndicator = c(4, 5, 3, 6),
                                     loadM = c(.5, .6, .7, .6)
                                     )

If there are more than two groups, the targeted slope is held equal across all groups by default. If the slope should only be constrained to equality in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, the following defines three equally sized groups with a distinct slope for \(X_2\), but only asks for the required sample to detect that the second slope (nullWhich = 2) in group 1 (of .3) differs from the second slope in group 3 (of .45; nullWhichGroups = c(1, 3)).

powerReg <- semPower.powerRegression(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
                                     # define hypothesis
                                     slopes = list(
                                       # slopes in group 1 
                                       c(.2, .3, .4), 
                                       # slopes in group 2 
                                       c(.2, .05, .4),
                                       # slopes in group 3 
                                       c(.2, .45, .4)
                                       ),
                                     corXX = NULL,  # independent predictors
                                     nullEffect = 'slopeA = slopeB',
                                     nullWhich = 2,
                                     nullWhichGroups = c(1, 3),
                                     # define measurement model
                                     nIndicator = c(4, 5, 3, 6),
                                     loadM = c(.5, .6, .7, .6)
                                     )

4.6 Mediation models

semPower.powerMediation is used to perform power analyses to reject hypotheses arising in a mediation context involving factors and/or observed variables. This includes the simple case of a single variable \(M\) mediating the relation between \(X\) and \(Y\) (X -> M -> Y), but may also refer to more complex mediation chains involving several mediators. Note that the power for mediation effects involving latent variables is only approximated. semPower.powerMediation provides interfaces to perform power analyses concerning the following hypotheses:

  • whether an indirect effect differs from zero (nullEffect = 'ind = 0').
  • whether an indirect effect differs across two or more groups (nullEffect = 'indA = indB').

Note that (for now) semPower only provides accurate power analyses for models without latent variables, where the test of the indirect effect is performed according to Tofighi and Kelley (2020). For single group models with latent variables (i.e., nullEffect = 'ind = 0'), power is only (sometimes roughly) approximated. Multiple group models involving latent variables (i.e., nullEffect = 'indA = indB') are currently not supported.3

semPower.powerMediation only addresses hypotheses in a mediation context. semPower provides other convenience functions for hypotheses arising in latent regression models, generic path models, and cross-lagged panel models. For hypotheses regarding global model fit, a model-free power analysis should be performed.

semPower.powerMediation expects the following arguments:

  • Either (assuming a simple mediation of the form X -> M -> Y):
    • bYX: the slope for \(X\) in the prediction of \(Y\) (X -> Y).
    • bMX: the slope for \(X\) in the prediction of \(M\) (X -> M).
    • bYM: the slope for \(M\) in the prediction of \(Y\) (M -> Y).
  • or (for more complex mediation mechanisms):
    • Beta: Matrix of regression weights connecting the latent factors (all-Y notation). Exogenous variables must be in the first row(s), so the upper triangular of Beta must be zero. See this chapter for details.
    • indirect: A list of vectors of size 2 indicating the elements of Beta that define the indirect effect (see the sketch following this list).
  • nullEffect: Defines the hypothesis of interest; one of 'ind = 0' or 'indA = indB'.
  • nullWhichGroups: Defines which groups are targeted when nullEffect = 'indA = indB'.
  • standardized: Defines whether all parameters are standardized (TRUE, the default) or unstandardized (FALSE).
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, assuming the order \(X\), \(M\), \(Y\) in case of a simple mediation or the order given in Beta.
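
For illustration, the following sketch shows how a simple X -> M -> Y mediation could equivalently be expressed through Beta and indirect (assuming the order X, M, Y and slope values mirroring the bYX/bMX/bYM example below):

# all-Y notation: row i, column j holds the slope of variable j in the prediction of variable i
Beta <- matrix(c(
  c(.00, .00, .00),   # X (exogenous, so the first row is zero)
  c(.30, .00, .00),   # M regressed on X (slope .3)
  c(.25, .40, .00)    # Y regressed on X (slope .25) and M (slope .4)
), ncol = 3, byrow = TRUE)
# the indirect effect is the product of X -> M (Beta[2, 1]) and M -> Y (Beta[3, 2])
indirect <- list(c(2, 1), c(3, 2))

powerMed <- semPower.powerMediation(type = 'a-priori', alpha = .05, power = .80,
                                    Beta = Beta,
                                    indirect = indirect,
                                    nullEffect = 'ind = 0',
                                    nIndicator = c(3, 4, 5),
                                    loadM = c(.5, .6, .7))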

semPower.powerMediation provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Detect whether an indirect effect differs from zero

To perform a power analysis to detect whether a mediation effect (= an indirect effect) differs from zero, use nullEffect = 'ind = 0', which is also the default and could thus be omitted. Note again that this yields approximate results for models involving latent variables and provides accurate results only for models without latent variables.

In the simple case of a mediation of the form X -> M -> Y, the relevant arguments specifying the size of the slopes and thus the magnitude of the mediation effect are bYX, bMX, and bYM. For instance, the following sets up three factors measured by 3, 4, and 5 indicators (nIndicator = c(3, 4, 5)). All indicators of the first factor load by .5, all indicators of the second factor by .6, and all indicators of the third factor by .7 (loadM = c(.5, .6, .7)). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. semPower.powerMediation treats the first factor as predictor \(X\), the second factor as mediator \(M\), and the third factor as criterion \(Y\), so in the present example the mediator is measured by 4 indicators loading by .6 each. The slopes for the relations X -> Y, X -> M, and M -> Y are defined to be .25 (bYX = .25), .3 (bMX = .3), and .4 (bYM = .4), respectively. Finally, the required sample size (type = 'a-priori') to detect that the indirect effect differs from zero (nullEffect = 'ind = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80) is requested.

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # define measurement model
                                     nIndicator = c(3, 4, 5),
                                     loadM = c(.5, .6, .7)
                                     )
summary(powerMed)

The results of the power analysis are printed by calling the summary method on powerMed, which, in this example, provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'post-hoc', alpha = .05, N = 300,
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # define measurement model
                                     nIndicator = c(3, 4, 5),
                                     loadM = c(.5, .6, .7)
                                     )

Now, summary(powerMed) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.
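
For illustration, a compromise power analysis for the same hypothesis might look as sketched below; this assumes that the convenience functions accept the abratio argument (the ratio of alpha to beta error) known from the model-free compromise power analyses together with a fixed N:

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     # (abratio is assumed to give the alpha/beta error ratio)
                                     type = 'compromise', abratio = 1, N = 300,
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # define measurement model
                                     nIndicator = c(3, 4, 5),
                                     loadM = c(.5, .6, .7)
                                     )
summary(powerMed)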

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.
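
For instance, the a-priori analysis from above could be repeated against the saturated comparison model by merely adding the comparison argument:

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # compare the H0 model against the saturated model
                                     comparison = 'saturated',
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # define measurement model
                                     nIndicator = c(3, 4, 5),
                                     loadM = c(.5, .6, .7)
                                     )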

Also, all slopes were treated as completely standardized parameters (by omitting the standardized argument, which defaults to TRUE). This implies that semPower defines the residual variances (in \(\Psi\)) such that all variances are 1. If the slopes should rather be treated as unstandardized, set standardized = FALSE, which implies an identity matrix for \(\Psi\).
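
For instance, treating the slopes from the example above as unstandardized parameters only requires adding standardized = FALSE:

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # treat the slopes as unstandardized parameters
                                     standardized = FALSE,
                                     # define measurement model
                                     nIndicator = c(3, 4, 5),
                                     loadM = c(.5, .6, .7)
                                     )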

If one or all of the involved variables are observed variables (instead of factors), the only change refers to the definition of the factor model. For instance, the following defines the first “factor”, which is treated as the predictor \(X\), as a dummy factor with a single indicator (nIndicator = c(1, 4, 5)) and a loading of 1 (loadM = c(1, .6, .7)), so it becomes equivalent to an observed variable:

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # define measurement model
                                     nIndicator = c(1, 4, 5),
                                     loadM = c(1, .6, .7)
                                     )

Similarly, a mediation model involving observed variables only could be defined using nIndicator = c(1, 1, 1) and loadM = c(1, 1, 1) or, more simply, by just providing Lambda = diag(3) instead of nIndicator and loadM.
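
As a sketch, such a manifest-only version of the example above could be set up as follows:

powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     bYX = .25, 
                                     bMX = .3, 
                                     bYM = .4,
                                     nullEffect = 'ind = 0',
                                     # treat X, M, and Y as observed variables
                                     Lambda = diag(3)
                                     )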

semPower.powerMediation provides an alternative way to specify mediation structures that go beyond the simple case of a X -> M -> Y mediation. For illustration, suppose there are four variables, the hypothesized structure is X -> M1 -> M2 -> Y, and that the indirect effect of interest is given by the slopes connecting X -> M1, M1 -> M2, and M2 -> Y. To reflect this type of mediation, the Beta and indirect arguments need to be set. Below, the regression relations between the factors are defined in Beta (see this chapter for details), implying a slope for X -> M1 of .2, for M1 -> M2 of .3, and for M2 -> Y of .4 (and all other slopes being equal to zero). indirect is a list of vectors of size two, indicating the elements of Beta that define the indirect effect, so in this example, the indirect effect of interest comprises X -> M1 (c(2, 1)), M1 -> M2 (c(3, 2)), and M2 -> Y (c(4, 3)). Further, all variables are defined to be observed variables, rather than latent factors (Lambda = diag(4)).

Beta <- matrix(c(
  c(.00, .00, .00, .00),       # X  = .00*X + .00*M1 + .00*M2 + .00*Y 
  c(.20, .00, .00, .00),       # M1 = .20*X + .00*M1 + .00*M2 + .00*Y 
  c(.00, .30, .00, .00),       # M2 = .00*X + .30*M1 + .00*M2 + .00*Y 
  c(.00, .00, .40, .00)        # Y  = .00*X + .00*M1 + .40*M2 + .00*Y 
), byrow = TRUE, ncol = 4)
powerMed <- semPower.powerMediation(
                                     # define type of power analysis
                                     type = 'a-priori', alpha = .05, power = .80,
                                     # define hypothesis
                                     Beta = Beta, 
                                     indirect = list(c(2, 1), c(3, 2), c(4, 3)),
                                     nullEffect = 'ind = 0',
                                     # define measurement model
                                     Lambda = diag(4)
                                     )

Detect whether an indirect effect differs across groups

To perform a power analysis to detect whether a mediation effect (= an indirect effect) differs across groups, use nullEffect = 'indA = indB'. This is currently only possible for models involving only observed variables.

For instance, the following requests the required sample (type = 'a-priori') to detect that the indirect effect in group 1 differs from the one in group 2 (nullEffect = 'indA = indB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). bYX, bMX, and bYM are now lists comprising two elements, the first defining the respective slope in the first group, the second the respective slope in the second group. Thus, the slopes in the first and second group for X -> M are .2 and .4, for M -> Y .3 and .5, and for X -> Y .25 and .25, respectively. Lambda = diag(3) implies that all variables are observed. Furthermore, in multiple group models, the N argument also needs to be provided as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

powerMed <- semPower.powerMediation(type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                    nullEffect = 'indA = indB',
                                    bYX = list(.25, .25),
                                    bMX = list(.2, .4),
                                    bYM = list(.3, .5),
                                    Lambda = diag(3)
                                   )

The same as above can also be achieved using the Beta and indirect arguments, which generally offer greater flexibility. Key is to provide Beta as a list, where each component reflects the regression relationships for each group.

# Beta for group 1
Beta1 <- matrix(c(
  c(.00, .00, .00),    # X = .00*X + .00*M + .00*Y 
  c(.20, .00, .00),    # M = .20*X + .00*M + .00*Y 
  c(.25, .30, .00)     # Y = .25*X + .30*M + .00*Y 
), byrow = TRUE, ncol = 3)
# Beta for group 2
Beta2 <- matrix(c(
  c(.00, .00, .00),    # X = .00*X + .00*M + .00*Y 
  c(.40, .00, .00),    # M = .40*X + .00*M + .00*Y 
  c(.25, .50, .00)     # Y = .25*X + .50*M + .00*Y 
), byrow = TRUE, ncol = 3)
powerMed <- semPower.powerMediation(type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                    nullEffect = 'indA = indB',
                                    Beta = list(Beta1, Beta2),
                                    indirect = list(c(2, 1), c(3, 2)),
                                    Lambda = diag(3)
                                    )

4.7 Generic path models

semPower.powerPath is used to perform power analyses to reject hypotheses arising in a generic structural equation model specifying regression relations between the factors or between factors and observed covariates via the Beta and Psi matrices (see this chapter for details). semPower.powerPath provides interfaces to perform power analyses concerning the following hypotheses:

  • whether a slope differs from zero (nullEffect = 'beta = 0').
  • whether two slopes differ from each other (nullEffect = 'betaX = betaZ').
  • whether a slope differs across two or more groups (nullEffect = 'betaA = betaB').

semPower.powerPath offers a generic and flexible way to address hypotheses involving regression relationships, so that power analyses can be performed for hypotheses not covered by a more specific convenience function, such as semPower.powerRegression, semPower.powerMediation, and semPower.powerCLPM.

semPower.powerPath expects the following arguments:

  • Beta: Matrix of regression slopes (all-Y notation); see this chapter for examples.
  • Psi: Variance-covariance matrix of latent (residual) factors or NULL when all covariances shall be zero.
  • nullEffect: Defines the hypothesis of interest; one of 'beta = 0', 'betaX = betaZ', or 'betaA = betaB'.
  • nullWhich: Defines which slope(s) is targeted by the hypothesis defined in nullEffect.
  • nullWhichGroups: Defines which groups are targeted when nullEffect = 'betaA = betaB'.
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model ordered as implied by Beta.

semPower.powerPath provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.

Detect whether a slope differs from zero

To perform a power analysis to detect whether a slope differs from zero, use nullEffect = 'beta = 0' (which is also the default and could thus be omitted).

semPower.powerPath requires the regression relations between the factors to be specified via the Beta argument (see this chapter for details). For instance, in the example below, there are four factors (equal to the number of columns/rows of Beta). The structure of Beta implies the relations \(F_2 = .2 \cdot F_1\), \(F_3 = .3 \cdot F_2\), and \(F_4 = .1 \cdot F_1 + .4 \cdot F_3\). nullWhich is a vector of size two, indicating the element of Beta (i.e. the specific slope) that is targeted by the null hypothesis. Below, the required sample (type = 'a-priori') is requested to detect that the slope of \(F_1\) in the prediction of \(F_4\) (nullWhich = c(4, 1)) differs from zero (nullEffect = 'beta = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Finally, the measurement model for the factors needs to be defined. The order of factors matches the order in Beta: The first factor is measured by 3 indicators, the second factor by 4 indicators, the third by 5, and the fourth by 6 indicators (nIndicator = c(3, 4, 5, 6)). The indicators of the first factor load by .7, those of the second by .5, of the third by .6, and of the fourth by .8 (loadM = c(.7, .5, .6, .8)). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings.

Beta <- matrix(c(
  c(.00, .00, .00, .00),       # f1 = .00*f1 + .00*f2 + .00*f3 + .00*f4
  c(.20, .00, .00, .00),       # f2 = .20*f1 + .00*f2 + .00*f3 + .00*f4
  c(.00, .30, .00, .00),       # f3 = .00*f1 + .30*f2 + .00*f3 + .00*f4
  c(.10, .00, .40, .00)        # f4 = .10*f1 + .00*f2 + .40*f3 + .00*f4
), byrow = TRUE, ncol = 4)
powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                Beta = Beta,
                                nullWhich = c(4, 1),
                                # define measurement model
                                nIndicator = c(3, 4, 5, 6),
                                loadM = c(.7, .5, .6, .8)
                                )
summary(powerPath)

The results of the power analysis are printed by calling the summary method on powerPath, which, in this example, provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'post-hoc', alpha = .05, N = 300,
                                # define hypothesis
                                Beta = Beta,
                                nullEffect = 'beta = 0',
                                nullWhich = c(4, 1),
                                # define measurement model
                                nIndicator = c(3, 4, 5, 6),
                                loadM = c(.7, .5, .6, .8)
                                )

Now, summary(powerPath) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.

In the example above, there were no correlations between the factors beyond those implied by the regression relations. The Psi argument can be used to add additional sources of covariation. For instance, the following defines four factors with a regression structure (Beta) corresponding to \(F_3 = .2 \cdot F_1 + .3 \cdot F_2\) and \(F_4 = .3 \cdot F_1 + .4 \cdot F_2\). In addition, Psi defines a correlation between \(F_1\) and \(F_2\) of .25 and a (residual) correlation between \(F_3\) and \(F_4\) of .3.

Beta <- matrix(c(
  c(.00, .00, .00, .00),       # f1 = .00*f1 + .00*f2 + .00*f3 + .00*f4
  c(.00, .00, .00, .00),       # f2 = .00*f1 + .00*f2 + .00*f3 + .00*f4
  c(.20, .30, .00, .00),       # f3 = .20*f1 + .30*f2 + .00*f3 + .00*f4
  c(.30, .40, .00, .00)        # f4 = .30*f1 + .40*f2 + .00*f3 + .00*f4
), byrow = TRUE, ncol = 4)
Psi <- matrix(c(
  c(1.0, .25, .00, .00),       # f1
  c(.25, 1.0, .00, .00),       # f2
  c(.00, .00, 1.0, .30),       # f3
  c(.00, .00, .30, 1.0)        # f4
), byrow = TRUE, ncol = 4)
powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                Beta = Beta,
                                Psi = Psi,
                                nullEffect = 'beta = 0',
                                nullWhich = c(4, 1),
                                # define measurement model
                                nIndicator = c(3, 4, 5, 6),
                                loadM = c(.7, .5, .6, .8)
                                )

Note that all examples above treated all parameters as completely standardized (by omitting the standardized argument, which defaults to TRUE). When standardized = TRUE, semPower defines the residual variances in \(\Psi\) such that all variances are 1. When Psi is provided, the diagonal elements are ignored and all off-diagonal elements are treated as (residual-) correlations. When standardized = FALSE, Psi is unaltered (or replaced by an identity matrix, when Psi = NULL).

Further, all examples above performed a power analysis by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.

If one (or all) of the involved variables is an observed variable (instead of a factor), the only change refers to the definition of the factor model. For instance, the following defines the first and the third “factor” as dummy factors with a single indicator (nIndicator = c(1, 4, 1, 6)) and a loading of 1 (loadM = c(1, .5, 1, .8)), so these become equivalent to observed variables:

powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                Beta = Beta,
                                nullWhich = c(4, 1),
                                # define measurement model
                                nIndicator = c(1, 4, 1, 6),
                                loadM = c(1, .5, 1, .8)
                                )

Similarly, a path model involving observed variables only could be defined using nIndicator = c(1, 1, 1, 1) and loadM = c(1, 1, 1, 1) or, more simply, by just providing Lambda = diag(4) instead of nIndicator and loadM.

Detect whether two slopes differ from each other

To perform a power analysis to detect whether two slopes differ from each other, use nullEffect = 'betaX = betaZ'.

For instance, the following defines the regression relationships (Beta) between four factors in the same way as described in detail above to be \(F_3 = .2 \cdot F_1 + .3 \cdot F_2\) and \(F_4 = .3 \cdot F_1 + .4 \cdot F_2\). The four factors are measured by 3, 4, 5, and 6 indicators (nIndicator = c(3, 4, 5, 6)), which load by .7, .5, .6, and .8, respectively (loadM = c(.7, .5, .6, .8), see definition of the factor model). The required sample size (type = 'a-priori') to detect that the slopes for \(F_1\) and \(F_2\) in the prediction of \(F_4\) (nullWhich = list(c(4, 1), c(4, 2))) differ (nullEffect = 'betaX = betaZ') on alpha = .05 (alpha = .05) with a power of 80% (power = .80) is requested.

Beta <- matrix(c(
  c(.00, .00, .00, .00),       # f1 = .00*f1 + .00*f2 + .00*f3 + .00*f4
  c(.00, .00, .00, .00),       # f2 = .00*f1 + .00*f2 + .00*f3 + .00*f4
  c(.20, .30, .00, .00),       # f3 = .20*f1 + .30*f2 + .00*f3 + .00*f4
  c(.30, .40, .00, .00)        # f4 = .30*f1 + .40*f2 + .00*f3 + .00*f4
), byrow = TRUE, ncol = 4)
powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                Beta = Beta,
                                nullEffect = 'betaX = betaZ',
                                nullWhich = list(c(4, 1), c(4, 2)),
                                # define measurement model
                                nIndicator = c(3, 4, 5, 6),
                                loadM = c(.7, .5, .6, .8)
                                )

Note that nullWhich is now a list of vectors defining which slopes should be set to equality. nullWhich = list(c(4, 1), c(4, 2)) says that the slopes for \(F_1\) and \(F_2\) in the prediction of \(F_4\) shall be equal.

nullWhich can also comprise more than two elements to test for the equality of more than two slopes. For instance, with the four factors above, using nullWhich = list(c(4, 1), c(4, 2), c(3, 1)) would constrain the slopes for \(F_1\) and \(F_2\) in the prediction of \(F_4\) as well as the slope for \(F_1\) in the prediction of \(F_3\) to equality, as sketched below.
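
For illustration, reusing the Beta matrix from above, such a constraint on three slopes could be requested as follows:

powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                Beta = Beta,
                                nullEffect = 'betaX = betaZ',
                                # constrain three slopes to equality
                                nullWhich = list(c(4, 1), c(4, 2), c(3, 1)),
                                # define measurement model
                                nIndicator = c(3, 4, 5, 6),
                                loadM = c(.7, .5, .6, .8)
                                )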

Detect whether a slope differs across two or more groups

To perform a power analysis to detect whether a slope differs across two or more groups, use nullEffect = 'betaA = betaB'.

For instance, the following defines the regression relationships between three factors in the same way as described in detail above separately for two groups. In the first group (Beta1), the relations are \(F_2 = .2 \cdot F_1\) and \(F_3 = .3 \cdot F_1 + .5 \cdot F_2\), whereas in the second group (Beta2) these are \(F_2 = .4 \cdot F_1\) and \(F_3 = .3 \cdot F_1 + .5 \cdot F_2\). In multiple group models, Beta must be provided as a list, where each component defines the regression relations for a specific group (Beta = list(Beta1, Beta2)). The measurement model is identical across groups: All factors are measured by 5 indicators (nIndicator = c(5, 5, 5)) which load by .7 on the first, by .5 on the second, and by .6 on the third factor (loadM = c(.7, .5, .6)). Then, the required sample size (type = 'a-priori') to detect that the slope of \(F_1\) in the prediction of \(F_2\) (nullWhich = c(2, 1)) differs across groups (nullEffect = 'betaA = betaB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80) is requested. Furthermore, in multiple group models, the N argument also needs to be provided as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

# beta in group 1
Beta1 <- matrix(c(
  c(.00, .00, .00),       # f1 = .00*f1 + .00*f2 + .00*f3
  c(.20, .00, .00),       # f2 = .20*f1 + .00*f2 + .00*f3
  c(.30, .50, .00)        # f3 = .30*f1 + .50*f2 + .00*f3
), byrow = TRUE, ncol = 3)
# beta in group 2
Beta2 <- matrix(c(
  c(.00, .00, .00),       # f1 = .00*f1 + .00*f2 + .00*f3
  c(.40, .00, .00),       # f2 = .40*f1 + .00*f2 + .00*f3
  c(.30, .50, .00)        # f3 = .30*f1 + .50*f2 + .00*f3
), byrow = TRUE, ncol = 3)
powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                # define hypothesis
                                Beta = list(Beta1, Beta2),
                                nullEffect = 'betaA = betaB',
                                nullWhich = c(2, 1),
                                # define measurement model
                                nIndicator =  c(5, 5, 5),
                                loadM =  c(.7, .5, .6)
                                )

If there are more than two groups, the targeted slope is held equal across all groups by default. If the slope should only be constrained to equality in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, the following defines three equally sized groups with a distinct slope for \(F_1\) in the prediction of \(F_2\), but only asks for the required sample to detect that this slope (nullWhich = c(2, 1)) in group 1 (of .20) differs from the corresponding slope in group 3 (of .40; nullWhichGroups = c(1, 3)).

# beta in group 1
Beta1 <- matrix(c(
  c(.00, .00),      # f1 = .00*f1 + .00*f2
  c(.20, .00)       # f2 = .20*f1 + .00*f2
), byrow = TRUE, ncol = 2)
# beta in group 2
Beta2 <- matrix(c(
  c(.00, .00),      # f1 = .00*f1 + .00*f2
  c(.30, .00)       # f2 = .30*f1 + .00*f2
), byrow = TRUE, ncol = 2)
# beta in group 3
Beta3 <- matrix(c(
  c(.00, .00),      # f1 = .00*f1 + .00*f2
  c(.40, .00)       # f2 = .40*f1 + .00*f2
), byrow = TRUE, ncol = 2)
powerPath <- semPower.powerPath(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
                                # define hypothesis
                                Beta = list(Beta1, Beta2, Beta3),
                                nullEffect = 'betaA = betaB',
                                nullWhich = c(2, 1),
                                nullWhichGroups =  c(1, 3),
                                # define measurement model
                                nIndicator =  c(5, 5),
                                loadM =  c(.7, .5)
                                )

4.8 Multiple group invariance

semPower.powerMI is used to perform power analyses for hypotheses arising in multigroup measurement invariance models. The typical - but not in all parts necessary - sequence is (a) configural, (b) metric, (c) scalar, and (d) residual invariance concerning the measurement part, and (e) latent variances, (f) latent covariances, and (g) latent means concerning the structural part, where each level of invariance is usually compared against the previous level (e.g., scalar vs. metric). semPower.powerMI provides interfaces to perform power analyses concerning the hypothesis whether a particular level of invariance is tenable, implementing a model-based approach, so that non-invariant parameters need to be specified. When one does not expect (or is not interested in or does not have sufficiently specific hypotheses on) measurement non-invariance for certain parameters, but rather assumes that non-invariance spreads across multiple parameters (say, across most or all loadings), one should consider performing a model-free power analysis concerning the overall difference between two models.

semPower.powerMI only addresses hypotheses concerning multigroup measurement invariance. See the corresponding chapter for other hypotheses arising in multigroup settings.

semPower.powerMI expects the following arguments:

  • comparison: Defines the comparison model (see below for valid arguments).
  • nullEffect: Defines the level of invariance of interest (see below for valid arguments).
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, which must include a list structure for at least some model parameters, so that parameters that differ across groups are defined.

There are two ways to specify the models defined in the comparison and the nullEffect arguments. Either, one may specify a specific level of invariance that includes all previous levels:

  • 'configural': no invariance constraints. Shows the same fit as the saturated model, so this only affects the df.
  • 'metric': all loadings are restricted to equality.
  • 'scalar': all loadings and (indicator-)intercepts are restricted to equality.
  • 'residual': all loadings, (indicator-)intercepts, and (indicator-)residuals are restricted to equality.
  • 'covariances': all loadings, (indicator-)intercepts, (indicator-)residuals, and factor covariances are restricted to equality.
  • 'means': all loadings, (indicator-)intercepts, (indicator-)residuals, factor covariances, and latent means are restricted to equality.

Alternatively, the models can also be defined using lavaan style group.equal restrictions as a vector to allow for greater flexibility:

  • 'none': no invariance constraints and thus representing a configural invariance model.
  • c('loadings'): all loadings are restricted to equality.
  • c('loadings', 'intercepts'): all loadings and (indicator-)intercepts are restricted to equality.
  • c('loadings', 'intercepts', 'residuals'): all loadings, (indicator-)intercepts, and (indicator-)residuals are restricted to equality.
  • c('loadings', 'residuals'): all loadings and (indicator-)residuals are restricted to equality.
  • c('loadings', 'intercepts', 'means'): all loadings, (indicator-)intercepts, and latent factor means are restricted to equality.

Note that semPower.powerMI implements variance scaling of the factors (the variances of all factors are equal to 1 in all groups), so invariance of variances ('lv.variances') is always met.

semPower.powerMI provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted'). Note that multiple group constraints are provided to lavaan via its group.equal argument, which is not returned by semPower.powerMI.
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.

Detect metric non-invariance

To perform a power analysis to detect whether a metric invariance model is significantly worse than a configural invariance model, use nullEffect = 'metric' in conjunction with comparison = 'configural', and define the factor model in a way that at least one loading differs across groups (so that metric invariance is violated to the extent as defined by the difference in the loadings across groups).

For instance, the following requests the required sample size (type = 'a-priori') to detect that a metric invariance model (nullEffect = 'metric') differs from a configural invariance model (comparison = 'configural') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The model comprises a single factor, which is measured by 5 indicators in both groups (nIndicator = list(5, 5)). In the first group, all indicators load by .5, whereas in the second group, all indicators load by .6 (loadM = list(.5, .6)). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. Furthermore, in multiple group models, the N argument also needs to be supplied as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group.

powerMI <- semPower.powerMI(
                           # define type of power analysis
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis
                           comparison = 'configural', 
                           nullEffect = 'metric',
                           # define measurement model
                           nIndicator = list(5, 5),
                           loadM = list(.5, .6))
summary(powerMI)

The results of the power analysis are printed by calling the summary method on powerMI, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly, where N now provides the number of observations for each group:

powerMI <- semPower.powerMI(
                           # define type of power analysis
                           type = 'post-hoc', alpha = .05, N = list(300, 400),
                           # define hypothesis
                           comparison = 'configural', 
                           nullEffect = 'metric',
                           # define factor model
                           nIndicator = list(5, 5),
                           loadM = list(.5, .6))

Now, summary(powerMI) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.
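
For instance, a compromise power analysis for the same hypothesis might look roughly as follows; this sketch assumes that the abratio argument (the ratio of alpha to beta error) from the model-free compromise analyses also applies here, with N giving the number of observations per group:

powerMI <- semPower.powerMI(
                           # define type of power analysis
                           # (abratio is assumed to give the alpha/beta error ratio)
                           type = 'compromise', abratio = 1, N = list(300, 400),
                           # define hypothesis
                           comparison = 'configural', 
                           nullEffect = 'metric',
                           # define measurement model
                           nIndicator = list(5, 5),
                           loadM = list(.5, .6))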

The values provided to the comparison and nullEffect arguments can also be specified according to lavaan conventions as vectors, so the following yields the same results as above:

powerMI <- semPower.powerMI(
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis
                           comparison = 'none', 
                           nullEffect = c('loadings'),
                           nIndicator = list(5, 5),
                           loadM = list(.5, .6))

Note that the arguments specifying a factor model must be provided as lists, where each component refers to a specific group. For instance, the following defines a two-factor model, where the first factor is measured by 3 indicators and the second factor is measured by 4 indicators (in both groups, nIndicator = list(c(3, 4), c(3, 4))). In the first group, all loadings are equal to .5. In the second group, the loadings on the first factor are also .5, but the loadings on the second factor are .6 (loadM = list(c(.5, .5), c(.5, .6))). In both groups, the factor correlation is .3 (Phi = list(.3, .3)).

powerMI <- semPower.powerMI(
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           comparison = 'configural', 
                           nullEffect = 'metric',
                           # define two factors 
                           nIndicator = list(c(3, 4), c(3, 4)),
                           loadM = list(c(.5, .5), c(.5, .6)),
                           Phi = list(.3, .3))

Measurement parameters that should be equal across groups can also be provided without the list structure, so the same as above can be achieved by using:

powerMI <- semPower.powerMI(
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           comparison = 'configural', 
                           nullEffect = 'metric',
                           # define two factors
                           nIndicator = c(3, 4),
                           loadM = list(c(.5, .5), c(.5, .6)),
                           Phi = list(.3, .3)) 

In the examples above, all indicators of a certain factor exhibited measurement non-invariance. If only a subset of indicators should show a different loading by group, specify the factor model using the loadings argument. For instance, the following defines a two-factor model with a factor correlation of .3 (Phi = list(.3, .3)) in both groups. The first factor is measured by 3 indicators and the second factor is also measured by 3 indicators (in both groups). In the first group, the loadings on the first factor are .7, .6, and .5, and those on the second factor are .5, .5, and .7. In the second group, the loadings on the first factor are .7, .7, and .5, and those on the second factor are .5, .5, and .6. Thus, there are group differences concerning the loadings of the second indicator of the first factor (.6 vs .7) and concerning the loadings of the third indicator of the second factor (.7 vs .6).

powerMI <- semPower.powerMI(
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           comparison = 'configural', 
                           nullEffect = 'metric',
                           # define measurement model 
                           loadings = list(
                             # loadings on the first and second factor in the first group
                             list(c(.7, .6, .5), 
                                  c(.5, .5, .7)),
                             # loadings on the first and second factor in the second group
                             list(c(.7, .7, .5), 
                                  c(.5, .5, .6))
                             ),
                           Phi = list(.3, .3))

Detect scalar non-invariance

Detecting failure of another level of invariance proceeds largely in the same way as described for the metric invariance model above. The differences are that nullEffect now refers to the level of invariance of interest (such as nullEffect = 'scalar') and that the comparison model would usually refer to the model one level lower in the hierarchy (such as comparison = 'metric'). In addition, when comparing a scalar against a metric invariance model, the factor model should be defined in a way that metric invariance holds (identical loadings across groups), whereas there must be at least one difference across groups concerning the indicator intercepts (tau).

For instance, the following requests the required sample (type = 'a-priori') to detect that a scalar invariance model (nullEffect = 'scalar') differs from a metric invariance model (comparison = 'metric') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The model comprises a single factor, which is measured by 5 indicators that all load by .5 in both groups (nIndicator = 5 and loadM = .5; see definition of the factor model). Importantly, the indicator intercepts (tau) partly differ across groups: the second intercept is 0 in the first but .1 in the second group, and the third intercept is 0 in the first but -.3 in the second group:

powerMI <- semPower.powerMI(
                           # define type of power analysis
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis
                           comparison = 'metric', 
                           nullEffect = 'scalar',
                           # define measurement model (same for all groups)
                           nIndicator = 5,
                           loadM = .5,
                           # define indicator intercepts
                           tau = list(
                             # intercepts in the first group
                             c(0, 0, 0, 0, 0),
                             # intercepts in the second group
                             c(0, .1, -.3, 0, 0)
                           ))

Equivalently, comparison and nullEffect can also be provided according to lavaan conventions:

powerMI <- semPower.powerMI(
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis
                           comparison = c('loadings'), 
                           nullEffect = c('loadings', 'intercepts'),
                           # define measurement model
                           nIndicator = 5,
                           loadM = .5,
                           tau = list(
                             c(0, 0, 0, 0, 0),
                             c(0, .1, -.3, 0, 0)
                           ))

Detect residual non-invariance

Detecting residual non-invariance again proceeds largely in the same way as described above. The main difference is that the residual variances need to be provided via the Theta matrix.

For instance, the following uses lavaan style restrictions to request the required sample (type = 'a-priori') to detect that a residual invariance model (nullEffect = c('loadings', 'residuals')) differs from a metric invariance model (comparison = c('loadings')) on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Importantly, the residual variances (Theta) partly differ across groups: In the first group, all residual variances are .75, whereas in the second group, the residual variances are .70, .50, and .60.

Theta1 <- matrix(c(
  c(.75, 0, 0),
  c(0, .75, 0),
  c(0, 0, .75)
), ncol = 3, byrow = TRUE)
Theta2 <- matrix(c(
  c(.70, 0, 0),
  c(0, .50, 0),
  c(0, 0, .60)
), ncol = 3, byrow = TRUE)
powerMI <- semPower.powerMI(
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis
                           comparison = c('loadings'), 
                           nullEffect = c('loadings', 'residuals'),
                           # define measurement model
                           nIndicator = 3,
                           loadM = .5, 
                           Theta = list(Theta1, Theta2))

Note that if defining the invariance models using the predefined constants (nullEffect = 'residual'), invariance of intercepts is also assumed, so the proper comparison model would be comparison = 'scalar'.
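
For illustration, the same hypothesis might be expressed via the predefined constants roughly as follows; this is a sketch assuming that explicitly providing invariant intercepts (tau) is sufficient to define the scalar comparison model, so that the differing residual variances remain the only source of misfit:

powerMI <- semPower.powerMI(
                           # define type of power analysis
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis via the predefined constants
                           comparison = 'scalar', 
                           nullEffect = 'residual',
                           # define measurement model
                           nIndicator = 3,
                           loadM = .5,
                           # invariant intercepts, so only the residual variances differ
                           tau = list(c(0, 0, 0), c(0, 0, 0)),
                           Theta = list(Theta1, Theta2))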

Detect whether latent means differ across groups

Detecting that latent factor means differ across groups again proceeds largely in the same way as described above for tests of other levels of invariance. However, the model definition must include a statement about indicator intercepts (tau) and latent means (Alpha).

For instance, the following requests the required sample (type = 'a-priori') to detect that the latent means on a single factor differ across groups (nullEffect = c('loadings', 'intercepts', 'means')) in comparison to a scalar invariance model (comparison = c('loadings', 'intercepts')) on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The model comprises a single factor, which is measured by 5 indicators that all load by .5 in both groups (nIndicator = 5 and loadM = .5; see definition of the factor model). All indicator intercepts (tau) equal zero in both groups. Importantly, the latent mean (Alpha) is 0 in the first, but .5 in the second group (thus corresponding to a standardized mean difference of .5, because the factor variances are fixed to 1):

powerMI <- semPower.powerMI(
                           # define type of power analysis
                           type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                           # define hypothesis
                           comparison = c('loadings', 'intercepts'), 
                           nullEffect = c('loadings', 'intercepts', 'means'),
                           # define measurement model (same for all groups)
                           nIndicator = 5,
                           loadM = .5,
                           # define indicator intercepts
                           tau = list(
                             c(0, 0, 0, 0, 0),
                             c(0, 0, 0, 0, 0)
                           ),
                           # define latent means in the first and the second group
                           Alpha = list(
                             c(0.0), 
                             c(0.5)
                           ))

4.9 Longitudinal invariance

semPower.powerLI is used to perform power analyses for hypotheses arising in models assessing longitudinal invariance involving the repeated assessment of a single attribute. The typical - but not in all parts necessary - sequence is (a) configural, (b) metric, (c) scalar, and (d) residual invariance concerning the measurement part, and (e) latent variances, (f) latent covariances, and (g) latent means concerning the structural part, where each level of invariance is usually compared against the previous level (e.g., scalar vs. metric). semPower.powerLI provides interfaces to perform power analyses concerning the hypothesis whether a particular level of invariance is tenable, implementing a model-based approach, so that non-invariant parameters need to be specified. When one does not expect (or is not interested in or does not have sufficiently specific hypotheses on) measurement non-invariance for certain parameters, but rather assumes that non-invariance spreads across multiple parameters (say, across most or all loadings), one should consider performing a model-free power analysis concerning the overall difference between two models.

semPower.powerLI only addresses hypotheses concerning longitudinal measurement invariance. For multigroup invariance, see semPower.powerMI.

semPower.powerLI expects the following arguments:

  • comparison: Defines the comparison model (see below for valid arguments).
  • nullEffect: Defines the level of invariance of interest (see below for valid arguments).
  • Phi: Defines the factor correlation matrix (i.e., the autocorrelations across time), a single number when there are exactly two measurements, or NULL for uncorrelated factors.
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model.

There are two ways to specify the models defined in the comparison and the nullEffect arguments. Either, one may specify a specific level of invariance that includes all previous levels:

  • 'configural': no invariance constraints. Shows the same fit as the saturated model, so this only affects the df.
  • 'metric': all loadings are restricted to equality.
  • 'scalar': all loadings and (indicator-)intercepts are restricted to equality.
  • 'residual': all loadings, (indicator-)intercepts, and (indicator-)residual variances are restricted to equality.
  • 'covariances': all loadings, (indicator-)intercepts, (indicator-)residual variances, and latent covariances are restricted to equality.
  • 'means': all loadings, (indicator-)intercepts, (indicator-)residual variances, latent covariances, and latent means are restricted to equality.

Alternatively, the models can also be defined using lavaan style restrictions as a vector to allow for greater flexibility, for instance:

  • 'none': no invariance constraints and thus representing a configural invariance model.
  • c('loadings'): all loadings are restricted to equality.
  • c('loadings', 'intercepts'): all loadings and (indicator-)intercepts are restricted to equality.
  • c('loadings', 'intercepts', 'residuals'): all loadings, (indicator-)intercepts, and (indicator-)residual variances are restricted to equality.
  • c('loadings', 'residuals'): all loadings and (indicator-)residual variances are restricted to equality.
  • c('loadings', 'intercepts', 'means'): all loadings, (indicator-)intercepts, and latent factor means are restricted to equality.
  • c('loadings', 'residuals', 'lv.covariances'): all loadings, (indicator-)residual variances, and latent factor covariances are restricted to equality.

semPower.powerLI implements variance scaling of the factors (the variances of all factors are equal to 1), so invariance of variances ('lv.variances') is always met. Latent means are identified using single occasion identification, so that the factor mean at the first measurement occasion is fixed to zero.

semPower.powerLI provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.

Detect metric non-invariance

To perform a power analysis to detect whether a metric invariance model is significantly worse than a configural invariance model, use nullEffect = 'metric' in conjunction with comparison = 'configural', and define the factor model in a way that at least one loading differs across measurement occasions (so that metric invariance is violated to the extent defined by the difference in the loadings across measurements).

For instance, the following requests the required sample size (type = 'a-priori') to detect that a metric invariance model (nullEffect = 'metric') differs from a configural invariance model (comparison = 'configural') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The model comprises an attribute measured at two time points which are correlated by .3 (Phi = .3). The attribute is measured by 5 indicators (at both the first and the second measurement, nIndicator = c(5, 5)). At the first measurement, all indicators load by .5, whereas at the second measurement, all indicators load by .6 (loadM = c(.5, .6)). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. In addition, the model comprises autocorrelated indicator residuals across time, because the autocorResiduals argument (which defaults to TRUE) is omitted.

powerLI <- semPower.powerLI(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = 'configural', 
  nullEffect = 'metric',
  # define measurement model
  nIndicator = c(5, 5),
  loadM = c(.5, .6),
  Phi = .3
)
summary(powerLI)

The results of the power analysis are printed by calling the summary method on powerLI, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerLI <- semPower.powerLI(
  # define type of power analysis
  type = 'post-hoc', alpha = .05, N = 400,
  # define hypothesis
  comparison = 'configural', 
  nullEffect = 'metric',
  # define measurement model
  nIndicator = c(5, 5),
  loadM = c(.5, .6),  # (time 1, time 2)
  Phi = .3
)

Now, summary(powerLI) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.

The values provided to the comparison and nullEffect arguments can also be specified according to lavaan conventions as vectors, so the following yields the same results as above:

powerLI <- semPower.powerLI(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = 'none', 
  nullEffect = 'loadings',
  # define measurement model
  nIndicator = c(5, 5),
  loadM = c(.5, .6),  # (time 1, time 2)
  Phi = .3
)

In the example above, all indicators exhibited measurement non-invariance. If only a subset of indicators should show a different loading across time, specify the factor model using the loadings argument. For instance, the following defines a single-factor model measured by three indicators at three time points. At the first measurement occasion, the loadings are .7, .6, and .5, at the second measurement occasion .7, .5, .5, and at the third measurement occasion .7, .4, .5. Thus, there are only differences concerning the loadings of the second indicator. In addition, because the factor is measured at three time points, Phi now becomes a factor correlation matrix, in this case specifying correlations between the first and second measurement as well as the second and third measurement of .3, and between the first and third measurement of .1.

Phi <- matrix(c(
  c(1, .3, .1),
  c(.3, 1, .3),
  c(.1, .3, 1)
), ncol = 3, byrow = TRUE)
powerLI <- semPower.powerLI(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = 'none', 
  nullEffect = c('loadings'),
  # define measurement model
  loadings = list(
    c(.7, .6, .5),  # time 1
    c(.7, .5, .5),  # time 2
    c(.7, .4, .5)   # time 3
  ),
  Phi = Phi
)

Detect scalar non-invariance

Detecting failure of another level of invariance proceeds largely in the same way as described for the metric invariance model above. The differences are that nullEffect now refers to the level of invariance of interest (such as nullEffect = 'scalar') and that the comparison model would usually refer to the model one level lower in the hierarchy (such as comparison = 'metric'). In addition, when comparing a scalar against a metric invariance model, the factor model should be defined in a way that metric invariance holds (identical loadings across measurement occasions), whereas there must be at least one difference across occasions concerning the indicator intercepts (tau).

For instance, the following requests the required sample (type = 'a-priori') to detect that a scalar invariance model (nullEffect = 'scalar') differs from a metric invariance model (comparison = 'metric') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The model comprises an attribute assessed at two time points, which is measured by 3 indicators that all load by .5 at both measurement occasions (nIndicator = c(3, 3); see definition of the factor model). Importantly, the indicator intercepts (tau) partly differ across time: the intercept of the second indicator is 0 at the first measurement, but .2 at the second measurement:

powerLI <- semPower.powerLI(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = 'metric', 
  nullEffect = 'scalar',
  # define measurement model
  nIndicator = c(3, 3), loadM = .5,
  tau = c(0, 0, 0,     # intercepts at time 1
          0, .2, 0),   # intercepts at time 2
  Phi = .3
)

Equivalently, comparison and nullEffect can also be provided according to lavaan conventions:

powerLI <- semPower.powerLI(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = c('loadings'), 
  nullEffect = c('loadings', 'intercepts'),
  # define measurement model
  nIndicator = c(3, 3), loadM = .5,
  tau = c(0, 0, 0,     # intercepts at time 1
          0, .2, 0),   # intercepts at time 2
  Phi = .3
)

Detect residual non-invariance

Detecting residual non-invariance again proceeds largely in the same way as described above. The main difference is that the residual variances now need to be provided via the Theta matrix. Theta can be a diagonal matrix only providing the residual variances. When specifying non-zero off-diagonal elements reflecting correlated indicator residuals, this should be done in a way that only the residuals of the same indicator show correlations across measurement occasions (and the autocorrelations are estimated by setting autocorResiduals = TRUE), because otherwise this would incur misfit (in the H1 model).

For instance, the following uses lavaan style restrictions to request the required sample (type = 'a-priori') to detect that a residual invariance model (nullEffect = c('loadings', 'residuals')) differs from a metric invariance model (comparison = c('loadings')) on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Importantly, the residual variances (Theta) differ across measurements: At the first measurement, all residual variances are .75, whereas at the second measurement, the residual variances are .70, .50, and .60.

powerLI <- semPower.powerLI(
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = c('loadings'), 
  nullEffect = c('loadings', 'residuals'),
  # define measurement model
  nIndicator = c(3, 3),
  loadM = c(.5, .5), 
  Phi = .3,
  # diagonal matrix for Theta
  Theta = diag(  
    c(.75, .75, .75,   # time 1
      .70, .50, .60)   # time 2
  )
)

Note that if defining the invariance models using the predefined constants (nullEffect = 'residual'), invariance of intercepts is also assumed, so the proper comparison model would be comparison = 'scalar'.

Detect whether latent means differ across measurements

Detecting that latent factor means differ across time again proceeds largely in the same way as described above for tests of other levels of invariance. However, the model definition must include a statement about indicator intercepts (tau) and latent means (Alpha).

For instance, the following requests the required sample (type = 'a-priori') to detect that the latent means differ across time (nullEffect = c('loadings', 'intercepts', 'means')) in comparison to a scalar invariance model (comparison = c('loadings', 'intercepts')) on alpha = .05 (alpha = .05) with a power of 80% (power = .80). All indicator intercepts (tau) equal zero at both measurements. Importantly, the latent mean (Alpha) is 0 at the first measurement occasion, but .25 at the second measurement occasion (thus corresponding to a standardized mean difference of .25, because the factor variances are fixed to 1):

powerLI <- semPower.powerLI(
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = c('loadings', 'intercepts'), 
  nullEffect = c('loadings', 'intercepts', 'means'),
  # define measurement model
  nIndicator = c(3, 3),
  loadM = c(.5, .5),
  Phi = .3,
  # invariant indicator intercepts
  tau = c(0, 0, 0, 0, 0, 0),
  # non-invariant latent means
  Alpha = c(0, .25)  
)

Note that the latent means are identified by setting the factor mean at the first measurement occasion to zero, which in the example above matches the value provided for Alpha. However, the first mean in Alpha may also take a value different from zero without affecting power, because the remaining means are then simply rescaled.

The particular comparison performed above cannot be defined using the predefined constants (e.g., means vs. scalar), because using nullEffect = 'means' instead includes all invariance constraints of the previous levels (i.e., loadings, intercepts, residuals, and factor covariances), so the proper comparison model would then be comparison = 'covariances'.
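
As a sketch, the constant-based alternative just described (which additionally constrains residuals and factor covariances) could look as follows:

powerLI <- semPower.powerLI(
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis via the predefined constants
  comparison = 'covariances', 
  nullEffect = 'means',
  # define measurement model
  nIndicator = c(3, 3),
  loadM = c(.5, .5),
  Phi = .3,
  # invariant indicator intercepts
  tau = c(0, 0, 0, 0, 0, 0),
  # non-invariant latent means
  Alpha = c(0, .25)
)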

Detect whether factor covariances differ across measurements

Detecting that the factor covariances differ across time again proceeds largely identically to the procedure described above for tests of other levels of invariance. Note that semPower.powerLI uses variance scaling of the factors, so that the factor variances are always invariant across time.

For instance, the following requests the required sample (type = 'a-priori') to detect that the factor covariances differ across time (nullEffect = c('loadings', 'lv.covariances')) in comparison to a metric invariance model (comparison = c('loadings')) on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The correlation (because the variances are 1) between the first and second measurement is .3, the correlation between the first and third measurement is .2, and the correlation between the second and third measurement is .4. The null hypothesis states that all these correlations are equal.

Phi <- matrix(c(
  c(1, .3, .2),
  c(.3, 1, .4),
  c(.2, .4, 1)
), ncol = 3, byrow = TRUE)
powerLI <- semPower.powerLI(
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis
  comparison = c('loadings'), 
  nullEffect = c('loadings', 'lv.covariances'),
  # define measurement model
  nIndicator = c(3, 3, 3),
  loadM = c(.5, .5, .5), 
  Phi = Phi
)

Note that this particular comparison cannot be defined using the predefined constants. If using nullEffect = 'covariances' instead, this would include all invariance constraints of the previous levels (i.e., loadings, intercepts, and residuals), so the proper comparison model would be comparison = 'residuals'.
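
As a sketch, the constant-based alternative just described could look as follows; invariant intercepts (tau) are added purely for illustration, because these invariance levels also involve the intercepts:

powerLI <- semPower.powerLI(
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis via the predefined constants
  comparison = 'residuals', 
  nullEffect = 'covariances',
  # define measurement model
  nIndicator = c(3, 3, 3),
  loadM = c(.5, .5, .5),
  Phi = Phi,
  # illustrative invariant intercepts (added for this sketch)
  tau = rep(0, 9)
)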

4.10 Autoregressive models

semPower.powerAutoreg is used to perform power analyses for hypotheses arising in an autoregressive model of the form \(X_{t} = \beta_{t,(t-1)} \cdot X_{(t-1)}\) (e.g., X1 -> X2 -> X3 -> X4), which may also include lag-2 (e.g., X1 -> X3, X2 -> X4) and lag-3 effects (e.g., X1 -> X4). semPower.powerAutoreg provides interfaces to perform power analyses concerning the following hypotheses:

  • whether the autoregressive lag-1 (nullEffect = 'lag1'), lag-2 ('lag2'), or lag-3 ('lag3') effects are equal across waves (stationarity of autoregressive parameters).
  • whether the residual-variances of \(X\) are equal across waves (stationarity of variance; 'var').
  • whether the conditional means of \(X\) are equal across waves (stationarity of means; 'mean').
  • whether the autoregressive lag-1 ('lag1 = 0'), lag-2 ('lag2 = 0'), or lag-3 ('lag3 = 0') effect is zero.
  • whether the autoregressive lag-1 effect of \(X\) is equal across groups ('lag1A = lag1B').

semPower.powerAutoreg only addresses hypotheses in autoregressive models. For other hypotheses concerning longitudinal measurement invariance, see semPower.powerLI. For ARMA models, see semPower.powerARMA. For CLPM models, see semPower.powerCLPM.

semPower.powerAutoreg expects the following arguments:

  • nWaves: The number of waves, must be \(\geq\) 2.
  • lag1 or equivalently autoregEffects: Vector of the autoregressive effects, e.g., c(.7, .6, .5) for autoregressive effects of .7 for X1 -> X2, .6 for X2 -> X3, and .5 for X3 -> X4.
  • lag2: Vector of lag-2 effects or NULL for no lag-2 effects, e.g., c(.2, .1) for lag-2 effects of .2 for X1 -> X3 and .1 for X2 -> X4.
  • lag3: Vector of lag-3 effects or NULL for no lag-3 effects, e.g., c(.05) for a lag-3 effect of .05 for X1 -> X4.
  • means: Vector of means for \(X\). Can be NULL for no meanstructure.
  • variances: Vector of (residual-)variances for \(X\). When provided, standardized must be FALSE. When omitted and standardized = FALSE, all (residual-)variances are equal to 1. When omitted and standardized = TRUE, the (residual-)variances are determined so that all variances are 1, and will thus typically differ from each other.
  • waveEqual: Parameters that are assumed to be equal across waves in both the H0 and the H1 model. Valid are 'lag1' (or equivalently 'autoreg'), 'lag2', and 'lag3' to constrain the autoregressive effects of the specified lag, or NULL for none (so that all parameters are freely estimated, subject to the constraints defined in nullEffect).
  • nullEffect: Defines the hypothesis of interest. Valid are the same arguments as in waveEqual and additionally 'lag1 = 0' (or equivalently 'autoreg = 0'), 'lag2 = 0', and 'lag3 = 0' to constrain the lag-1, lag-2, or lag-3 effects to zero, and 'lag1A = lag1B' (or equivalently 'autoregA = autoregB') to constrain the lag-1 effects to be equal across groups.
  • nullWhich: Defines which parameter(s) is targeted by the hypothesis defined in nullEffect when there are > 2 waves and the targeted parameter is not part of waveEqual.
  • nullWhichGroups: For hypotheses involving cross-groups comparisons, vector indicating the groups for which equality constraints should be applied. By default, the restrictions apply to all groups.
  • standardized: Whether all parameters are standardized (TRUE, the default) or unstandardized (FALSE).
  • invariance: Whether metric invariance over waves (and scalar invariance when means are part of the model) is assumed (TRUE, the default) or not (FALSE).
  • autocorResiduals: Whether the residuals of the indicators of latent variables are autocorrelated over waves (TRUE, the default) or not (FALSE).
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, where the order of factors is (\(X_1\), \(X_2\), …, \(X_{nWaves}\)).

semPower.powerAutoreg provides a list as result. Use the summary method to obtain formatted results; a short sketch on accessing individual components follows the list below. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
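
As a brief sketch of how the components listed above can be accessed (assuming powerAutoreg holds the result of one of the calls shown below):

# formatted results
summary(powerAutoreg)
# population covariance matrix
powerAutoreg$Sigma
# lavaan model string of the H0 model (available if comparison = 'restricted')
cat(powerAutoreg$modelH0)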
Detect whether an autoregressive effect differs from zero

To perform a power analysis to detect whether an autoregressive effect differs from zero, use nullEffect = 'autoreg = 0' (or equivalently nullEffect = 'lag1 = 0') for lag-1 effects, nullEffect = 'lag2 = 0' for lag-2 effects, and nullEffect = 'lag3 = 0' for lag-3 effects.

For instance, the following requests the required sample size (type = 'a-priori') to detect that the autoregressive lag-1 effect (nullEffect = 'autoreg = 0') differs from zero with a power of 80% (power = .80) on alpha = .05 (alpha = .05). The model comprises an attribute measured at three occasions (nWaves = 3) by 3 indicators each (nIndicator = c(3, 3, 3)), where all loadings are equal to .50 (loadM = .5). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. The autoregressive lag-1 effects are .5 for X1 -> X2 and .7 for X2 -> X3 (autoregEffects = c(.5, .7)). Given that there are two autoregressive effects, the nullWhich argument is used to define that the first autoregressive effect is targeted by the null hypothesis (nullWhich = 1). There are no lag-2 or lag-3 effects (because the corresponding arguments are omitted). In addition, the model comprises autocorrelated indicator residuals across time, because the autocorResiduals argument (which defaults to TRUE) is omitted. It further assumes metric invariance across measurement occasions, because the invariance argument (which defaults to TRUE) is also omitted, and treats all parameters as standardized, because the standardized argument (which defaults to TRUE) is omitted as well.

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .7),  # x1->x2, x2->x3
  nullEffect = 'autoreg = 0',
  nullWhich = 1,
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)
summary(powerAutoreg)

The results of the power analysis are printed by calling the summary method on powerAutoreg, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'post-hoc', alpha = .05, N = 300,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .7),  # x1->x2, x2->x3
  nullEffect = 'autoreg = 0',
  nullWhich = 1,
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)

Now, summary(powerAutoreg) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.
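
A compromise analysis could, as a rough sketch, look as follows; this assumes that, as in the model-free compromise analyses, the sample size (N) and the desired alpha/beta error ratio (abratio) are supplied:

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis (abratio is assumed here; see the model-free compromise analysis)
  type = 'compromise', abratio = 1, N = 300,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .7),  # x1->x2, x2->x3
  nullEffect = 'autoreg = 0',
  nullWhich = 1,
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)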

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.
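
For instance, a sketch of the post hoc analysis from above, now comparing the H0 model against the saturated model:

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'post-hoc', alpha = .05, N = 300,
  # compare the H0 model against the saturated model
  comparison = 'saturated',
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .7),  # x1->x2, x2->x3
  nullEffect = 'autoreg = 0',
  nullWhich = 1,
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)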

If the \(X\) variables are observed variables rather than latent factors, the only change refers to the definition of the measurement model. Below, Lambda = diag(3) defines three dummy factors, each measured by a single indicator with a loading of 1, so these become observed variables:

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'post-hoc', alpha = .05, N = 300,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .7),  # x1->x2, x2->x3
  nullEffect = 'autoreg = 0',
  nullWhich = 1,
  # define measurement model
  Lambda = diag(3)
)

If the autoregressive effects are considered stable across measurement occasions, the waveEqual argument can be used to implement equality restrictions on these parameters in both the H0 and the H1 model. For instance, the following defines both the autoregressive effect of X1 -> X2 and that of X2 -> X3 to be .50 (autoregEffects = c(.5, .5)), and restricts these to equality in both the H0 and the H1 model using waveEqual = c('autoreg') (or equivalently waveEqual = c('lag1')). Given that there is now a single autoregressive parameter, the nullWhich argument can be omitted.

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .5),  # x1->x2, x2->x3
  waveEqual = c('autoreg'),
  nullEffect = 'autoreg = 0',
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)

The models may also include lag-2 or lag-3 effects. For instance, the following defines 4 measurement occasions (nWaves = 4) and all lag-1 effects to be .5 (autoregEffects = c(.5, .5, .5)), all lag-2 effects to be .2 (lag2Effects = c(.2, .2)), and the lag-3 effect to be .1 (lag3Effects = .1). Further, both the lag-1 and the lag-2 effects are constrained to be equal across waves in both the H0 and the H1 model (waveEqual = c('lag1', 'lag2')). Then, the required sample size is requested to detect that the lag-3 effect differs from zero (nullEffect = 'lag3 = 0').

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 4,
  lag1Effects = c(.5, .5, .5),     # x1->x2, x2->x3, x3->x4
  lag2Effects = c(.2, .2),         # x1->x3, x2->x4
  lag3Effects = .1,                # x1->x4
  waveEqual = c('lag1', 'lag2'),
  nullEffect = 'lag3 = 0',
  # define measurement model
  nIndicator = c(3, 3, 3, 3), loadM = .5
)
Detect non-stationarity of autoregressive parameters

To perform a power analysis to detect whether the autoregressive parameters differ across measurement occasions (non-stationarity of the autoregressive parameters), use nullEffect = 'autoreg' (or equivalently nullEffect = 'lag1') for lag-1 parameters, nullEffect = 'lag2' for lag-2 parameters, and nullEffect = 'lag3' for lag-3 parameters.

For instance, the following defines an autoregressive model with 3 measurement occasions (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the autoregressive effects X1 -> X2 (of .50) and X2 -> X3 (of .60) differ (nullEffect = 'autoreg') with a power of 80% (power = .80) on alpha = .05 (alpha = .05). Note that standardized = FALSE, so the autoregressive effects are treated as unstandardized parameters.

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .6),  # x1->x2, x2->x3
  nullEffect = 'autoreg',
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5,
  standardized = FALSE
)

The procedure is equivalent when investigating stationarity of the autoregressive lag-2 or lag-3 parameters. For instance, the following defines an autoregressive model with 4 measurement occasions (nWaves = 4), defines wave-constant autoregressive lag-1 effects, and requests the required sample size (type = 'a-priori') to detect that the autoregressive lag-2 effects X1 -> X3 (of .20) and X2 -> X4 (of .10) differ (nullEffect = 'lag2').

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 4,
  lag1Effects = c(.5, .5, .5),  # x1->x2, x2->x3, x3->x4
  lag2Effects = c(.2, .1),      # x1->x3, x2->x4
  waveEqual = 'lag1',
  nullEffect = 'lag2',
  # define measurement model
  nIndicator = c(3, 3, 3, 3), loadM = .5
)
Detect non-stationarity of variance

To perform a power analysis to detect whether the residual variances of \(X\) differ across measurement occasions (non-stationarity of variance), use nullEffect = 'var'. Note that the hypothesis of stationarity of variance does not include the first measurement of \(X\), because this variance differs in meaning from the variance associated with the remaining measurements.

For instance, the following defines an autoregressive model with 3 measurement occasions (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the residual variances of \(X_2\) and \(X_3\) differ (nullEffect = 'var'). Both autoregressive effects are .50 and are restricted to equality in both the H0 and the H1 model (waveEqual = c('autoreg')). The variance of \(X\) at the first measurement occasion is 1, the residual variance of \(X\) at the second measurement occasion is .75, and the residual variance of \(X\) at the third measurement occasion is .50 (variances = c(1, .75, .50)). Because variances are now subject to the hypothesis and thus need to be provided, standardized must be set to FALSE, so that all parameters are treated as unstandardized. However, in the present example, the only parameters that change by standardization are the autoregressive effect of \(X_2\) on \(X_3\) and the residual variance of \(X_3\).

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .5),  # x1->x2, x2->x3
  variances = c(1, .75, .50),  # x1, x2, x3
  waveEqual = c('autoreg'),
  nullEffect = 'var',
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5,
  standardized = FALSE
)
Detect non-stationarity of means

To perform a power analysis to detect whether the conditional means of the \(X\) differ across measurement occasions (non-stationarity of means), use nullEffect = 'mean'. Note that the hypothesis of stationarity of means does not include the first measurement of \(X\), because its mean differs in meaning from those of the remaining measurements.

For instance, the following defines an autoregressive model with 3 measurement occasions (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the conditional means of \(X_2\) and \(X_3\) differ (nullEffect = 'mean'). Both autoregressive effects are .50. The variance of \(X_1\) is 1 and the residual variances of \(X_2\) and \(X_3\) are .75. Both the autoregressive effects and the residual variances are restricted to equality in both the H0 and the H1 model (waveEqual = c('autoreg', 'var')). The mean of \(X\) at the first measurement occasion is 0, the conditional mean of \(X\) at the second measurement occasion is .25, and the conditional mean of \(X\) at the third measurement occasion is .50 (means = c(0, .25, .50)). Invariance constraints on loadings and indicator intercepts are employed across waves (by omitting the invariance argument, which defaults to TRUE). Because the variances are provided, standardized must be set to FALSE. However, in the present example, all standardized parameters equal the unstandardized parameters, so the change in the means can be interpreted in terms of standardized differences.

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  autoregEffects = c(.5, .5),  # x1->x2, x2->x3
  variances = c(1, .75, .75),  # x1, x2, x3
  means = c(0, .25, .50),
  waveEqual = c('autoreg', 'var'),
  nullEffect = 'mean',
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5,
  standardized = FALSE
)

Note that the latent means are identified resorting to single occasion identification (i.e., by setting the factor mean at the first measurement occasion to zero), which matches the first value provided for means in the example above. However, the first mean may also take a value different from zero without affecting power, because the remaining means are then simply rescaled.

Detect whether the lag-1 autoregressive effects differ across groups

To perform a power analysis to detect whether the lag-1 autoregressive effects differ across groups, use nullEffect = 'autoregA = autoregB'.

For instance, the following defines an autoregressive model with 3 measurement occasions (nWaves = 3), where \(X\) is measured by three indicators at each wave, with all loadings equal to .5. The measurement model is identical in both groups. However, different autoregressive effects are defined for each group by using a list structure for the autoregEffects argument: In the first group, both autoregressive effects are .5, whereas in the second group, both autoregressive effects are .4. In both groups, the autoregressive effects are constant across waves (waveEqual = c('autoreg')). Then, the required sample (type = 'a-priori') is requested to detect that the autoregressive effect differs across groups (nullEffect = 'autoregA = autoregB'). Note that metric invariance constraints are applied across both waves and groups (by omitting the invariance argument, which defaults to TRUE). Furthermore, in multiple group models the N argument also needs to be provided as a list, which, in case of an a priori power analysis, gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 3,
  autoregEffects = list(
    c(.5, .5),  # group 1: x1->x2, x2->x3
    c(.4, .4)   # group 2: x1->x2, x2->x3    
  ),
  waveEqual = c('autoreg'),
  nullEffect = 'autoregA = autoregB',
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)

If there are more than two groups, the autoregressive effects are held equal across all groups by default. If the constraints should only be placed in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, nullWhichGroups = c(1, 3) defines that the autoregressive effects should only be restricted to equality across the first and the third group.
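
As a sketch (with illustrative values for the third group), a three-group version restricting the autoregressive effects to equality only in the first and third group could look as follows:

powerAutoreg <- semPower.powerAutoreg(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
  # define hypothesis 
  nWaves = 3,
  autoregEffects = list(
    c(.5, .5),  # group 1: x1->x2, x2->x3
    c(.4, .4),  # group 2: x1->x2, x2->x3
    c(.3, .3)   # group 3: x1->x2, x2->x3 (illustrative values)
  ),
  waveEqual = c('autoreg'),
  nullEffect = 'autoregA = autoregB',
  nullWhichGroups = c(1, 3),
  # define measurement model
  nIndicator = c(3, 3, 3), loadM = .5
)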

4.11 ARMA models

semPower.powerARMA is used to perform power analyses for hypotheses arising in models with autoregressive and moving average parameters (ARMA models) of the form \(X_{t} = \beta_{t,(t-1)} \cdot X_{(t-1)} + \gamma_{t,t} \cdot N_{t} + \gamma_{t,(t-1)} \cdot N_{(t-1)}\), where one variable \(X\) is repeatedly assessed at different time points (waves), and both autoregressive (lag-1, lag-2, or lag-3) effects (e.g., X1 -> X2 -> X3) and lag-1, lag-2, or lag-3 moving average parameters (e.g., N1 -> X2) are assumed. semPower.powerARMA provides interfaces to perform power analyses concerning the following hypotheses:

  • whether the autoregressive lag-1 (nullEffect = 'autoreg'), lag-2 ('autoregLag2'), or lag-3 ('autoregLag3') effects are equal across waves (stationarity of autoregressive effects).
  • whether the moving average lag-1 ('mvAvg'), lag-2 ('mvAvgLag2'), or lag-3 ('mvAvgLag3') parameters are equal across waves (stationarity of moving average effects).
  • whether the variances of the noise factors \(N\) (= the residual variances of \(X\)) are equal across waves (stationarity of variance; 'var').
  • whether the conditional means of \(X\) are equal across waves (stationarity of means; 'mean').
  • whether the autoregressive effect of a certain lag is zero ('autoreg = 0', 'autoregLag2 = 0', 'autoregLag3 = 0').
  • whether the moving average parameter of a certain lag is zero ('mvAvg = 0', 'mvAvgLag2 = 0', 'mvAvgLag3 = 0').
  • whether the autoregressive lag-1 effect is equal across groups ('autoregA = autoregB').
  • whether the moving average lag-1 parameter is equal across groups ('mvAvgA = mvAvgB').
  • whether the variances of the noise factors are equal across groups ('varA = varB').
  • whether the latent means are equal across groups ('meanA = meanB').

semPower.powerARMA only addresses hypotheses arising in ARMA models. For simple autoregressive models, see semPower.powerAutoreg. For hypotheses concerning longitudinal measurement invariance, see semPower.powerLI.

semPower.powerARMA expects the following arguments:

  • nWaves: The number of waves (measurement occasions), must be \(\geq\) 2.
  • autoregLag1 (or equivalently, autoregEffects): Vector of autoregressive lag-1 effects, e.g., c(.7, .6, .5) for autoregressive effects of .7 for X1 -> X2, .6 for X2 -> X3, and .5 for X3 -> X4.
  • autoregLag2: Vector of lag-2 effects or NULL for no lag-2 effects, e.g., c(.2, .1) for lag-2 effects of .2 for X1 -> X3 and .1 for X2 -> X4.
  • autoregLag3: Vector of lag-3 effects or NULL for no lag-3 effects, e.g., c(.05) for a lag-3 effect of .05 for X1 -> X4.
  • mvAvgLag1: Vector of the lag-1 moving average parameters, e.g., c(.7, .6, .5) for moving average parameters of .7 for N1 -> X2, .6 for N2 -> X3, and .5 for N3 -> X4.
  • mvAvgLag2: Vector of the lag-2 moving average parameters or NULL for no lag-2 effects, e.g., c(.2, .1) for lag-2 effects of .2 for N1 -> X3 and .1 for N2 -> X4.
  • mvAvgLag3: Vector of the lag-3 moving average parameters or NULL for no lag-3 effects, e.g., c(.05) for a lag-3 effect of .05 for N1 -> X4.
  • means: Vector of (conditional) means for \(X\). Can be omitted for no meanstructure.
  • variances: Vector of the variances of the noise factors \(N\) (= the residual variances of \(X\)).
  • waveEqual: Parameters that are assumed to be equal across waves in both the H0 and the H1 model. Because ARMA models are likely not identified when no such constraints are imposed, waveEqual may not be empty. Valid are 'autoreg', 'autoregLag2', and 'autoregLag3' for autoregressive effects, 'mvAvg', 'mvAvgLag2', and 'mvAvgLag3' for moving average effects, 'var' for the variances of the noise factors \(N_2, \ldots, N_{nWaves}\), and 'mean' for the conditional means of \(X_2, \ldots, X_{nWaves}\) (starting at the second measurement).
  • groupEqual: Parameters that are restricted across groups in both the H0 and the H1 model, when nullEffect implies a multiple group model. Valid are 'autoreg' for autoregressive effects, 'mvAvg' for moving-average parameters, 'var' for the variances of the noise factors \(N\), and 'mean' for the means of \(X\).
  • nullEffect: Defines the hypothesis of interest. Valid are the same arguments as in waveEqual and additionally 'autoreg = 0', 'autoregLag2 = 0', 'autoregLag3 = 0', 'mvAvg = 0', 'mvAvgLag2 = 0', 'mvAvgLag3 = 0' to constrain the autoregressive or moving average effects to zero, and 'autoregA = autoregB', 'mvAvgA = mvAvgB', 'varA = varB', 'meanA = meanB' to constrain the autoregressive (lag-1) effects, moving average (lag-1) parameters, variances of the noise factors, or means of \(X\) to be equal across groups.
  • nullWhich: Defines which parameter(s) is targeted by the hypothesis defined in nullEffect when there are > 2 waves and the parameter is not part of waveEqual.
  • nullWhichGroups: For hypotheses involving cross-groups comparisons, a vector indicating the groups for which equality constraints should be applied. By default, the relevant parameter(s) is restricted across all groups.
  • invariance: Whether metric invariance (and scalar invariance if means are part of the model) over waves is assumed (TRUE, the default) or not (FALSE).
  • autocorResiduals: Whether the residuals of the indicators of the \(X\) are autocorrelated over waves (TRUE, the default) or not (FALSE).
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, where the order of factors is (\(X_1\), \(X_2\), …, \(X_{nWaves}\)).

semPower.powerARMA provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Detect non-stationarity of autoregressive effects

To perform a power analysis to detect whether the autoregressive effects differ across measurements (non-stationarity of autoregressive effects), use nullEffect = 'autoreg' for lag-1 effects, nullEffect = 'autoregLag2' for lag-2 effects, and nullEffect = 'autoregLag3' for lag-3 effects.

For instance, the following requests the required sample size (type = 'a-priori') to detect that the autoregressive lag-1 effect (nullEffect = 'autoreg') differs across measurements with a power of 80% (power = .80) on alpha = .05 (alpha = .05). The model comprises an attribute measured at five occasions (nWaves = 5) by 3 indicators each (nIndicator = rep(3, 5)), where all loadings are equal to .50 (loadM = .5). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. The autoregressive (lag-1) effects are .5 for X1 -> X2, .4 for X2 -> X3, .3 for X3 -> X4, and .2 for X4 -> X5 (autoregLag1 = c(.5, .4, .3, .2)). The moving average (lag-1) parameters are .3 for N1 -> X2, .4 for N2 -> X3, .5 for N3 -> X4, and .4 for N4 -> X5 (mvAvgLag1 = c(.3, .4, .5, .4)). The variances of the noise variables (= the residual variances of \(X\)) are 1 at each measurement occasion (variances = c(1, 1, 1, 1, 1)). To identify the model, the waveEqual argument is set so that the variances are restricted to be equal across measurements (excluding the first wave) in both the H0 and the H1 model (waveEqual = c('var')). In addition, the model comprises autocorrelated indicator residuals across time, because the autocorResiduals argument (which defaults to TRUE) is omitted, and assumes metric invariance across measurement occasions, because the invariance argument (which defaults to TRUE) is also omitted.

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .4, .3, .2),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .4, .5, .4),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var'),
  nullEffect = 'autoreg',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)
summary(powerARMA)

The results of the power analysis are printed by calling the summary method on powerARMA, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'post-hoc', alpha = .05, N = 300,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .4, .3, .2),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .4, .5, .4),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var'),
  nullEffect = 'autoreg',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

Now, summary(powerARMA) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.

If the \(X\) variables are observed variables rather than latent factors, the only change refers to the definition of the measurement model. Below, Lambda = diag(5) defines five dummy factors, each measured by a single indicator with a loading of 1, so these become observed variables:

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .4, .3, .2),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .4, .5, .4),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var'),
  nullEffect = 'autoreg',
  # define measurement model
  Lambda = diag(5)
)

The models may also include lag-2 or lag-3 effects. For instance, the following defines 5 measurements (nWaves = 5) and all lag-1 autoregressive effects to be .4 (autoregLag1 = c(.4, .4, .4, .4)), the lag-2 autoregressive effects to be .15, .1, and .05 (autoregLag2 = c(.15, .1, .05)), and the lag-3 autoregressive effects to be .05 and .10 (autoregLag3 = c(.05, .10)). The lag-1 moving average parameters are all equal to .30 (mvAvgLag1 = c(.3, .3, .3, .3)) and there are no lag-2 or lag-3 moving average parameters. Furthermore, the variances, the lag-1 autoregressive effects, and the lag-1 moving average parameters are constrained to be equal across waves in both the H0 and the H1 model (waveEqual = c('var', 'autoreg', 'mvAvg')). Then, the required sample size is requested to detect that the lag-2 autoregressive effects differ across waves (nullEffect = 'autoregLag2').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.4, .4, .4, .4),  # x1->x2, x2->x3, x3->x4, x4->x5 
  autoregLag2 = c(.15, .1, .05),    # x1->x3, x2->x4, x3->x5 
  autoregLag3 = c(.05, .10),        # x1->x4, x2->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var', 'autoreg', 'mvAvg'),
  nullEffect = 'autoregLag2',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)
Detect non-stationarity of moving average parameters

The procedure to perform a power analysis to detect whether the moving average parameters differ across measurements (non-stationarity of the moving average parameters) is largely equivalent to the case described in the previous section, except that nullEffect refers to the moving average parameters: nullEffect = 'mvAvg' for lag-1 effects, nullEffect = 'mvAvgLag2' for lag-2 effects, and nullEffect = 'mvAvgLag3' for lag-3 effects.

For instance, the following sets up a population model largely identical to the example described above, but this time requests the required sample size to detect that the moving average parameters differ across measurements (nullEffect = 'mvAvg').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .4, .3, .2),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .4, .5, .4),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var'),
  nullEffect = 'mvAvg',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

Power analyses to detect non-stationarity of the lag-2 or lag-3 moving average parameters are performed similarly. For instance, the following defines 5 measurements (nWaves = 5) and all lag-1 autoregressive effects to be .4 (autoregLag1 = c(.4, .4, .4, .4)), all lag-2 autoregressive effects to be .10 (autoregLag2 = c(.1, .1, .1)), and all lag-3 autoregressive effects to be .05 (autoregLag3 = c(.05, .05)). All lag-1 moving average parameters are .30 (mvAvgLag1 = c(.3, .3, .3, .3)), the lag-2 moving average parameters are .20, .10, and .05 (mvAvgLag2 = c(.20, .10, .05)), and the lag-3 moving average parameters are .05 and .10 (mvAvgLag3 = c(.05, .10)). The waveEqual argument specifies stability of the variances, of the lag-1, lag-2, and lag-3 autoregressive effects, and of the lag-1 moving average parameters (waveEqual = c('var', 'autoreg', 'autoregLag2', 'autoregLag3', 'mvAvg')). Then, the required sample size is requested to detect that the lag-2 moving average parameters differ across waves (nullEffect = 'mvAvgLag2').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.4, .4, .4, .4),  # x1->x2, x2->x3, x3->x4, x4->x5 
  autoregLag2 = c(.1, .1, .1),   # x1->x3, x2->x4, x3->x5 
  autoregLag3 = c(.05, .05),        # x1->x4, x2->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  mvAvgLag2 = c(.20, .10, .05),     # n1->x3, n2->x4, n3->x5 
  mvAvgLag3 = c(.05, .10),          # n1->x4, n2->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var', 'autoreg', 'autoregLag2', 'autoregLag3', 'mvAvg'),
  nullEffect = 'mvAvgLag2',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)
Detect non-stationarity of variances

To perform a power analysis to detect whether the variances of the noise factors (= the residual variances of \(X\)) differ across measurements (non-stationarity of variances), use nullEffect = 'var'. Note that the hypothesis of stationarity of variance excludes the first noise factor (i.e., the residual variance of the first measurement of \(X\)), because its variance differs in meaning from those of the remaining measurements.

For instance, the following defines 5 measurements (nWaves = 5), all lag-1 autoregressive effects to be .5 (autoregLag1 = c(.5, .5, .5, .5)), and all lag-1 moving average parameters to be .30 (mvAvgLag1 = c(.3, .3, .3, .3)). The waveEqual argument specifies stability of the autoregressive and the moving average effects in both the H0 and the H1 model (waveEqual = c('autoreg', 'mvAvg')). The variances of the noise factors are 1, 1, .75, .5, and .5 (variances = c(1, 1, .75, .5, .5)). Then, the required sample size is requested to detect that the variances of the noise factors differ across waves 2 - 5 (nullEffect = 'var').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .5, .5, .5),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, .75, .5, .5), # n1, n2, n3, n4, n5
  waveEqual = c('autoreg', 'mvAvg'),
  nullEffect = 'var',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)
Detect non-stationarity of means

To perform a power analysis to detect whether the conditional means of the \(X\) differ across measurements (non-stationarity of means), use nullEffect = 'mean'. As is the case for variances, the hypothesis of stationarity of means excludes the first measurement of \(X\), because its mean differs in meaning from those of the remaining measurements.

For instance, the following defines 5 measurements (nWaves = 5), all lag-1 autoregressive effects to be .5 (autoregLag1 = c(.5, .5, .5, .5)), all lag-1 moving average parameters to be .30 (mvAvgLag1 = c(.3, .3, .3, .3)), and all variances of the noise factors to be equal to 1 (variances = c(1, 1, 1, 1, 1)). The waveEqual argument specifies stability of the variances, the autoregressive, and the moving average effects in both the H0 and the H1 model (waveEqual = c('var', 'autoreg', 'mvAvg')). The means of the \(X\) are 0, .3, .2, .5, and .4 (means = c(0, .3, .2, .5, .4)). In addition, equal loadings and equal intercepts are assumed across measurements, as the invariance argument (which defaults to TRUE) is omitted. Then, the required sample size is requested to detect that the means differ across waves 2 - 5 (nullEffect = 'mean').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .5, .5, .5),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  means = c(0, .3, .2, .5, .4),     # x1, x2, x3, x4, x5
  waveEqual = c('autoreg', 'mvAvg', 'var'),
  nullEffect = 'mean',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

Note that the latent means of the \(X\) are identified resorting to single occasion identification (i.e., by setting the factor mean at the first measurement occasion to zero), which matches the first value provided for means in the example above. However, the first mean may also take a value different from zero without affecting power, because the remaining means are then simply rescaled.

Detect whether autoregressive or moving average parameters differ from zero

To perform a power analysis to detect whether the autoregressive effects differ from zero, use nullEffect = 'autoreg = 0' for lag-1 effects, nullEffect = 'autoregLag2 = 0' for lag-2 effects, and nullEffect = 'autoregLag3 = 0' for lag-3 effects. To detect whether the moving average parameters differ from zero, use nullEffect = 'mvAvg = 0' for lag-1 effects, nullEffect = 'mvAvgLag2 = 0' for lag-2 effects, and nullEffect = 'mvAvgLag3 = 0' for lag-3 parameters.

For instance, the following requests the required sample size (type = 'a-priori') to detect that the first autoregressive lag-1 effect (nullEffect = 'autoreg = 0') differs from zero with a power of 80% (power = .80) on alpha = .05 (alpha = .05). The model comprises five measurement occasions (nWaves = 5), so that there are four autoregressive lag-1 effects, namely .5 for X1 -> X2, .4 for X2 -> X3, .3 for X3 -> X4, and .2 for X4 -> X5 (autoregLag1 = c(.5, .4, .3, .2)). Given that there are several autoregressive effects, the nullWhich argument is used to define that the first autoregressive effect is targeted by the null hypothesis (nullWhich = 1). If the autoregressive effects are considered stable across measurement occasions (e.g., waveEqual = c('autoreg')), there is only a single autoregressive parameter and the nullWhich argument can be omitted.

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .4, .3, .2),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var', 'mvAvg'),
  nullEffect = 'autoreg = 0',
  nullWhich = 1,
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

Note that the first autoregressive effect (X1 -> X2) is only identified when both variances and moving average parameters are constrained to equality across measurement occasions.

Power analyses to detect that moving average parameters differ from zero are performed analogously. For instance, the following determines the required sample size to detect that the lag-2 moving average parameters (nullEffect = 'mvAvgLag2 = 0'), which are stable across measurements (waveEqual includes 'mvAvgLag2'), differ from zero:

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.4, .4, .4, .4),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  mvAvgLag2 = c(.2, .2, .2),        # n1->x3, n2->x4, n3->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var', 'autoreg', 'mvAvg', 'mvAvgLag2'),
  nullEffect = 'mvAvgLag2 = 0',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)
Detect whether the lag-1 autoregressive or moving average parameters differ across groups

To perform a power analysis to detect whether the lag-1 autoregressive or lag-1 moving average parameters differ across groups, use nullEffect = 'autoregA = autoregB' and nullEffect = 'mvAvgA = mvAvgB', respectively.

The general syntax is similar to that in the previous examples, the only difference being that the parameters targeted by the null hypothesis need to be provided in a list structure giving the relevant parameters separately for each group. If no list is provided for a particular parameter, these take identical values in all groups (but are freely estimated in each group by default).

For instance, the following defines a two-group model involving 5 measurement occasions (nWaves = 5), where \(X\) is measured by three indicators at each measurement, with all loadings equal to .5. The measurement model is identical for both groups. Also identical across groups are the noise variances of 1 at each measurement, and equal moving average effects of .3 (mvAvgLag1 = c(.3, .3, .3, .3)). Whereas the moving average parameters are freely estimated in each group, the variances are restricted to equality across groups in both the H0 and the H1 model by using groupEqual = c('var'). However, different lag-1 autoregressive effects are defined for each group by using a list structure for the autoregLag1 argument: In the first group, all autoregressive effects are .5, whereas in the second group, all autoregressive effects are .3. In both groups, the variance, autoregressive, and moving average parameters are constant across waves (waveEqual = c('var', 'autoreg', 'mvAvg')). Metric invariance constraints are applied across all waves and groups (by omitting the invariance argument, which defaults to TRUE). Then, the required sample size (type = 'a-priori') is requested to detect that the autoregressive effect differs across groups (nullEffect = 'autoregA = autoregB'). Furthermore, in multiple group models the N argument also needs to be provided as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = list(
    c(.5, .5, .5, .5),  # group 1: x1->x2, x2->x3, x3->x4, x4->x5     
    c(.3, .3, .3, .3)   # group 2: x1->x2, x2->x3, x3->x4, x4->x5     
  ),
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var', 'autoreg', 'mvAvg'),
  groupEqual = c('var'),
  nullEffect = 'autoregA = autoregB',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

If there are more than two groups, the autoregressive effects are held equal across all groups by default. If the constraints should only be placed in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, nullWhichGroups = c(1, 3) defines that the autoregressive effects should only be restricted to equality across the first and the third group.

Performing a power analysis to detect that the moving average parameters differ across groups proceeds analogously. For instance, the following assumes wave-constant (waveEqual = c('var', 'autoreg')) autoregressive effects and variances, both of which take the same values in both groups. The variances, but not the autoregressive effects, are also restricted to be equal across groups. Further, the moving average parameters are defined to differ across both waves and groups. In the first group, the moving average parameters are .5, .5, .4, and .3, whereas in the second group these are .5, .2, .4, and .3. As there are now several moving average parameters, nullWhich = 2 defines that the second parameter (of .5 vs. .2) is targeted by the hypothesis of group equality (nullEffect = 'mvAvgA = mvAvgB').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .5, .5, .5),
  mvAvgLag1 = list(
    c(.5, .5, .4, .3),  # group 1: n1->x2, n2->x3, n3->x4, n4->x5     
    c(.5, .2, .4, .3)   # group 2: n1->x2, n2->x3, n3->x4, n4->x5     
  ),
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  waveEqual = c('var', 'autoreg'),
  groupEqual = c('var'),
  nullEffect = 'mvAvgA = mvAvgB',
  nullWhich = 2,
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)
Detect whether the variances or means differ across groups

To perform a power analysis to detect whether the variances of the noise factors (= the residual variances of \(X\)) differ across groups, use nullEffect = 'varA = varB'. To detect whether the means of \(X\) differ across groups, use nullEffect = 'meanA = meanB'.

The general syntax closely parallels that in the previous examples. For instance, the following defines autoregressive and moving average parameters that are identical across waves and across groups, but defines all variances in the first group to equal 1, whereas all variances in the second group are .6. The variances, autoregressive effects, and moving average parameters are constant across measurements in both groups (waveEqual = c('var', 'mvAvg', 'autoreg')). Then, the required sample size to detect that the variances differ across groups is requested (nullEffect = 'varA = varB').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .5, .5, .5),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = list(
    c(1, 1, 1, 1, 1),         # n1, n2, n3, n4, n5
    c(.6, .6, .6, .6, .6)     # n1, n2, n3, n4, n5
  ),
  waveEqual = c('var', 'mvAvg', 'autoreg'),
  nullEffect = 'varA = varB',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

The test for the inequality of the latent means of \(X\) proceeds analogously. For instance, the following defines variances, autoregressive effects, and moving average parameters that are identical across waves and across groups, but defines all means in the first group to equal 0, whereas the conditional means (i.e., those of \(X_2\) - \(X_5\)) in the second group are .5. Within each group, the conditional means are also constant across waves (waveEqual includes 'mean'). In addition, the loadings and indicator intercepts are equal across both waves and groups, because the invariance argument (which defaults to TRUE) is omitted. Then, the required sample to detect that the means differ across groups is requested (nullEffect = 'meanA = meanB').

powerARMA <- semPower.powerARMA(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 5,
  autoregLag1 = c(.5, .5, .5, .5),  # x1->x2, x2->x3, x3->x4, x4->x5 
  mvAvgLag1 = c(.3, .3, .3, .3),    # n1->x2, n2->x3, n3->x4, n4->x5 
  variances = c(1, 1, 1, 1, 1),     # n1, n2, n3, n4, n5
  means = list(
    c(0, 0, 0, 0, 0),        # x1, x2, x3, x4, x5
    c(0, .5, .5, .5, .5)     # x1, x2, x3, x4, x5
  ),
  waveEqual = c('var', 'mvAvg', 'autoreg', 'mean'),
  nullEffect = 'meanA = meanB',
  # define measurement model
  nIndicator = rep(3, 5), loadM = .5
)

4.12 CLPM models

semPower.powerCLPM is used to perform power analyses for hypotheses arising in cross-lagged panel models (CLPM). In the standard CLPM implemented here, two variables \(X\) and \(Y\) are repeatedly assessed at two (or more) different time points (waves), yielding autoregressive effects (stabilities; X1 -> X2 and Y1 -> Y2), synchronous effects (X1 <-> Y1 and X2 <-> Y2), and cross-lagged effects (X1 -> Y2 and Y1 -> X2). semPower.powerCLPM provides interfaces to perform power analyses concerning the following hypotheses:

  • whether the autoregressive effects of \(X\) (X1 -> X2, nullEffect = 'autoregX = 0') or \(Y\) (Y1 -> Y2, 'autoregY = 0') differ from zero.
  • whether the cross-lagged effect of \(X\) on \(Y\) (X -> Y, crossedX) or the cross-lagged effect of \(Y\) on \(X\) (Y -> X, crossedY) differs from zero.
  • whether the autoregressive effects of \(X\) and \(Y\) are equal (autoregX = autoregY).
  • whether the cross-lagged effect of \(X\) on \(Y\) and the cross-lagged effect of \(Y\) on \(X\) are equal (crossedX = crossedY).
  • whether the autoregressive effect of \(X\) (autoregX) or the autoregressive effect of \(Y\) (autoregY) are equal across waves.
  • whether the cross-lagged effect of \(X\) on \(Y\) (crossedX) or the cross-lagged effect of \(Y\) on \(X\) (crossedY) are equal across waves.
  • whether the (residual-)correlations between \(X\) and \(Y\) are equal across waves (corXY).
  • whether the autoregressive effects of \(X\) (autoregXA = autoregXB) or \(Y\) (autoregYA = autoregYB) differ across groups.
  • whether the cross-lagged effect of \(X\) on \(Y\) (crossedXA = crossedXB) or the cross-lagged effect of \(Y\) on \(X\) (crossedYA = crossedYB) differs across groups.

semPower.powerCLPM only addresses hypotheses arising in a CLPM structure. semPower provides other convenience functions for hypotheses arising in random intercept cross-lagged panel models and in generic path models. For hypotheses regarding global model fit, a model-free power analysis should be performed.

semPower.powerCLPM expects the following arguments:

  • nWaves: The number of waves, must be \(\geq\) 2.
  • autoregEffects: Vector of the autoregressive effects of \(X\) and \(Y\) (constant across waves), or a list of vectors of autoregressive effects for \(X\) and \(Y\) from wave to wave.
  • crossedEffects: Vector of cross-lagged effects of \(X\) on \(Y\) (X -> Y) and of \(Y\) on \(X\) (Y -> X) (both constant across waves), or a list of vectors of cross-lagged effects for each wave.
  • rXY: vector of (residual-)correlations between \(X\) and \(Y\) for each wave or NULL for no (residual-)correlations.
  • waveEqual: Parameters that are assumed to be equal across waves in both the H0 and the H1 model. Valid are 'autoregX' and 'autoregY' for autoregressive effects, 'crossedX' and 'crossedY' for cross-lagged effects, 'corXY' for residual correlations, or NULL for none.
  • nullEffect: Defines the hypothesis of interest. Valid are the same arguments as in waveEqual and additionally 'autoregX = 0', 'autoregY = 0', 'crossedX = 0', 'crossedY = 0' to constrain the \(X\) or \(Y\) autoregressive effects or the crossed effects to zero, 'autoregX = autoregY' and 'crossedX = crossedY' to constrain them to be equal for \(X\) and \(Y\), and 'autoregXA = autoregXB', 'autoregYA = autoregYB', 'crossedXA = crossedXB', 'crossedYA = crossedYB' to constrain them to be equal across groups.
  • nullWhich: Defines which parameter(s) is targeted by the hypothesis defined in nullEffect when there are > 2 waves.
  • nullWhichGroups: For hypotheses involving cross-groups comparisons, vector indicating the groups for which equality constraints should be applied.
  • standardized: Whether all parameters are standardized (TRUE, the default) or unstandardized (FALSE).
  • metricInvariance: Whether metric invariance over waves is assumed (TRUE, the default) or not (FALSE). This generally affects power and may also affect the df, depending on the comparison model.
  • autocorResiduals: Whether the residuals of the indicators of latent variables are autocorrelated over waves (TRUE, the default) or not (FALSE). This generally affects power and may also affect the df, depending on the comparison model.
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, where the order of factors is (\(X_1\), \(Y_1\), \(X_2\), \(Y_2\), …, \(X_{nWaves}\), \(Y_{nWaves}\)).

semPower.powerCLPM provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Defining the CLPM structure

semPower.powerCLPM assumes that two variables \(X\) and \(Y\) are repeatedly assessed (a CLPM involving more than two variables can be more laboriously specified as a generic path model). The structure of a CLPM is defined by specifying nWaves, autoregEffects, crossedEffects, rXY, standardized, and autocorResiduals.

nWaves defines the number of measurements of \(X\) and \(Y\) and must be two or larger. Setting nWaves = 2 implies the following regression equations: \[ \hat{X_2} = \beta_{X_2,X_1} \cdot X_1 + \beta_{X_2, Y_1} \cdot Y_1 \\ \hat{Y_2} = \beta_{Y_2,X_1} \cdot X_1 + \beta_{Y_2, Y_1} \cdot Y_1 \] Here, \(\beta_{X_2,X_1}\) is the autoregressive effect (the stability) of \(X\) (X1 -> X2), \(\beta_{Y_2, Y_1}\) is the autoregressive effect of \(Y\) (Y1 -> Y2), \(\beta_{X_2, Y_1}\) is the cross-lagged effect of \(Y\) on \(X\) (Y1 -> X2), and \(\beta_{Y_2, X_1}\) is the cross-lagged effect of \(X\) on \(Y\) (X1 -> Y2). The correlation between \(X_1\) and \(Y_1\) (X1 <-> Y1) as well as the residual correlation between \(X_2\) and \(Y_2\) (X2 <-> Y2; the synchronous effects) are defined in rXY, which defaults to NULL, implying zero correlations. If setting rXY = c(.3, .5), the correlation between \(X_1\) and \(Y_1\) is .3, and the residual correlation between \(X_2\) and \(Y_2\) is .5.

When nWaves = 3, the system becomes \[ \hat{X_2} = \beta_{X_2,X_1} \cdot X_1 + \beta_{X_2, Y_1} \cdot Y_1 \\ \hat{Y_2} = \beta_{Y_2,X_1} \cdot X_1 + \beta_{Y_2, Y_1} \cdot Y_1 \\ \hat{X_3} = \beta_{X_3,X_2} \cdot X_2 + \beta_{X_3, Y_2} \cdot Y_2 \\ \hat{Y_3} = \beta_{Y_3,X_2} \cdot X_2 + \beta_{Y_3, Y_2} \cdot Y_2 \] so that there are now two autoregressive effects and two cross-lagged effects each for both \(X\) and \(Y\). Note that there are no lag-2 effects, meaning that neither \(X_3\) nor \(Y_3\) are predicted by \(X_1\) and \(Y_1\). This assumption cannot be relaxed in semPower.powerCLPM. Consider power analyses for generic path models when you need lag-2 effects.

The population values for the autoregressive effects, the cross-lagged effects, and the (residual-)correlations are defined in autoregEffects, crossedEffects, and rXY, respectively. When these effects are assumed to be constant across waves (as must be the case when there are two waves), autoregEffects and crossedEffects are vectors of size 2, giving the effects of \(X\) and \(Y\). For instance, autoregEffects = c(.7, .6) defines the autoregressive effect(s) of \(X\) to be equal to .7, and those of \(Y\) to be equal to .6. Similarly, crossedEffects = c(.1, .2) defines the cross-lagged effect of \(X\) on \(Y\) (X -> Y) to be .1, and the cross-lagged effect of \(Y\) on \(X\) (Y -> X) to be .20. Non-zero (residual) correlations always need to refer to each wave, so in case of two waves, rXY is a vector comprising two entries, the first giving the correlation between \(X_1\) and \(Y_1\), the second the residual correlation between \(X_2\) and \(Y_2\).

When there are more than two waves and wave-dependent autoregressive or cross-lagged effects are assumed, a list of vectors is supplied. For instance, in the case of three waves, autoregEffects = list(c(.8, .7), c(.6, .5)) defines an autoregressive effect of .8 for X1 -> X2 and of .6 for X2 -> X3, and an autoregressive effect of .7 for Y1 -> Y2 and of .5 for Y2 -> Y3. When additionally specifying crossedEffects = list(c(.1, .2), c(.3, .4)), the system becomes:

\[ \hat{X_2} = .8 \cdot X_1 + .2 \cdot Y_1 \\ \hat{Y_2} = .1 \cdot X_1 + .7 \cdot Y_1 \\ \hat{X_3} = .6 \cdot X_2 + .4 \cdot Y_2 \\ \hat{Y_3} = .3 \cdot X_2 + .5 \cdot Y_2 \] By default, all parameters defined in autoregEffects, crossedEffects, and rXY are treated as completely standardized parameters (standardized = TRUE). In this case, semPower chooses the residual variances in \(\Psi\) such that all variances are 1. When standardized is set to FALSE, all diagonal elements of \(\Psi\) are set to 1 instead, in turn leading to unstandardized parameters.

All of the above holds regardless of whether \(X\) and \(Y\) are latent factors or observed variables. To define the measurement model for \(X\) and \(Y\), and thereby also whether these are latent factors or observed variables, arguments defining the factor model are used assuming the order (\(X_1\), \(Y_1\), \(X_2\), \(Y_2\), …, \(X_{nWaves}\), \(Y_{nWaves}\)). For instance, when there are two waves (nWaves = 2), the following shows three equivalent ways to define three indicators for \(X\) and four indicators for \(Y\) at both waves, where the loadings of each indicator on \(X\) are equal to .50 and the loadings on \(Y\) are equal to .60:

# using Lambda
Lambda <- matrix(c(
  #  X1   Y1   X2   Y2
  c(0.5, 0.0, 0.0, 0.0),
  c(0.5, 0.0, 0.0, 0.0),
  c(0.5, 0.0, 0.0, 0.0),
  c(0.0, 0.6, 0.0, 0.0),
  c(0.0, 0.6, 0.0, 0.0),
  c(0.0, 0.6, 0.0, 0.0),
  c(0.0, 0.6, 0.0, 0.0),
  c(0.0, 0.0, 0.5, 0.0),
  c(0.0, 0.0, 0.5, 0.0),
  c(0.0, 0.0, 0.5, 0.0),
  c(0.0, 0.0, 0.0, 0.6),
  c(0.0, 0.0, 0.0, 0.6),
  c(0.0, 0.0, 0.0, 0.6),
  c(0.0, 0.0, 0.0, 0.6)
), ncol = 4, byrow = TRUE)
# using loadings
loadings <- list(
  c(.5, .5, .5),       # X1
  c(.6, .6, .6, .6),   # Y1
  c(.5, .5, .5),       # X2
  c(.6, .6, .6, .6)    # Y2
)
# using nIndicator and loadM
nIndicator <- c(3, 4, 3, 4)    # X1, Y1, X2, Y2
loadM <- c(.5, .6, .5, .6)     # X1, Y1, X2, Y2

Generally, when homogeneous loadings are assumed, nIndicator and loadM are probably the way to go, whereas the simplest way to specify heterogeneous loadings is offered by the loadings argument. When \(X\) and \(Y\) shall be observed variables (rather than latent factors), dummy factors with a single indicator loading by 1 are defined, which is most easily done by setting Lambda = diag(p), where p is replaced by \(2 \cdot nWaves\), so in case of two waves this becomes Lambda = diag(4). For more details, see the chapter on the definition of the factor model.
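For instance, with two waves, \(X\) and \(Y\) become observed variables when specifying:

nWaves <- 2
Lambda <- diag(2 * nWaves)   # 4 dummy factors (X1, Y1, X2, Y2), each with a single indicator loading by 1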

Beyond the structure of the CLPM and particular values for the population parameters, semPower.powerCLPM also requires certain defaults that apply to both analysis models, i.e., to both the H0 model reflecting the hypothesis of interest and to the H1 comparison model. First, by default, the analysis models impose no equality constraints on the autoregressive effects, cross-lagged effects, and residual correlations over waves, i.e., the respective parameters (such as \(\beta_{X_2,X_1}\) and \(\beta_{X_3,X_2}\)) are freely estimated for each wave. The waveEqual argument can be set to change this behavior. For instance, when setting waveEqual = c('autoregX', 'autoregY'), the autoregressive effects of both \(X\) and \(Y\) are held equal over waves. When setting waveEqual = c('autoregX', 'autoregY', 'crossedX', 'crossedY'), both the autoregressive and the cross-lagged effects are constrained to equality over waves, so in this example the system becomes:

\[ \hat{X_2} = \beta_{X,X} \cdot X_1 + \beta_{X, Y} \cdot Y_1 \\ \hat{Y_2} = \beta_{Y,X} \cdot X_1 + \beta_{Y, Y} \cdot Y_1 \\ \hat{X_3} = \beta_{X,X} \cdot X_2 + \beta_{X, Y} \cdot Y_2 \\ \hat{Y_3} = \beta_{Y,X} \cdot X_2 + \beta_{Y, Y} \cdot Y_2 \]

To assume that the residual correlations between \(X_2\) and \(Y_2\) as well as between \(X_3\) and \(Y_3\) shall be equal, add 'corXY' to waveEqual. Although the constraints defined in waveEqual apply to both the H0 and the H1 model, this will nevertheless affect statistical power.

Second, when \(X\) and/or \(Y\) are latent factors, the analysis models impose metric invariance constraints (equal loadings) over waves for the measurement model by default (metricInvariance = TRUE) and also assume that the residuals of the indicators are autocorrelated across waves (autocorResiduals = TRUE). Both can be set to FALSE to disable the invariance constraints over waves and the estimation of autocorrelated residuals, respectively. However, it is generally recommended to interpret a CLPM structure only when invariance is met. In addition, the presence of invariance constraints as well as the presence of autocorrelated residuals also affects power for hypotheses concerning the CLPM parameters. Note that when metricInvariance = TRUE, the measurement model should be defined in such a way that the invariance constraints are actually met, i.e., by specifying identical loadings over waves for both \(X\) and \(Y\).

Detect whether a cross-lagged effect differs from zero

To perform a power analysis to detect whether the cross-lagged effect of either \(X\) (X -> Y) or \(Y\) (Y -> X) differs from zero, use nullEffect = 'crossedX = 0' or nullEffect = 'crossedY = 0'.

For instance, the following sets up a CLPM with two waves (nWaves = 2) and requests the required sample size (type = 'a-priori') to detect that a cross-lagged effect of \(X\) of at least .10 (crossedEffects = c(.10, .15)) differs from zero (nullEffect = 'crossedX = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The cross-lagged effect of \(Y\) is .15, the autoregressive effects of \(X\) and \(Y\) are .6 and .7 (autoregEffects = c(.6, .7)), respectively, and the synchronous (residual-)correlations at the first and second wave are .3 and .1 (rXY = c(.3, .1)), respectively. \(X\) is a latent factor measured by 5 indicators loading by .5 each (at both waves) and \(Y\) is measured by 3 indicators loading by .6 each (at both waves; nIndicator = c(5, 3, 5, 3) and loadM = c(.5, .6, .5, .6)). See the section on defining the CLPM structure for details.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'crossedX = 0',
                                nWaves = 2,
                                autoregEffects = c(.60, .70),
                                crossedEffects = c(.10, .15),
                                rXY = c(.3, .1),
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3),
                                loadM = c(.5, .6, .5, .6)
                                )
summary(powerCLPM)

The results of the power analysis are printed by calling the summary method on powerCLPM, which, in this example, provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'post-hoc', alpha = .05, N = 500,
                                # define hypothesis
                                nullEffect = 'crossedX = 0',
                                nWaves = 2,
                                autoregEffects = c(.60, .70),
                                crossedEffects = c(.10, .15),
                                rXY = c(.3, .1),
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3),
                                loadM = c(.5, .6, .5, .6)
                                )

Now, summary(powerCLPM) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.
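For instance, a compromise analysis for the same scenario might look like the following sketch, which assumes that, as in the model-free compromise analyses, the alpha/beta error ratio is supplied via abratio along with a fixed sample size N:

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                # (abratio = alpha/beta error ratio; assumed to be passed through
                                #  to the compromise analysis as in the model-free functions)
                                type = 'compromise', abratio = 1, N = 500,
                                # define hypothesis
                                nullEffect = 'crossedX = 0',
                                nWaves = 2,
                                autoregEffects = c(.60, .70),
                                crossedEffects = c(.10, .15),
                                rXY = c(.3, .1),
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3),
                                loadM = c(.5, .6, .5, .6)
                                )
summary(powerCLPM)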

If the model does not include any factor, but is based on observed variables, the only change refers to the definition of the measurement model. Below, Lambda = diag(4) defines four dummy factors with a single indicator loading by 1, so these become observed variables:

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'crossedX = 0',
                                nWaves = 2,
                                autoregEffects = c(.60, .70),
                                crossedEffects = c(.10, .15),
                                rXY = c(.3, .1),
                                # define measurement model
                                Lambda = diag(4)
                                )

If the model comprises more than two waves and the autoregressive effects, cross-lagged effects, and/or synchronous residual correlations are considered stable across waves, the waveEqual argument can be used to implement equality restrictions on these parameters in both the H0 and the H1 model. For instance, the following sets up a CLPM with three waves (nWaves = 3). Only a single value is provided for both \(X\) and \(Y\) concerning the autoregressive effects (autoregEffects = c(.60, .70)) and cross-lagged effects (crossedEffects = c(.10, .15)), so these are constant across waves. Similarly, rXY = c(.3, .1, .1) defines the residual correlations between \(X\) and \(Y\) at waves 2 and 3 to be equal to .10. The arguments provided to waveEqual ensure that the analysis models implement equality restrictions over waves on the autoregressive effects, the cross-lagged effects, and the residual correlations at waves 2 and 3. Both the H0 model and the comparison model implement these restrictions and merely differ with respect to the hypothesized effect (the absence of the cross-lagged effect of \(X\), nullEffect = 'crossedX = 0').

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'crossedX = 0',
                                nWaves = 3,
                                autoregEffects = c(.60, .70),
                                crossedEffects = c(.10, .15),
                                rXY = c(.3, .1, .1),
                                waveEqual = c('autoregX', 'autoregY', 'crossedX', 'crossedY', 'corXY'),
                                # define measurement model
                                Lambda = diag(6)
                                )

In all examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.
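For instance, the first example above could be evaluated against the saturated model by adding comparison = 'saturated' to the call:

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # compare the H0 model against the saturated model
                                comparison = 'saturated',
                                # define hypothesis
                                nullEffect = 'crossedX = 0',
                                nWaves = 2,
                                autoregEffects = c(.60, .70),
                                crossedEffects = c(.10, .15),
                                rXY = c(.3, .1),
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3),
                                loadM = c(.5, .6, .5, .6)
                                )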

Detect whether an autoregressive effect differs from zero

To perform a power analysis to detect whether the autoregressive effect of either \(X\) or \(Y\) differs from zero, use nullEffect = 'autoregX = 0' or nullEffect = 'autoregY = 0'.

For instance, the following sets up a CLPM with two waves (nWaves = 2) and requests the required sample size (type = 'a-priori') to detect that an autoregressive effect of \(X\) of at least .50 (autoregEffects = c(.50, .80)) differs from zero (nullEffect = 'autoregX = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The autoregressive effect of \(Y\) is .80, the cross-lagged effects of \(X\) and \(Y\) are .10 and .05 (crossedEffects = c(.10, .05)), respectively, and the synchronous (residual-)correlations at the first and second wave are .2 and .3 (rXY = c(.2, .3)), respectively. \(X\) is a latent factor measured by 3 indicators loading by .5 each (at both waves) and \(Y\) is measured by 3 indicators loading by .7 each (at both waves; nIndicator = c(3, 3, 3, 3) and loadM = c(.5, .7, .5, .7)). See the section on defining the CLPM structure for details.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'autoregX = 0',
                                nWaves = 2,
                                autoregEffects = c(.50, .80),
                                crossedEffects = c(.10, .05),
                                rXY = c(.2, .3),
                                # define measurement model
                                nIndicator = c(3, 3, 3, 3),
                                loadM = c(.5, .7, .5, .7)
                                )
Detect whether the cross-lagged effects of \(X\) and \(Y\) differ

To perform a power analysis to detect whether the cross-lagged effect of \(X\) (X -> Y) differs from the cross-lagged effect of \(Y\) (Y -> X), use nullEffect = 'crossedX = crossedY'.

For instance, the following sets up a CLPM with three waves (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the cross-lagged effect of \(X\) differs from the cross-lagged effect of \(Y\) (nullEffect = 'crossedX = crossedY') at the first wave (nullWhich = 1) on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The autoregressive effects of both \(X\) and \(Y\) are constant across waves (autoregEffects = c(.60, .70)), which is also an assumption made in both the H0 and the H1 model (waveEqual = c('autoregX', 'autoregY', 'corXY')). Likewise, the analysis models restrict the residual correlations between \(X\) and \(Y\) at the second and third wave to equality, which matches the defined values in the population (rXY = c(.3, .1, .1)). However, the cross-lagged effects of \(X\) and \(Y\) differ across waves (crossedEffects): the cross-lagged effects of \(X\) are .05 and .15 at wave 2 (X1 -> Y2) and 3 (X2 -> Y3), respectively, and the cross-lagged effects of \(Y\) are .15 and .20 at wave 2 (Y1 -> X2) and 3 (Y2 -> X3), respectively. Note that waveEqual does not constrain the cross-lagged effects to be equal across waves. The nullWhich = 1 argument indicates which of the cross-lagged effects of \(X\) and \(Y\) are targeted by the hypothesis, namely the first ones, i.e., the effects from wave 1 to wave 2 (X1 -> Y2 and Y1 -> X2). Finally, \(X\) is defined to be a latent factor measured by 10 indicators loading by .7 each (at all waves) and \(Y\) is measured by 8 indicators loading by .6 each (at all waves; nIndicator = c(10, 8, 10, 8, 10, 8) and loadM = c(.7, .6, .7, .6, .7, .6)). See the section on defining the CLPM structure for details.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'crossedX = crossedY',
                                nullWhich = 1, 
                                nWaves = 3,
                                autoregEffects = c(.60, .70),
                                crossedEffects = list(
                                  c(.05, .15),     # X1 -> Y2; X2 -> Y3
                                  c(.15, .20)      # Y1 -> X2; Y2 -> X3
                                ),
                                rXY = c(.3, .1, .1),
                                waveEqual = c('autoregX', 'autoregY', 'corXY'),
                                # define measurement model
                                nIndicator = c(10, 8, 10, 8, 10, 8),
                                loadM = c(.7, .6, .7, .6, .7, .6)
                                )
Detect whether the autoregressive effects of \(X\) and \(Y\) differ

To perform a power analysis to detect whether the autoregressive effect of \(X\) differs from the autoregressive effect of \(Y\), use nullEffect = 'autoregX = autoregY'.

For instance, the following sets up a CLPM in the same way as in the previous section and requests the required sample size (type = 'a-priori') to detect that the autoregressive effect of \(X\) differs from the autoregressive effect of \(Y\) (nullEffect = 'autoregX = autoregY') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Because the autoregressive effects of both \(X\) and \(Y\) are constant across waves (autoregEffects = c(.60, .70)), which is also an assumption made in both the H0 and the H1 model (waveEqual = c('autoregX', 'autoregY', 'corXY')), there is in fact only one autoregressive effect each for \(X\) and \(Y\), so the nullWhich argument can be omitted.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'autoregX = autoregY',
                                nWaves = 3,
                                autoregEffects = c(.60, .70),
                                crossedEffects = list(
                                  c(.05, .15),     # X1 -> Y2; X2 -> Y3
                                  c(.15, .20)      # Y1 -> X2; Y2 -> X3
                                ),
                                rXY = c(.3, .1, .1),
                                waveEqual = c('autoregX', 'autoregY', 'corXY'),
                                # define measurement model
                                nIndicator = c(10, 8, 10, 8, 10, 8),
                                loadM = c(.7, .6, .7, .6, .7, .6)
                                )
Detect whether the cross-lagged effects differ across waves

To perform a power analysis to detect whether the cross-lagged effects of either \(X\) or \(Y\) differ across waves, use nullEffect = 'crossedX' or nullEffect = 'crossedY'. This is only meaningful when there are at least 3 waves.

For instance, the following sets up a CLPM with three waves (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the cross-lagged effects of \(X\) differ across waves (nullEffect = 'crossedX') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Both the cross-lagged effects of \(X\) and \(Y\) differ across waves (crossedEffects): the cross-lagged effects of \(X\) are .05 and .15 at wave 2 (X1 -> Y2) and 3 (X2 -> Y3), respectively, and the cross-lagged effects of \(Y\) are .15 and .20 at wave 2 (Y1 -> X2) and 3 (Y2 -> X3), respectively. Similarly, the autoregressive effects of \(X\) (but not \(Y\)) differ across waves as well (autoregEffects). The analysis models restrict the residual correlations between \(X\) and \(Y\) at the second and third wave to equality (waveEqual = c('corXY')), which matches the defined values in the population (rXY = c(.3, .1, .1)). Finally, \(X\) is defined to be a latent factor measured by 10 indicators loading by .7 each (at all waves) and \(Y\) is measured by 8 indicators loading by .6 each (at all waves; nIndicator = c(10, 8, 10, 8, 10, 8) and loadM = c(.7, .6, .7, .6, .7, .6)). See the section on defining the CLPM structure for details.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'crossedX',
                                nWaves = 3,
                                autoregEffects = list(
                                  c(.60, .70),     # X1 -> X2; X2 -> X3
                                  c(.70, .70)      # Y1 -> Y2; Y2 -> Y3
                                ),
                                crossedEffects = list(
                                  c(.05, .15),     # X1 -> Y2; X2 -> Y3
                                  c(.15, .20)      # Y1 -> X2; Y2 -> X3
                                ),
                                rXY = c(.3, .1, .1),
                                waveEqual = c('corXY'),
                                # define measurement model
                                nIndicator = c(10, 8, 10, 8, 10, 8),
                                loadM = c(.7, .6, .7, .6, .7, .6)
                                )
Detect whether the autoregressive effects differ across waves

To perform a power analysis to detect whether the autoregressive effects of either \(X\) or \(Y\) differ across waves, use nullEffect = 'autoregX' or nullEffect = 'autoregY'. This is only meaningful when there are at least 3 waves.

For instance, the following sets up a CLPM in the same way as in the previous section and requests the required sample size (type = 'a-priori') to detect that the autoregressive effect of \(X\) differs across waves (nullEffect = 'autoregX') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). Note that the autoregressive effects of \(Y\) do not change over waves, so in this example performing a power analysis with nullEffect = 'autoregY' would not be possible, as there is no difference to detect.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'autoregX',
                                nWaves = 3,
                                autoregEffects = list(
                                  c(.60, .70),     # X1 -> X2; X2 -> X3
                                  c(.70, .70)      # Y1 -> Y2; Y2 -> Y3
                                ),
                                crossedEffects = list(
                                  c(.05, .15),     # X1 -> Y2; X2 -> Y3
                                  c(.15, .20)      # Y1 -> X2; Y2 -> X3
                                ),
                                rXY = c(.3, .1, .1),
                                waveEqual = c('corXY'),
                                # define measurement model
                                nIndicator = c(10, 8, 10, 8, 10, 8),
                                loadM = c(.7, .6, .7, .6, .7, .6)
                                )
Detect whether the residual correlations between \(X\) and \(Y\) differ across waves

To perform a power analysis to detect whether the residual correlations between \(X\) and \(Y\) differ across waves, use nullEffect = 'corXY'. This is only meaningful when there are at least 3 waves, because otherwise there is only a single residual correlation (namely the one at wave 2; the synchronous correlation at wave 1 is not residualized and therefore not subject to any equality constraint).

For instance, the following sets up a CLPM with three waves (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the synchronous correlation between \(X\) and \(Y\) at wave 2 differs from the one at wave 3 (nullEffect = 'corXY') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The synchronous correlations are defined to be .3, .2, and .1 at the first, second, and third wave, respectively (rXY = c(.3, .2, .1)). The cross-lagged effects of both \(X\) and \(Y\) (crossedEffects = c(.05, .15)) are constant across waves, as is the autoregressive effect of \(Y\), whereas the autoregressive effect of \(X\) is not (autoregEffects). The analysis models restrict the cross-lagged effects and the autoregressive effect of \(Y\) to be equal across waves (waveEqual = c('crossedX', 'crossedY', 'autoregY')). Finally, \(X\) is defined to be a latent factor measured by 10 indicators loading by .7 each (at all waves) and \(Y\) is measured by 8 indicators loading by .6 each (at all waves; nIndicator = c(10, 8, 10, 8, 10, 8) and loadM = c(.7, .6, .7, .6, .7, .6)). See the section on defining the CLPM structure for details.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80,
                                # define hypothesis
                                nullEffect = 'corXY',
                                nWaves = 3,
                                autoregEffects = list(
                                  c(.60, .70),     # X1 -> X2; X2 -> X3
                                  c(.70, .70)      # Y1 -> Y2; Y2 -> Y3
                                ),
                                crossedEffects = c(.05, .15),
                                rXY = c(.3, .2, .1),
                                waveEqual = c('crossedX', 'crossedY', 'autoregY'),
                                standardized = FALSE, 
                                # define measurement model
                                nIndicator = c(10, 8, 10, 8, 10, 8),
                                loadM = c(.7, .6, .7, .6, .7, .6)
                                )

Whereas all previous examples were based on standardized effects (by omitting the standardized argument which defaults to TRUE), hypotheses referring to the residual correlations require standardized to be set to FALSE, because the equality restrictions are always applied on unstandardized parameters, which in case of the residual correlations generally differ from the standardized parameters.

Detect whether the cross-lagged effect of \(X\) or \(Y\) differs across groups

To perform a power analysis to detect whether the cross-lagged effect of \(X\) or the one of \(Y\) differs across groups, use nullEffect = 'crossedXA = crossedXB' or nullEffect = 'crossedYA = crossedYB'.

In multigroup analyses, autoregEffects and/or crossedEffects must be provided in a list structure, where each component specifies the values for a particular group. For instance, the following sets up a CLPM with three waves (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the cross-lagged effect of \(X\) differs across groups (nullEffect = 'crossedXA = crossedXB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). In the first group, the cross-lagged effect of \(X\) is .10 in all waves; in the second group, it is .20 in all waves (crossedEffects). The cross-lagged effect of \(Y\) is .05 in both groups and in all waves. The autoregressive effects of \(X\) and \(Y\) are also constant across waves, but differ from each other and, concerning \(X\), also across groups (.8 in the first group and .6 in the second group, autoregEffects). The synchronous correlations are zero in all waves and in both groups (rXY = NULL). Further, both the H0 and the H1 model assume the autoregressive and cross-lagged effects to be equal across waves (but not across groups; waveEqual = c('autoregX', 'autoregY', 'crossedX', 'crossedY')). Because the cross-lagged effects of \(X\) are held constant over waves, in the H0 model only a single parameter differs across groups, so the df of the power analysis become 1.

Since semPower.powerCLPM by default applies metric invariance constraints across both waves and groups (metricInvariance = TRUE), the measurement model is defined to be identical in both groups: \(X\) is a latent factor measured by 5 indicators loading by .5 each (at all three waves) and \(Y\) is measured by 3 indicators loading by .4 each (at all three waves; nIndicator and loadM). Furthermore, in multiple group models, the N argument also needs to be provided as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                # define hypothesis
                                nullEffect = 'crossedXA = crossedXB',
                                nWaves = 3,
                                autoregEffects = list(
                                  # group 1
                                  list(c(.80, .80),   # X1 -> X2, X2 -> X3 
                                       c(.70, .70)),  # Y1 -> Y2, Y2 -> Y3
                                  # group 2
                                  list(c(.60, .60),   # X1 -> X2, X2 -> X3
                                       c(.70, .70))   # Y1 -> Y2, Y2 -> Y3
                                ),
                                crossedEffects = list(
                                  # group 1
                                  list(c(.10, .10),   # X1 -> Y2, X2 -> Y3 
                                       c(.05, .05)),  # Y1 -> X2, Y2 -> X3
                                  # group 2
                                  list(c(.20, .20),   # X1 -> Y2, X2 -> Y3
                                       c(.05, .05))   # Y1 -> X2, Y2 -> X3
                                ),
                                waveEqual = c('autoregX', 'autoregY', 'crossedX', 'crossedY'),
                                rXY = NULL,
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3, 5, 3),
                                loadM = c(.5, .4, .5, .4, .5, .4)
                                )

If there are more than two groups, the targeted cross-lagged effect is held equal across all groups by default. If the cross-lagged effect should only be constrained to equality in specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, the following defines three equally sized groups, but only asks for the required sample size to detect that the cross-lagged effect of \(X\) in group 1 (of .05) differs from the one in group 3 (of .20; nullWhichGroups = c(1, 3)). Also, because the cross-lagged effects of \(X\) are not assumed to be constant across waves, nullWhich identifies which of the cross-lagged effects is constrained to equality across groups.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
                                # define hypothesis
                                nullEffect = 'crossedXA = crossedXB',
                                nullWhich = 1,
                                nullWhichGroups = c(1, 3),
                                nWaves = 3,
                                autoregEffects = c(.80, .70),  # (X, Y); constant across groups and waves
                                crossedEffects = list(
                                  # group 1
                                  list(c(.05, .20),   # X1 -> Y2, X2 -> Y3 
                                       c(.00, .05)),  # Y1 -> X2, Y2 -> X3
                                  # group 2
                                  list(c(.10, .10),   # X1 -> Y2, X2 -> Y3 
                                       c(.15, .15)),  # Y1 -> X2, Y2 -> X3
                                  # group 3
                                  list(c(.20, .20),   # X1 -> Y2, X2 -> Y3
                                       c(.05, .10))   # Y1 -> X2, Y2 -> X3
                                ),
                                waveEqual = c('autoregX', 'autoregY'),
                                rXY = c(.3, .3, .3),  # for all groups
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3, 5, 3),
                                loadM = c(.5, .4, .5, .4, .5, .4)
                                )
Detect whether the autoregressive effect of \(X\) or \(Y\) differs across groups

To perform a power analysis to detect whether the autoregressive effect of \(X\) or \(Y\) differs across groups, use nullEffect = 'autoregXA = autoregXB' or nullEffect = 'autoregYA = autoregYB'.

For instance, the following defines a CLPM similar to the example in the previous section, with three alterations: First, no list is provided as argument to crossedEffects, so the cross-lagged effects of both \(X\) and \(Y\) are assumed to be both constant across waves and equal across groups. Second, both the H0 and the H1 model merely assume the cross-lagged effects to be constant across waves (waveEqual = c('crossedX', 'crossedY')). Third, the autoregressive effect of \(X\) differs both across waves and across groups (autoregEffects). Specifically, whereas the autoregressive effect of \(X\) from wave 1 to wave 2 is .80 in both groups, the autoregressive effect of \(X\) from wave 2 to wave 3 is .80 in the first group, but .60 in the second group. Because there are now multiple autoregressive effects for \(X\), nullWhich = 2 is used to identify the effect on which the cross-group constraint should be applied.

powerCLPM <- semPower.powerCLPM(
                                # define type of power analysis
                                type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                # define hypothesis
                                nullEffect = 'autoregXA = autoregXB',
                                nullWhich = 2,
                                nWaves = 3,
                                autoregEffects = list(
                                  # group 1
                                  list(c(.80, .80),   # X1 -> X2, X2 -> X3 
                                       c(.70, .70)),  # Y1 -> Y2, Y2 -> Y3
                                  # group 2
                                  list(c(.80, .60),   # X1 -> X2, X2 -> X3
                                       c(.70, .70))   # Y1 -> Y2, Y2 -> Y3
                                ),
                                crossedEffects = c(.10, .05), # (X->Y, Y->X) in all groups and all waves
                                waveEqual = c('crossedX', 'crossedY'),
                                rXY = NULL,
                                # define measurement model
                                nIndicator = c(5, 3, 5, 3, 5, 3),
                                loadM = c(.5, .4, .5, .4, .5, .4)
                                )

4.13 RI-CLPM models

semPower.powerRICLPM is used to perform power analyses for hypotheses arising in random intercept cross-lagged panel models (RI-CLPM). Given that RI-CLPMs extend traditional CLPMs, the following only covers the differences between semPower.powerRICLPM and semPower.powerCLPM. Consult the chapter on the traditional CLPM for a detailed description of the relevant parameters and arguments shared by both methods.

The arguments expected by semPower.powerRICLPM are identical to those expected by semPower.powerCLPM, with the following exceptions:

  • nWaves must be \(\geq\) 3.
  • rBXBY defines the correlation between the random intercept factors (or NULL for no correlation).
  • nullEffect also accepts 'corBXBY = 0' to constrain the correlation between the random intercept factors to zero and 'corBXBYA = corBXBYB' to constrain the correlation between the random intercept factors to be equal across groups.
  • Standardized parameters (standardized) are not available.
Detect whether the correlation between the random intercept factors differs from zero

To perform a power analysis to detect whether the correlation between the random intercept factors differs from zero, use nullEffect = 'corBXBY = 0'.

For instance, the following sets up a RI-CLPM with three waves (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the correlation between the random intercept factors (of .30; rBXBY = .30) differs from zero (nullEffect = 'corBXBY = 0') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). The autoregressive effects of \(X\) and \(Y\) are .5 and .4 (autoregEffects = c(.5, .4)), respectively. The cross-lagged effect of \(X\) is .2 and the cross-lagged effect of \(Y\) is .1 (crossedEffects = c(.2, .1)). Both the autoregressive and the cross-lagged effects of \(X\) and \(Y\) are constant across waves in the population (but freely estimated in each wave, as waveEqual is not set). All synchronous correlations are zero (rXY = NULL). For more details on the arguments, see the chapter on traditional CLPMs.

powerRICLPM <- semPower.powerRICLPM(
                                    # define type of power analysis
                                    type = 'a-priori', alpha = .05, power = .80,
                                    # define hypothesis
                                    nullEffect = 'corBXBY = 0',
                                    nWaves = 3,
                                    autoregEffects = c(.5, .4),   # (X, Y)
                                    crossedEffects = c(.2, .1),   # (X, Y)
                                    rXY = NULL,                   # zero synchronous correlations
                                    rBXBY = .30,                  
                                    # define measurement model
                                    nIndicator = rep(3, 6),
                                    loadM = rep(c(.5, .6), 3)
                                    )
Detect whether the correlations between the random intercept factors differ across groups

To perform a power analysis to detect whether the correlation between the random intercept factors differs across groups, use nullEffect = 'corBXBYA = corBXBYB'.

When performing a multigroup analysis, autoregEffects and/or crossedEffects must be provided in a list structure, where each component specifies the values for a particular group. When aiming to detect a group difference in the correlation between the intercept factors, rBXBY must be a list as well.

For instance, the following sets up a RI-CLPM with three waves (nWaves = 3) and requests the required sample size (type = 'a-priori') to detect that the correlation between the random intercept factors in group 1 differs from the correlation in group 2 (nullEffect = 'corBXBYA = corBXBYB') on alpha = .05 (alpha = .05) with a power of 80% (power = .80). In the first group, the correlation between the random intercept factors is .3, in the second group .6 (rBXBY = list(.3, .6)). There are further group differences in the cross-lagged effects (crossedEffects), whereas the autoregressive effects of both \(X\) and \(Y\) are constant across waves and groups (but freely estimated for each wave and group). For the remaining arguments, see the chapter on traditional CLPMs.

powerRICLPM <- semPower.powerRICLPM(
                                    # define type of power analysis
                                    type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
                                    # define hypothesis
                                    nullEffect = 'corBXBYA = corBXBYB',
                                    nWaves = 3,
                                    autoregEffects = c(.5, .4),   # constant across waves and groups
                                    crossedEffects = list(
                                    # group 1
                                    list(
                                      c(.25, .10),    # X1 -> Y2, X2 -> Y3 
                                      c(.05, .15)     # Y1 -> X2, Y2 -> X3 
                                      ),
                                    # group 2
                                    list(
                                       c(.15, .05),   # X1 -> Y2, X2 -> Y3 
                                       c(.01, .10)    # Y1 -> X2, Y2 -> X3 
                                       )
                                    ),
                                    rXY = NULL,                   # zero synchronous correlations
                                    rBXBY = list(.3, .6),         # (group1, group2) 
                                    # define measurement model (same for all groups)
                                    nIndicator = rep(3, 6),
                                    loadM = rep(c(.5, .6), 3)
                                    )

4.14 Latent growth curve models

semPower.powerLGCM is used to perform power analyses for hypotheses arising in latent growth curve models (LGCM), where one variable \(X\) is repeatedly assessed at different occasions (waves). A latent intercept factor and a linear (and optionally a quadratic) latent slope factor are assumed, representing the random effects that capture individual differences in change over time, and there might also be a time-invariant covariate. semPower.powerLGCM provides interfaces to perform power analyses concerning the following hypotheses:

  • whether the mean of the intercept (nullEffect = 'iMean = 0'), linear slope ('sMean = 0'), or quadratic slope ('s2Mean = 0') factor, respectively, is zero.
  • whether the variance of the intercept ('iVar = 0'), linear slope ('sVar = 0'), or quadratic slope ('s2Var = 0') factor, respectively, is zero.
  • whether the covariance between the intercept and linear slope ('isCov = 0'), between the intercept and the quadratic slope ('is2Cov = 0'), or between the linear and the quadratic slope ('ss2Cov = 0') is zero.
  • whether the slope for an exogenous time-invariant covariate in the prediction of the intercept factor ('betaIT = 0'; \(I = \beta_{it} \cdot TIC\)), the linear slope factor ('betaST = 0'; \(S = \beta_{st} \cdot TIC\)), or the quadratic slope factor ('betaS2T = 0'; \(S^2 = \beta_{s^2t} \cdot TIC\)), respectively, is zero.
  • whether the slope for the intercept factor ('betaTI = 0'), the linear slope factor ('betaTS = 0'), or the quadratic slope factor ('betaTS2 = 0'), respectively, in the prediction of an endogenous time-invariant covariate is zero (i.e., \(TIC = \beta_{ti} \cdot I + \beta_{ts} \cdot S + \beta_{ts^2} \cdot S^2\)).
  • whether the means of the intercept ('iMeanA = iMeanB'), linear slope ('sMeanA = sMeanB'), or quadratic slope ('s2MeanA = s2MeanB') factor, respectively, are equal across groups.
  • whether the variances of the intercept ('iVarA = iVarB'), linear slope ('sVarA = sVarB'), or quadratic slope ('s2VarA = s2VarB') factor, respectively, are equal across groups.
  • whether the covariances between the intercept and linear slope ('isCovA = isCovB'), between the intercept and the quadratic slope ('is2CovA = is2CovB'), or between the linear and the quadratic slope ('ss2CovA = ss2CovB') factors are equal across groups.
  • whether the slopes for the time-invariant covariate in the prediction of the intercept ('betaITA = betaITB'), the linear slope ('betaSTA = betaSTB'), or the quadratic slope ('betaS2TA = betaS2TB') factor, respectively, are equal across groups.
  • whether the slopes for the intercept ('betaTIA = betaTIB'), the linear slope ('betaTSA = betaTSB'), or the quadratic slope ('betaTS2A = betaTS2B') factor, respectively, in the prediction of the time-invariant covariate are equal across groups.

For hypotheses regarding longitudinal invariance, see semPower.powerLI().

Note that power analyses concerning the hypotheses iVar = 0, sVar = 0, and s2Var = 0 are only approximate, because the H0 model involves a parameter constraint on the boundary of the parameter space (a variance of zero), so that the correct limiting distribution is a mixture of non-central \(\chi^2\) distributions (see Stoel et al., 2006). In effect, power is (slightly) underestimated.

semPower.powerLGCM expects the following arguments:

  • nWaves: The number of waves (measurement occasions), must be \(\geq\) 3 for linear and \(\geq\) 4 for quadratic trends.
  • quadratic: Whether to include a quadratic slope factor in addition to a linear slope factor. Defaults to FALSE for no quadratic slope factor.
  • means: Vector of means for the intercept factor, linear slope factor, and quadratic slope factor (if present), e.g. c(.5, .25, .1) for a mean of the intercept factor of .5, of the linear slope factor of .25, and of the quadratic slope factor of .1.
  • variances: Vector of variances for the intercept factor, linear slope factor, and quadratic slope (if present), e.g. c(1, .5, .2) for a variance of the intercept factor of 1, of the linear slope factor of .5, and of the quadratic slope factor of .2. Can be omitted if a matrix is provided to the covariances argument.
  • covariances: Either the variance-covariance matrix between the intercept and the linear slope (and the quadratic slope factor if present), or a single number giving the covariance between the intercept and linear slope factor, or NULL for orthogonal factors. If a matrix is provided and the variances argument is also set, the diagonal is replaced by the values defined in variances.
  • timeCodes: Vector of length nWaves defining the loadings on the slope factor. If omitted, the time codes default to \((0, 1, ..., (nWaves - 1))\).
  • ticExogSlopes: Vector defining the slopes for an exogenous time-invariant covariate (TIC) in the prediction of the intercept and slope factors (and the quadratic slope factor, if present), e.g. c(.5, .4, .3) for \(I = .5 \cdot TIC\), \(S = .4 \cdot TIC\), and \(S^2 = .3 \cdot TIC\). Can be omitted for no exogenous covariate.
  • ticEndogSlopes: Vector defining the slopes for the intercept and slope factors (and the quadratic slope factor, if present) in the prediction of an endogenous time-invariant covariate (TIC), e.g. c(.5, .4, .3) for \(TIC = .5 \cdot I + .4 \cdot S + .3 \cdot S^2\). Can be omitted for no endogenous covariate.
  • groupEqual: Parameters that are restricted to equality across groups in both the H0 and the H1 model, when nullEffect implies a multiple group model. Valid are 'imean', 'smean', 's2mean' to restrict the means of the intercept, linear slope, and quadratic slope factors, and 'ivar', 'svar', 's2var' for their variances, and 'iscov', 'is2cov', 'ss2cov' for the covariances between intercept and slope, intercept and quadratic slope, and linear and quadratic slope, respectively.
  • nullEffect: Defines the hypothesis of interest. See the previous paragraph for valid arguments.
  • nullWhichGroups: For hypotheses involving cross-group comparisons, a vector indicating the groups to which the equality constraints should be applied. By default, the relevant parameter(s) are restricted to equality across all groups.
  • autocorResiduals: Whether the residuals of the indicators of \(X\) are autocorrelated over waves (TRUE, the default) or not (FALSE).
  • additional arguments specifying the type of power analysis.
  • additional arguments defining the factor model, where the order of factors is (\(X_1\), \(X_2\), …, \(X_{nWaves}\), \(TIC_{exog}\), \(TIC_{endog}\)), where \(TIC_{endog}\) takes the place of \(TIC_{exog}\) if there is no exogenous TIC.

semPower.powerLGCM provides a list as result. Use the summary method to obtain formatted results. The list contains the following components:

  • The results of the power analysis, which contain the same information as the corresponding model-free counterpart (see a-priori power analysis, post-hoc power analysis, and compromise power analysis).
  • Sigma and mu: Variance-covariance matrix and means in the population.
  • SigmaHat and muHat: Model-implied variance-covariance matrix and means.
  • modelH0 and modelH1: lavaan model strings defining the H0 and the H1 model (only if comparison = 'restricted').
  • simRes: Detailed simulation results when a simulated power analysis (simulatedPower = TRUE) was performed.
Detect that the mean of the intercept, linear or quadratic slope factor differs from zero

To perform a power analysis to detect whether the mean of the intercept factor, the slope factor, or the quadratic slope factor (if present) differs from zero, use nullEffect = 'iMean = 0', nullEffect = 'sMean = 0', and nullEffect = 's2Mean = 0', respectively.

For instance, the following requests the required sample size (type = 'a-priori') to detect that the mean of the slope factor (nullEffect = 'sMean = 0') differs from zero with a power of 80% (power = .80) on alpha = .05 (alpha = .05). The model comprises an attribute measured at three occasions (nWaves = 3) by 3 indicators each (nIndicator = rep(3, 3)), where all loadings are equal to .50 (loadM = .5). See the chapter on specifying a factor model for alternative (more flexible) ways to define the factor loadings. The intercept factor has a mean of .5 (means = c(.5, .2)) and a variance of 1 (variances = c(1, .5)). The slope factor has a mean of .2 and a variance of .5. The covariance between the intercept and the slope factor is .25 (covariances = .25). In addition, the model comprises autocorrelated indicator residuals across time, because the autocorResiduals argument (which defaults to TRUE) is omitted. Also, loadings and indicator intercepts are always constrained to be equal across measurement occasions.

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),     # i, s
  variances = c(1, .5),  # i, s
  covariances = .25,
  nullEffect = 'sMean = 0',
  # define measurement model
  nIndicator = rep(3, 3), loadM = .5
)
summary(powerLGCM)

The results of the power analysis are printed by calling the summary method on powerLGCM, which in this example provides the same information as a model-free a-priori power analysis counterpart.

If a post hoc power analysis is desired, the arguments related to the power analysis need to be adapted accordingly:

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'post-hoc', alpha = .05, N = 300,
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),     # i, s
  variances = c(1, .5),  # i, s
  covariances = .25,
  nullEffect = 'sMean = 0',
  # define measurement model
  nIndicator = rep(3, 3), loadM = .5
)

Now, summary(powerLGCM) provides the same information as a model-free post-hoc power analysis counterpart. A compromise power analysis (type = 'compromise') is performed analogously.

In the examples above, a power analysis was performed by comparing the implied H0 model against a less restrictive H1 model (by omitting the comparison argument which defaults to 'restricted'). If one rather wants to compare the H0 model against the saturated model, use comparison = 'saturated'. See the chapter on the definition of the comparison model for a detailed discussion.

Instead of defining the variances and the covariances separately, a variance-covariance matrix can also be provided to the covariances argument. For instance, the following is equivalent to the example above:

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),     # i, s
  covariances = matrix(c(
      # i, s
    c(1, .25),
    c(.25, .5)
    ), ncol = 2, byrow = TRUE),
  nullEffect = 'sMean = 0',
  # define measurement model
  nIndicator = rep(3, 3), loadM = .5
)

If the LGCM should be based on observed rather than latent indicator variables (yielding a first-order LGCM), the only change refers to the definition of the measurement model. Below, Lambda = diag(3) defines three dummy factors with a single indicator loading by 1, so these become observed variables:

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),     # i, s
  variances = c(1, .5),  # i, s
  covariances = .25,
  nullEffect = 'sMean = 0',
  # define measurement model
  Lambda = diag(3)
)

It is also possible to include a quadratic slope factor by adding quadratic = TRUE. For instance, the following defines a LGCM involving 4 measurements (nWaves = 4) and includes a quadratic slope factor (quadratic = TRUE) in addition to the intercept and the linear slope factor. means now gives the means for the intercept, linear, and quadratic slope factors, and covariances the associated variance-covariance matrix. Thus, the quadratic slope factor has a mean of .10, a variance of .10, and exhibits a covariance of .10 to the intercept factor and of .05 to the slope factor. Then, the required sample size to detect that the mean of the quadratic slope factor differs from zero (nullEffect = 's2Mean = 0') is requested.

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 4,
  quadratic = TRUE, 
  means = c(.5, .2, .1),     # i, s, s2
  covariances = matrix(c(
    # i,   s,   s2
    c(1, .25, .10),
    c(.25, .5, .05),
    c(.10, .05, .10)
  ), ncol = 3, byrow = TRUE),
  nullEffect = 's2Mean = 0',
  # define measurement model
  nIndicator = rep(3, 4), loadM = .5
)
Detect that the variance of the intercept, linear or quadratic slope factor differs from zero

Performing a power analysis concerning the hypothesis that the variance of the intercept, the linear slope, or the quadratic slope factor is zero proceeds largely identically as described above, with the exception that nullEffect now refers to the variances. To detect that the variance of the intercept factor differs from zero, use nullEffect = 'iVar = 0'; to detect that the variance of the slope factor differs from zero, use nullEffect = 'sVar = 0'; and to detect that the variance of the quadratic slope factor differs from zero, use nullEffect = 's2Var = 0'.

For instance, the following defines a LGCM involving 4 measurements (nWaves = 4) and includes a quadratic slope factor (quadratic = TRUE) in addition to the intercept and the linear slope factor. The variances of the intercept, linear slope, and quadratic slope factors are 1, .5, and .10 (the diagonal elements of the matrix provided to covariances). Then, the required sample size to detect that the variance of the intercept factor differs from zero (nullEffect = 'iVar = 0') is requested.

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 4,
  quadratic = TRUE, 
  means = c(.5, .2, .1),     # i, s, s2
  covariances = matrix(c(
    # i,   s,   s2
    c(1, .25, .10),
    c(.25, .5, .05),
    c(.10, .05, .10)
  ), ncol = 3, byrow = TRUE),
  nullEffect = 'iVar = 0',
  # define measurement model
  nIndicator = rep(3, 4), loadM = .5
)
Detect that a covariance between intercept, linear or quadratic slope factors differs from zero

Performing a power analysis concerning the hypothesis that a covariance between the intercept, linear slope, and quadratic slope factors is zero again proceeds largely as described above, with the exception that nullEffect now refers to the covariances. To detect that the covariance between the intercept and the linear slope factor differs from zero, use nullEffect = 'isCov = 0', to detect that the covariance between the intercept and the quadratic slope factor differs from zero, use nullEffect = 'is2Cov = 0', and to detect that the covariance between the linear and the quadratic slope factor differs from zero, use nullEffect = 'ss2Cov = 0'.

For instance, the following defines a LGCM involving 4 measurements (nWaves = 4) and includes a quadratic slope factor (quadratic = TRUE) in addition to the intercept and the linear slope factor. The covariance between the intercept and the linear slope is .25, the covariance between the intercept and the quadratic slope is .10, and the covariance between the linear and the quadratic slope factors is .05 (the off-diagonal elements of the matrix provided to covariances). Then, the required sample size to detect that the covariance between the intercept and the linear slope factor differs from zero (nullEffect = 'isCov = 0') is requested.

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 4,
  quadratic = TRUE, 
  means = c(.5, .2, .1),     # i, s, s2
  covariances = matrix(c(
    # i,   s,   s2
    c(1, .25, .10),
    c(.25, .5, .05),
    c(.10, .05, .10)
  ), ncol = 3, byrow = TRUE),
  nullEffect = 'isCov = 0',
  # define measurement model
  nIndicator = rep(3, 4), loadM = .5
)
Detect that a slope of a time-invariant covariate differs from zero

To perform a power analysis to detect whether the slope of an exogenous time-invariant covariate (TIC) differs from zero, use nullEffect = 'betaIT = 0' for the prediction of the intercept factor, nullEffect = 'betaST = 0' for the prediction of the linear slope factor, and nullEffect = 'betaS2T = 0' for the prediction of the quadratic slope factor. These types of hypotheses assume that an additional exogenous TIC is present, which is used to predict the intercept factor, the slope factor, and the quadratic slope factor, implying the following regression equations: \[\hat{I} = \beta_{IT} \cdot TIC \\ \hat{S} = \beta_{ST} \cdot TIC \\ \hat{S^2} = \beta_{S^2T} \cdot TIC\]

For instance, the following defines a LGCM involving 3 waves along with an additional exogenous time-invariant covariate. The measurement model for the covariate also needs to be defined, which in the example is done via the loadings argument. An exogenous covariate is defined after the factors associated with the measurements of \(X\), so in the present example factors 1 - 3 refer to the repeated assessments of the attribute under consideration, and the fourth factor is the covariate. Here, the covariate is measured by 4 indicators loading by .8, .6, .5, and .6. The slopes of the covariate in the prediction of the intercept and slope factors are defined in ticExogSlopes, so that the slope of the covariate in the prediction of the intercept factor is .1, and the slope in the prediction of the slope factor is .5 (ticExogSlopes = c(.1, .5)). Then, the required sample size is requested to detect that the slope of the covariate in the prediction of the slope factor differs from zero (nullEffect = 'betaST = 0').

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),           # i, s
  variances = c(1, 1),         # i, s
  covariances = .3,
  ticExogSlopes = c(.1, .5),   # i, s
  nullEffect = 'betaST = 0',
  # define measurement model
  loadings = list(
    c(.5, .6, .7),       # x1
    c(.5, .6, .7),       # x2
    c(.5, .6, .7),       # x3  
    c(.8, .6, .5, .6)    # tic
  )
)
Detect that a slope in the prediction of a time-invariant covariate differs from zero

To perform a power analysis to detect whether a slope in the prediction of an endogenous time-invariant covariate (TIC) differs from zero, use nullEffect = 'betaTI = 0' for the slope of the intercept factor, nullEffect = 'betaTS = 0' for the slope of the linear slope factor, and nullEffect = 'betaTS2 = 0' for the slope of the quadratic slope factor. These types of hypotheses assume that an additional endogenous TIC is present, which is predicted by the intercept factor, the slope factor, and the quadratic slope factor, implying the following regression equation: \[\widehat{TIC} = \beta_{TI} \cdot I + \beta_{TS} \cdot S + \beta_{TS^2} \cdot S^2\]

For instance, the following defines a LGCM involving 3 waves along with an additional endogenous time-invariant covariate. The measurement model for the covariate also needs to be defined, which in the example is done via the loadings argument. An endogenous covariate is defined after the factors associated with the measurements of \(X\), and - if also present - after an exogenous covariate, so in the present example factors 1 - 3 refer to the repeated assessments of the attribute under consideration, and the fourth factor is the covariate. Here, the covariate is measured by 4 indicators loading by .8, .6, .5, and .6. The slopes of the intercept and the slope factors in the prediction of the covariate are defined in ticEndogSlopes, so that the slope of the intercept factor is .3, and the slope of the slope factor is .2 (ticEndogSlopes = c(.3, .2)). Then, the required sample size is requested to detect that the slope of the intercept factor in the prediction of the covariate differs from zero (nullEffect = 'betaTI = 0').

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80,
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),           # i, s
  variances = c(1, 1),         # i, s
  covariances = .3,
  ticEndogSlopes = c(.3, .2),  # i, s
  nullEffect = 'betaTI = 0',
  # define measurement model
  loadings = list(
    c(.5, .6, .7),       # x1
    c(.5, .6, .7),       # x2
    c(.5, .6, .7),       # x3  
    c(.8, .6, .5, .6)    # tic
  )
)
Detect whether the means or variances of the intercept and slope factors differ across groups

To perform a power analysis to detect whether the means of the intercept, slope, or quadratic slope factors differ across groups, use nullEffect = 'iMeanA = iMeanB', nullEffect = 'sMeanA = sMeanB', and nullEffect = 's2MeanA = s2MeanB', respectively. To detect that the respective variances differ across groups, use nullEffect = 'iVarA = iVarB', nullEffect = 'sVarA = sVarB', and nullEffect = 's2VarA = s2VarB', respectively.

The general syntax is similar to that in the previous examples, with the only difference that the parameters targeted by the null hypothesis need to be provided in a list structure giving the relevant parameters separately for each group. If no list is provided for a particular parameter, the parameter takes identical values in all groups (but is freely estimated in each group by default).

For instance, the following defines a two-group model involving 3 measurement occasions (nWaves = 3), where \(X\) is measured by three indicators at each measurement, with all loadings equal to .5. The measurement model is identical for both groups. Also identical in both groups (but freely estimated) are the variances of the intercept and slope factors, which are 1 (variances = c(1, 1)), as well as the intercept-slope covariance (covariances = .3). However, different intercept and slope factor means for each group are defined by using a list structure for the means argument: In the first group, the means of the intercept and slope factors are .5 and .2, respectively, whereas in the second group, the means are 0 and .4. Loadings and indicator intercepts are always constrained to be equal across all waves and groups. Then, the required sample size is requested to detect that the means of the intercept factor differ across groups (nullEffect = 'iMeanA = iMeanB'). Furthermore, in multiple group models the N argument also needs to be provided as a list, which in case of an a priori power analysis gives the group weights. N = list(1, 1) requests equally sized groups. If using N = list(2, 1) instead, the first group would be twice as large as the second group. If a post hoc or compromise power analysis is requested, N is a list providing the number of observations for each group.

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 3,
  means = list(
    c(.5, .2),          # group 1: i, s
    c(.0, .4)           # group 2: i, s  
  ),          
  variances = c(1, 1),         # i, s
  covariances = .3,
  nullEffect = 'iMeanA = iMeanB',
  # define measurement model
  nIndicator = rep(3, 3), loadM = rep(.5, 3)
)

If there are more than two groups, the parameters targeted by the null hypothesis (here: the intercept factor means) are restricted to equality across all groups by default. If the constraints should only be placed on specific groups, nullWhichGroups is used to identify the groups to which the equality restrictions apply. For instance, nullWhichGroups = c(1, 3) defines that the means should only be restricted to equality across the first and the third group, as sketched below.
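
For illustration, the following is a minimal sketch of a three-group variant of the example above; the intercept and slope means assumed for the third group are purely illustrative. The equality constraint on the intercept factor means is placed only on the first and the third group:

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis (three equally sized groups)
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1, 1),
  # define hypothesis 
  nWaves = 3,
  means = list(
    c(.5, .2),          # group 1: i, s
    c(.0, .4),          # group 2: i, s
    c(.3, .3)           # group 3: i, s (illustrative values)
  ),
  variances = c(1, 1),         # i, s
  covariances = .3,
  nullEffect = 'iMeanA = iMeanB',
  # restrict the intercept factor means to equality in groups 1 and 3 only
  nullWhichGroups = c(1, 3),
  # define measurement model
  nIndicator = rep(3, 3), loadM = rep(.5, 3)
)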

Performing a power analysis to detect that the variances of the intercept, slope, or quadratic slope factor differ across groups proceeds analogously. For instance, the following defines a two-group model with equal means for the intercept (of .5) and slope (of .2) factors in both groups (means = c(.5, .2)), but uses a list structure for the variances argument to define that the variance of the intercept is .5 in both groups, whereas the variance of the slope is 1 in the first, but .5 in the second group. Then, the required sample size is requested to detect that the variances of the slope factor differ across groups (nullEffect = 'sVarA = sVarB').

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),           # i, s
  variances = list(
    c(.5, 1),           # group 1: i, s
    c(.5, .5)           # group 2: i, s
  ),         
  covariances = .3,
  nullEffect = 'sVarA = sVarB',
  # define measurement model
  nIndicator = rep(3, 3), loadM = rep(.5, 3)
)
Detect whether the intercept-slope covariances differ across groups

To perform a power analysis to detect group-differences in the covariances between intercept and slope factor, intercept and quadratic slope factor, and linear slope and quadratic slope factor, use nullEffect = 'isCovA = isCovB', nullEffect = 'is2CovA = is2CovB', and nullEffect = 'ss2CovA = ss2CovB', respectively.

The procedure is largely identical to that described in the previous section. For instance, the following defines a two-group model with equal means and variances for the intercept and slope factors in both groups, but uses covariances = list(.3, 0) to define that the intercept-slope covariance is .3 in the first group, but 0 in the second group. Moreover, the variances of both the intercept and the slope factor have identical values in both groups (variances = c(1, 1)) and are also restricted to be equal across groups in both the H0 and the H1 model by using groupEqual = c('iVar', 'sVar'). Then, the required sample size is requested to detect that the covariances differ across groups (nullEffect = 'isCovA = isCovB').

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),              # i, s
  variances = c(1, 1),            # i, s
  covariances = list(.3, 0),      # group1, group2
  groupEqual = c('iVar', 'sVar'),
  nullEffect = 'isCovA = isCovB',
  # define measurement model
  nIndicator = rep(3, 3), loadM = rep(.5, 3)
)
Detect whether slopes involving a time-invariant covariate differ across groups

To perform a power analysis to detect group-differences in the slope for an exogenous time-invariant covariate, use nullEffect = 'betaITA = betaITB' concerning the prediction of the intercept factor, nullEffect = 'betaSTA = betaSTB' concerning the prediction of the slope factor, and nullEffect = 'betaS2TA = betaS2TB' concerning the prediction of the quadratic slope factor.

For instance, the following defines a two-group model involving 3 measurement occasions (nWaves = 3). The measurement model is identical for both groups. Also identical across groups are the means, variances and covariances of the intercept and slope factors. Variances and covariances are also restricted to be equal across groups in both the H0 and the H1 model by using groupEqual = c('iVar', 'sVar', 'isCov'). However, different slopes for the covariate in the prediction of the intercept and the slope factors are defined by using a list structure for the ticExogSlopes argument: The slope in the prediction of the intercept factor is .5 in the first group, but .1 in the second group, and the slope in the prediction of the slope factor is .1 in the first group, but .2 in the second group. Then, the required sample size is requested to detect that the slopes for the covariate in the prediction of the intercept factors differ across groups (nullEffect = 'betaITA = betaITB').

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),     # i, s
  variances = c(1, 1),   # i, s
  covariances = .3,
  groupEqual = c('iVar', 'sVar', 'isCov'),
  ticExogSlopes = list(
    c(.5, .1),           # group 1: i, s
    c(.1, .2)            # group 2: i, s
  ),
  nullEffect = 'betaITA = betaITB',
  # define measurement model
  loadings = list(
    c(.5, .6, .7),       # x1
    c(.5, .6, .7),       # x2
    c(.5, .6, .7),       # x3  
    c(.8, .6, .5, .6)    # tic
  )
)

When the slopes of the growth factors in the prediction of an endogenous time-invariant covariate are of concern, use nullEffect = 'betaTIA = betaTIB' to detect that the slope of the intercept factor differs across groups, nullEffect = 'betaTSA = betaTSB' for the slope of the linear slope factor, and nullEffect = 'betaTS2A = betaTS2B' for the slope of the quadratic slope factor.

For instance, the following defines group-specific slopes for the intercept and the slope factor in the prediction of the covariate by using a list structure for the ticEndogSlopes argument: The slope of the intercept factor in the prediction of the covariate is .3 in the first group, but .1 in the second group, and the slope of the slope factor is .2 in the first group, but .5 in the second group. Then, the required sample size is requested to detect that the slopes of the slope factor in the prediction of the covariate differ across groups (nullEffect = 'betaTSA = betaTSB').

powerLGCM <- semPower.powerLGCM(
  # define type of power analysis
  type = 'a-priori', alpha = .05, power = .80, N = list(1, 1),
  # define hypothesis 
  nWaves = 3,
  means = c(.5, .2),     # i, s
  variances = c(1, 1),   # i, s
  covariances = .3,
  groupEqual = c('iVar', 'sVar', 'isCov'),
  ticEndogSlopes = list(
    c(.3, .2),           # group 1: i, s
    c(.1, .5)            # group 2: i, s
  ),
  nullEffect = 'betaTSA = betaTSB',
  # define measurement model
  loadings = list(
    c(.5, .6, .7),       # x1
    c(.5, .6, .7),       # x2
    c(.5, .6, .7),       # x3  
    c(.8, .6, .5, .6)    # tic
  )
)

4.15 Generic model-based power analysis

All of the functions described above implement a high-level approach towards performing model-based power analysis to facilitate the definition of the relevant models and hypotheses. At times, one might want to perform model-based power analyses that are not immediately covered by one of these functions. Therefore, semPower also provides a more generic way to perform a model-based power analysis via the semPower.powerLav function that requires the direct specification of the H0 and H1 lavaan model strings as well as either the population covariance matrix (and means) or the population model.

Consider the situation that one is interested in determining whether the observed responses on 8 items reflect two separate (but correlated) factors or can be described by assuming just a single factor. A suitable model to test this hypothesis would specify two factors and constrain their correlation to 1. When this constrained model fits the data, a single factor is sufficient. Otherwise, two factors are required.

Suppose that a correlation between two factors of \(r > .9\) is considered as implying that these are practically equivalent, so that they could be collapsed into a single factor. The misfit associated with a model assuming a correlation of 1 when, in reality, the true correlation is \(\leq\) .9 thus defines the magnitude of the effect. This scenario is not covered by the semPower.powerCFA function, so some manual work is required. Three steps are needed to perform a power analysis in this scenario:

  • Define the population model that describes the true situation in the population or provide the population covariance matrix.
  • Define an (incorrect) analysis model that reflects the null hypothesis of interest.
  • Optionally, define a (correct) analysis model that reflects the alternative hypothesis.
Define the true state of affairs in the population

One option to define the true state of affairs in the population is to use a lavaan model string specifying the values for each single (non-zero) parameter. Thus, returning to the example above, one needs to define each loading, each residual variance, and the variances of the factors as well as their covariance. Suppose the standardized loadings vary between .4 and .8, and that - in the population - the two factors correlate at .9.

# define (true) population model
modelPop <- '
# define relations between factors and items in terms of loadings
f1 =~ .7*x1 + .7*x2 + .5*x3 + .5*x4
f2 =~ .8*x5 + .6*x6 + .6*x7 + .4*x8
# define the unique variances of the items to be equal to 1-loading^2, 
# so that the loadings above are in a standardized metric
x1 ~~ .51*x1
x2 ~~ .51*x2
x3 ~~ .75*x3
x4 ~~ .75*x4
x5 ~~ .36*x5
x6 ~~ .64*x6
x7 ~~ .64*x7
x8 ~~ .84*x8
# define the variances of f1 and f2 to be 1
f1 ~~ 1*f1   
f2 ~~ 1*f2   
# define covariance (=correlation, because factor variances are 1) 
# between the factors to be .9
f1 ~~ 0.9*f2 
'

Instead of defining a population model via a lavaan model string, it is also possible to provide the population covariance matrix directly. A useful utility function supporting this process is semPower.genSigma. For instance, the following returns the very same population covariance matrix (generated$Sigma) as implied by the lavaan model string defined above:

generated <- semPower.genSigma(Phi = .90, 
                               loadings = list(
                                 c(.7, .7, .5, .5),
                                 c(.8, .6, .6, .4)))

See semPower.genSigma for more information.

Define the H0 and the H1 models

Having defined the true state of affairs in the population, one now needs to define the (incorrect) analysis model reflecting the null hypothesis of interest. This model should impose at least one restriction that is factually wrong and thereby defines the effect of interest. In the present case, the factually wrong restriction is that the correlation between the two factors equals 1:

# define (wrong) analysis model
modelH0 <- '
f1 =~ NA*x1 + x2 + x3 + x4
f2 =~ NA*x5 + x6 + x7 + x8
# define variances of f1 and f2 to be 1
f1 ~~ 1*f1   
f2 ~~ 1*f2   
# set correlation between the factors to 1
f1 ~~ 1*f2
' 

Whereas the population model (or the population covariance matrix) and the H0 model are sufficient to perform a power analysis, one can also define an explicit H1 model which is to be compared against the H0 model. Below, the H1 model is defined such that it is correct, so this only affects the df for the power analysis.

# define (correct) comparison model
modelH1 <- '
f1 =~ NA*x1 + x2 + x3 + x4
f2 =~ NA*x5 + x6 + x7 + x8
# define variances of f1 and f2 to be 1
f1 ~~ 1*f1   
f2 ~~ 1*f2   
# freely estimate the correlation between the factors
f1 ~~ f2
' 

When semPower.genSigma is used to obtain a population covariance matrix, a correct H1 model string is also returned (generated$modelTrue), so one might skip the explicit definition of the H1 model and just use the returned string. Indeed, using this approach, one might also modify the returned model string to reflect the actual hypothesis of interest. Instead of tediously writing the complete H0 and H1 model strings in full detail as above, the very same may also be achieved through

generated <- semPower.genSigma(Phi = .90, 
                               loadings = list(
                                 c(.7, .7, .5, .5),
                                 c(.8, .6, .6, .4)))
modelH1 <- generated$modelTrue
# define modelH0 as function of modelH1 plus the 
# additional constraint of interest
modelH0 <- paste(modelH1, 'f1 ~~ 1*f2', sep = '\n')
Perform a power analysis

Finally, all this information is plugged into semPower.powerLav(), in this example requesting an a priori power analysis:

# using the population model 
ap <- semPower.powerLav(type = 'a-priori', alpha = .05, power = .80,
                        modelPop = modelPop, modelH0 = modelH0, modelH1 = modelH1)
# using the population covariance matrix 
ap <- semPower.powerLav(type = 'a-priori', alpha = .05, power = .80,
                        Sigma = generated$Sigma, modelH0 = modelH0, modelH1 = modelH1)
summary(ap)

The output shows that 323 observations are required to detect a correlation between the factors of \(r \leq .9\) with a power of 80% on alpha = .05.4 Note that there is only a single df, because the H1 model was also explicitly defined and just differs from the H0 model in a single parameter (namely the freely estimated correlation between the factors). If the H1 model is omitted from the function call, power will be determined relative to the saturated model, which in this example leads to 20 df and thus to a very different sample size required to detect the specified effect (namely 860 observations).
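
For illustration, a minimal sketch of the latter case: omitting the modelH1 argument from the call compares the H0 model against the saturated model, so the power analysis is based on 20 df.

# compare the H0 model against the saturated model (20 df)
apSat <- semPower.powerLav(type = 'a-priori', alpha = .05, power = .80,
                           Sigma = generated$Sigma, modelH0 = modelH0)
summary(apSat)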

Beyond the results of the power analysis, the result returned by semPower.powerLav also contains a range of additional information, in particular the generated population covariance matrix \(\Sigma\) (Sigma) and the model-implied covariance matrix \(\hat{\Sigma}\) (SigmaHat). If using a lavaan model string to generate the population covariance matrix, it is a good idea to check whether the population model actually defined all parameters in line with the expectations. To verify, one can just fit the correct H1 model to the population covariance matrix, which should yield a perfect fit and give the same parameter estimates used in the definition of the model string.

library(lavaan)
summary(sem(modelH1, sample.cov = ap$Sigma, 
            sample.nobs = 1000, sample.cov.rescale = FALSE), 
        stand = TRUE, fit = TRUE)

Note that the value of sample.nobs is arbitrary and that sample.cov.rescale = FALSE must be added to prevent lavaan from modifying the provided covariance matrix.

5 Power Plots

Power plots show the implied power as a function of some other variable. semPower provides two different types of power plots: One can either plot the achieved power to detect a certain effect over a range of different sample sizes (semPower.powerPlot and semPower.powerPlot.byN), or plot the achieved power with a given \(N\) over a range of different effect size magnitudes (semPower.powerPlot.byEffect).

5.1 Power by N for a given effect

The result of any model-based power analysis can be plugged into semPower.powerPlot to obtain a plot showing the achieved power to detect the effect of interest over a range of sample sizes. For instance, the following first performs a power analysis concerning the correlation between two CFA factors (see the corresponding chapter for details), and then plugs the result into the semPower.powerPlot function.

# perform a power analysis
powerCFA <- semPower.powerCFA(
  type = 'post-hoc', alpha = .05, N = 300, 
  Phi = .15, nIndicator = c(5, 4), loadM = c(.5, .6))
# show plot
semPower.powerPlot(powerCFA)

Figure 5.1: Power as a function of N to detect the effect as defined in powerCFA

This shows that the power to detect the specified effect is high when N > 1,000, whereas power is small when N < 250.

A more generic way is provided by the function semPower.powerPlot.byN, which creates a plot showing the achieved power to detect a given effect on a given alpha error over a range of sample sizes. However, because it is difficult to specify an informative range of sample sizes for a given effect, semPower.powerPlot.byN instead asks for the desired power range. For example, suppose one is interested in how the power to detect an effect corresponding to RMSEA = .05 changes as a function of the number of observations N, with power ranging from .05 to .99 (note that the power cannot be smaller than alpha). This is achieved by setting the arguments power.min = .05 and power.max = .99. In addition, as in any a priori power analysis, the type and magnitude of effect, the df, and the alpha error need to be defined: effect = .05, effect.measure = 'RMSEA', alpha = .05, df = 100.

semPower.powerPlot.byN(effect = .05, effect.measure = 'RMSEA', 
                       alpha = .05, df = 100, power.min = .05, power.max = .99)

5.2 Power by the magnitude of effect for a given N

The function semPower.powerPlot.byEffect creates a plot showing the achieved power at a given sample size over a range of effect sizes. For example, suppose one is interested in how the power at N = 500 changes as a function of the effect size magnitude, corresponding to an RMSEA ranging from .001 to .10. This is achieved by setting the arguments effect.measure = 'RMSEA', effect.min = .001, and effect.max = .10. In addition, as in any post hoc power analysis, the sample size, the df, and the alpha error need to be defined: N = 500, df = 100, alpha = .05.

semPower.powerPlot.byEffect(effect.measure = 'RMSEA', alpha = .05, N = 500, 
                            df = 100, effect.min = .001, effect.max = .10)

Figure 5.2: Power as a function of the RMSEA with N = 500.

This shows that with N = 500, a model with an associated RMSEA > .04 is detected with a very high power, whereas power for RMSEA < .03 is rather modest.

6 Further topics

6.1 Obtain the model degrees of freedom

Knowledge of the degrees of freedom (df) is required to perform a power analysis. When power refers to the comparison of two explicitly specified, nested models, the resulting df are just the difference between the df of the two models (or equivalently, the number of free parameters removed by the more restrictive model). When power is requested for the comparison of a hypothesized model to the saturated model, the model df are given by

\[df = p\cdot(p+1)/2 - q\]

where \(p\) is the number of observed variables and \(q\) is the number of free parameters of the hypothesized model. The next chapter discusses the difference between a saturated and less restricted H1 comparison model in greater detail.

To obtain the df in a typical SEM, one needs to count the (a) loadings, (b) indicator residual variances, and (c) covariance/regression parameters between factors and between indicator residuals.5 For instance, consider a correlated two-factor CFA model with each factor measured by 4 indicators and no secondary loadings or residual correlations. Thus, there are (a) \(2\cdot4\) loadings, (b) 8 indicator residual variances, and (c) 1 covariance between the factors, which results in 17 free parameters. The df are thus \(8\cdot9/2 - 17 = 19\).
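
For a quick check of such a hand calculation, the general formula can be evaluated directly; the snippet below merely restates the computation for this example:

p <- 8                   # number of observed variables
q <- 2*4 + 8 + 1         # free parameters: loadings, residual variances, factor covariance
p * (p + 1) / 2 - q      # df = 36 - 17 = 19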

If you are unsure or have a more complicated model (or both), semPower also includes a utility function called semPower.getDf that determines the df for a model provided in the lavaan syntax (note that this function requires the lavaan package). For the model sketched above, running

# define model using standard lavaan syntax
lavmodel <- '
f1 =~ x1 + x2 + x3 + x4
f2 =~ x5 + x6 + x7 + x8
'
# obtain df
semPower.getDf(lavmodel)

gives 19 as result, which matches what was calculated by hand. Similarly, when adding the null hypothesis of a zero correlation between the factors (f1 ~~ 0*f2), one additional df is gained:

# define model using standard lavaan syntax
lavmodel <- '
f1 =~ x1 + x2 + x3 + x4
f2 =~ x5 + x6 + x7 + x8
f1 ~~ 0*f2
'
# obtain df
semPower.getDf(lavmodel)

semPower.getDf can also be used to obtain the df in multigroup settings by setting the arguments nGroups (and group.equal for models involving equality constraints).

# configural invariance
semPower.getDf(lavmodel, nGroups = 3)
# metric invariance
semPower.getDf(lavmodel, nGroups = 3, group.equal = c('loadings'))
# scalar invariance
semPower.getDf(lavmodel, nGroups = 3, group.equal = c('loadings', 'intercepts'))

In determining the df, semPower.getDf also accounts for any additional restrictions on the (defined) parameters and should generally match the df reported by lavaan, except in the case that a \(\chi^2\) statistic is employed that includes a correction of the df (such as the third-moment adjustment by Lin and Bentler, 2012).

6.2 Definition of the comparison (H1) model

Power analyses always refer to a certain hypothesis (H0) that is to be rejected, so that an (implicit) alternative hypothesis (H1) shall be accepted. All convenience functions performing a model-based power analysis accept the comparison argument, which sets the relevant comparison (H1) model: either the saturated model (comparison = 'saturated') or a model (comparison = 'restricted') that omits the H0 constraints reflecting the hypothesis of interest, but is otherwise identical to the H0 model.

For instance, consider a two-factor CFA model, where each factor is measured by 4 indicators, and suppose power is to be determined to detect a factor correlation of \(r \geq .20\). A standard CFA model freely estimating the factor correlation involves 19 df. The relevant H0 model, however, would constrain the factor correlation to zero and thus involves one additional df, i.e., 20 df in total.

To determine the power to reject the H0 model, there are now two relevant comparison models defining the H1. One option would be to compare the H0 model against the saturated model (comparison = 'saturated'), so that the power analysis is based on 20 df. Practically speaking, one would just ask whether the model \(\chi^2\) test associated with the H0 model turns out significant. Alternatively, one might want to compare the H0 model against a model that is identical to the H0, except that it freely estimates the factor correlation (comparison = 'restricted'). The difference in the df between these two models (and the difference in model fit) now enter the power analysis, which then would be based on \(20 - 19 = 1\) df.

Either approach is valid. Usually, however, it is more sensible to compare the H0 model against a less restricted H1 model that only differs in the parameter(s) relevant for the tested hypothesis, which has the added benefit of a generally higher power. Thus, the functions performing model-based power analyses by default use comparison = 'restricted'. Of note, the measures of effect given in the output of a power analysis are based on the df provided to the respective power analysis. Thus, in case of a restricted comparison model, these will generally differ from the ones obtained by just fitting the H0 model to some data, and will, in general, also not reflect the simple differences between the indices of the H0 and the H1 model.
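
As a sketch of how the comparison argument is set in practice, the following contrasts both choices for the two-factor example above, assuming (for illustration only) that all loadings equal .5 and that the factor correlation is .20 in the population:

# H1 model: identical to the H0 model except for the freely estimated correlation (1 df)
powerRestricted <- semPower.powerCFA(
  type = 'a-priori', alpha = .05, power = .80,
  comparison = 'restricted',
  Phi = .2, nullEffect = 'cor = 0',
  nIndicator = c(4, 4), loadM = .5)

# H1 model: saturated model (20 df)
powerSaturated <- semPower.powerCFA(
  type = 'a-priori', alpha = .05, power = .80,
  comparison = 'saturated',
  Phi = .2, nullEffect = 'cor = 0',
  nIndicator = c(4, 4), loadM = .5)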

6.3 Multiple Group Models

A common application of SEM is to fit a model simultaneously to multiple distinct groups (such as gender or age groups). If a power analysis concerning the hypothesis that the model as a whole describes the data is desired, the grouping structure is only relevant with respect to the degrees of freedom of the test (see the chapter on how to obtain the df), so a regular model-free power analysis can be performed. This also holds for the common case of invariance testing when the source of non-invariance is assumed to spread across many parameters and just the overall difference in terms of an effect size is of interest (see power analysis for overall differences).

Other hypotheses arising in multiple group settings are supported in most functions performing a model-based power analysis:

  • See semPower.powerMI for various hypotheses concerning measurement invariance across groups (specifying non-invariant parameters).
  • See semPower.powerCFA to detect that the correlation between two factors differs across groups.
  • See semPower.powerBifactor to detect that a correlation involving a bifactor differs across groups.
  • See semPower.powerRegression and semPower.powerPath to detect that a regression slope differs across groups.
  • See semPower.powerMediation to detect that a mediation effect differs across groups.
  • See semPower.powerAutoreg to detect that an autoregressive effect in an autoregressive model differs across groups.
  • See semPower.powerARMA to detect that autoregressive, moving-average parameters, means, or variances in an ARMA model differ across groups.
  • See semPower.powerCLPM to detect that autoregressive effects or cross-lagged effects in a CLPM differ across groups.
  • See semPower.powerRICLPM to detect that autoregressive effects, cross-lagged effects, or the correlation between random intercept factors in a random-intercept CLPM differ across groups.
  • See semPower.powerLGCM to detect that means or variances of the intercept or slope factors, or the slopes involving a time-invariant covariate differ across groups.

6.4 Simulated power

By default, power analyses are performed analytically by relying on asymptotic theory. Alternatively, all model-based a priori and post hoc power analyses6 can also be performed using a simulation approach.

When all relevant assumptions are satisfied, a simulated power analysis with a sufficiently large number of replications will yield the same results as an analytical power analysis. Moreover (and perhaps surprisingly), there will also be - generally speaking - little difference whenever the H0 model is compared against a less restricted H1 model (see the chapter on choosing the comparison model), regardless of whether additional assumptions such as multivariate normality hold. Major differences are thus only to be expected when the H0 model is compared against the saturated model and the model is large or the data are non-normal or incomplete.

To request a simulated power analysis, add simulatedPower = TRUE as an additional argument to any of the convenience functions performing a model-based power analysis, or to the semPower.aPriori or semPower.postHoc functions when a covariance matrix and a model string are provided. The details of the simulation can be specified in simOptions, which is a list that may have the following components:

  • nReplications: The targeted number of valid simulation runs (defaults to 500). A larger number of replications increases accuracy (at the expense of processing time). Note that an a priori power analysis will only be accurate when the number of replications is large (say, 500 or more).
  • minConvergenceRate: The minimum convergence rate required (defaults to .75). The maximum number of actual simulation runs is increased by a factor of 1/minConvergenceRate, so this argument only ensures that the requested number of (successful) replications as defined in nReplications can actually be reached, while still providing a stopping rule for the simulation. semPower still provides the results when the actual convergence rate is smaller than minConvergenceRate, but will issue a warning.
  • nCores: The number of CPU cores to use for parallel processing. This vastly speeds up computations. Defaults to 1 for no parallel processing. Parallel processing requires that the doFuture package is installed.
  • additional arguments related to non-normal data and missingness

For instance, the following requests a simulated (simulatedPower = TRUE) post hoc power analysis with default settings (i.e., 500 replications, minimum convergence rate of 75%, complete and normally distributed data, regular maximum-likelihood LRT, no parallel processing) to detect that a factor correlation of at least .25 differs from zero on alpha = .05 with a sample of 300 observations. See the chapter on CFA models for a description of the remaining arguments. Simulated power analyses for all other convenience functions providing a model-based power analysis are performed analogously. It is strongly recommended to seed the random number generator to obtain reproducible results (set.seed(300121)).

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE)

To change the number of replications and/or the minimum convergence rate, further supply the simOptions argument as a list with corresponding components. For instance, the following uses 1,000 replications (nReplications = 1000), a minimum convergence rate of 90% (minConvergenceRate = .90), and requests parallel computation using 8 cores (nCores = 8).

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                nReplications = 1000,
                                minConvergenceRate = .90,
                                nCores = 8
                              ))

As usual, the output of the power analysis is printed by calling the summary method on the results variable:

summary(powerCFA)

##  semPower: Post hoc power analysis
##  
##  Simulated power based on 1000 successful replications.
##                                               
##                           Analytical Simulated
##                                               
##  F0                       0.027416   0.027677 
##  RMSEA                    0.165577   0.166364 
##  Mc                       0.986386   0.986257 
##  GFI                      0.993945   0.993887 
##  AGFI                     0.727503   0.724922 
##                                               
##  df                       1          1        
##  Num Observations         300        300      
##  NCP                      8.197285   8.275404 
##                                               
##  Critical Chi-Square      3.841459   3.841459 
##  Alpha                    0.050000   0.050000 
##  Beta                     0.183229   0.190000 
##  Power (1 - Beta)         0.816771   0.810000 
##  Implied Alpha/Beta Ratio 0.272883   0.263158 
##  
##  
##  Simulation Results:
##                                                      
##  Convergence Rate (%) of the H0 model        100.00  
##                                                      
##  Chi-Square Bias (%)                                 
##  H0 Model                                    0.23    
##  H1 Model                                    0.02    
##  H0-H1 Difference                            0.81    
##                                                      
##  Chi-Square KS-Distance                              
##  H0 Model                                    0.008227
##  H1 Model                                    0.007083
##  H0-H1 Difference                            0.006270
##                                                      
##  Rejection Rate (%)                                  
##  H0 Model                                    28.00   
##  H1 Model                                    5.20    
##                                                      
##  Average Parameter Bias (%) in the H1 Model:         
##  Loadings                                    0.27    
##  Variances/Covariances                       0.06 

The results of the power analysis provide the same information as a corresponding model-free power analysis (here: a post hoc power analysis), but now also include a column giving the simulation results. In this example, the simulated power estimate (of 81.0%) closely matches the analytical result (81.7%). Note that all measures of effect are computed based on the unbiased estimate of the population minimum of the fit function \(F_0\) (\(\hat{F_0} = \hat{F} - df / N\)).
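
As a quick plausibility check, and assuming the usual definitions \(RMSEA = \sqrt{F_0 / df}\), \(Mc = \exp(-F_0 / 2)\), and \(NCP = (N - 1) \cdot F_0\), the analytical effect measures reported above can be reproduced from \(F_0\):

F0 <- 0.027416; df <- 1; N <- 300
F0 * (N - 1)     # NCP:   approx. 8.197
sqrt(F0 / df)    # RMSEA: approx. 0.1656
exp(-F0 / 2)     # Mc:    approx. 0.9864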

In addition, semPower also outputs a section entitled Simulation Results, where further results of the simulation are reported that help to evaluate whether the empirical \(\chi^2\) distributions match the asymptotically expected distributions and whether the realized sample size is sufficient to support parameter estimation of the model. Recall that power is only one of several aspects to consider in sample size planning. In particular, the achieved convergence rate of the H0 model is always provided (and should be close to 100%, as in the present example). When there is also an explicit comparison (H1) model (which is the case in the example; comparison = 'restricted'), the average parameter biases are also given. This information is only meaningful when the model under scrutiny fits the data, so the parameter biases refer to the H1 model. The average parameter bias shows whether the model parameters are reliably recovered and should thus be close to zero; in the present example, the parameter estimates are virtually unbiased.

Furthermore, the empirically observed \(\chi^2\) distributions are compared against the expected \(\chi^2\) distributions based on asymptotic theory (i.e., the distributions that analytical power analyses are based on) using three measures:

  • Chi-Square Bias (%): The observed percentage bias (\((\chi^2 - df) / df\)) concerning the mean of the observed \(\chi^2\) test-statistics. Note that whereas the empirical mean usually only exhibits a rather minor departure from the expected value, the tails might differ considerably.
  • Chi-Square KS-Distance: The average KS distance is a measure of the discrepancy between two cumulative distribution functions akin to the Kolmogorov–Smirnov test statistic. Unlike the KS-test, the average KS distance uses the mean absolute distance (\(M(|F_n(x) - F(x)|)\)) instead of the maximum absolute distance (\(\sup |F_n(x) - F(x)|\)), so it ranges from 0 (for no difference) to .50 (for completely non-overlapping distributions). As a rough guideline, an average KS-distance of \(\geq .1\) indicates a substantial difference between the distributions.
  • Rejection Rate (%): The empirical rejection rate on the specified alpha error level is a measure of how strongly the tail of the empirical distribution differs from the asymptotic reference distribution. When computed for a correctly specified model (the H1 model), the empirical rejection rate should match the alpha error. The rejection rate associated with the H0 model is the empirical power estimate when it is compared against the saturated model (rather than the H1 model). If there is no H1 model, the empirical power estimate is thus not reported separately.

These measures are provided for the H0 model and - if available - the H1 model as well as the difference between the H0 and the H1 model. Jointly, this gives an idea of whether and why a simulated power estimate might diverge from the analytical power estimate. Generally speaking, one would expect the difference between analytical and simulated power to increase with the extent to which the empirical distributions depart from their respective theoretical distribution. However, when the H0 model is compared against the H1 model (comparison = 'restricted'), it is also quite possible that the empirical distributions under both the H0 and the H1 model differ considerably from the respective reference distribution, whereas the difference distribution still follows the expected asymptotic difference distribution. Consider the following example (which is based on a three-factor model with a total of 45 indicators and severe non-normality):

##  Simulation Results:
##                                                      
##  Convergence Rate (%) of the H0 model        100.00  
##                                                      
##  Chi-Square Bias (%)                                 
##  H0 Model                                    3.88    
##  H1 Model                                    3.90    
##  H0-H1 Difference                            0.48    
##                                                      
##  Chi-Square KS-Distance                              
##  H0 Model                                    0.210252
##  H1 Model                                    0.212962
##  H0-H1 Difference                            0.008880
##                                                      
##  Rejection Rate (%)                                  
##  H0 Model                                    25.80   
##  H1 Model                                    22.20  

Here, the KS-Distances indicate strong departures of the empirical distributions from the respective asymptotic distribution, which is also mirrored by a rejection rate of the (correctly specified) H1 model that clearly exceeds the nominal alpha-error of 5%. However, the difference distribution closely follows the theoretically expected distribution, so in this example there is also virtually no difference between the analytical and the simulated power estimate (not shown in the output given above). Stated differently, although the observed \(\chi^2\) test-statistics are inflated under both the H0 and the H1 model, the distribution of their differences still matches the theoretical difference distribution.

Finally, the results variable also contains a slot simRes comprising the generated data sets (simRes$simData; including those that led to non-convergence), the obtained model fit statistics (simRes$fitH0, simRes$fitH1, simRes$fitDiff), and the parameter estimates associated with the H1 model (simRes$paramEst). For instance, the following compares the empirical distribution of the obtained \(\chi^2\) statistics associated with the H1 model with the theoretically expected distribution.

# expected chi-square density (dashed line)
plot(dchisq(seq(max(powerCFA$simRes$fitH1[ , 'chisq'])), 
            df = powerCFA$simRes$fitH1[1, 'df']), 
     type = 'l', lty = 2, xlab = 'chi square', ylab = 'density')
# empirical density of the simulated chi-square statistics (solid line)
lines(density(powerCFA$simRes$fitH1[ , 'chisq']))

Figure 6.1: Empirical (solid) versus expected (dashed) chi-square distributions

Non-normal data

By default, the data generated in a simulated power analysis are sampled from a multivariate normal distribution (corresponding to type = 'normal' in simOptions, which may be omitted). Multivariate normality is, of course, a strong assumption which is often violated in actual empirical data. Whereas the point estimates for the model parameters are still correct when the data are not normally distributed, the standard errors will be biased and - most importantly in the present context - the model test statistic will typically be inflated. To evaluate the extent to which power depends on the normality assumption, simulated power analyses can also be performed based on non-normal data (and optionally by also considering an alternative estimator or test statistic).

Generating non-normal data with a given covariance structure is a tricky issue, so there are quite a number of different approaches that may be employed. A popular method is the multivariate power-constants approach by Vale and Maurelli (1983), which, however, implies a population distribution that does not depart from a multivariate normal distribution. semPower therefore also provides interfaces to three further approaches, each of which is associated with different classes of multivariate distributions and expects different input parameters:

  • type = 'IG' (Foldnes & Olsson, 2016) requires provision of skewness and excess kurtosis of the marginals (i.e., for each variable) akin to the Vale-Maurelli procedure. This requires the covsim package.
  • type = 'mnonr' (Qu et al., 2020) requires provision of Mardia’s multivariate skewness and kurtosis. Note that multivariate normality implies a population kurtosis of \(p \cdot (p + 2)\). This requires the mnonr package.
  • type = 'RK' (Ruscio & Kaczetow, 2008) requires provision of the marginal distributions (i.e., the population distribution of each variable).
  • type = 'VM' (Vale & Maurelli, 1983) requires provision of skewness and excess kurtosis of the marginals (i.e., for each variable). Note that each kurtosis value must be \(\geq \text{skew}^2 - 2\).

For instance, returning to the example above, the following uses the IG approach (type = 'IG') to generate non-normal data, where the marginal distributions of the indicators of the first factor (variables 1 - 5) have a skewness (skewness) of zero, but a rather strong kurtosis (kurtosis) between 10 and 20, whereas the indicators of the second factor (variables 6 - 9) show a skewness of 2 and a more moderate kurtosis between 4 and 6.

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                type = 'IG',
                                skewness = c(0, 0, 0, 0, 0, 2, 2, 2, 2),
                                kurtosis = c(20, 15, 10, 15, 18, 5, 4, 5, 6),
                                nCores = 8
                              ))

The following also specifies the marginal skewness and kurtosis, but relies on the VM approach (type = 'VM'):

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                type = 'VM',
                                skewness = c(1, 1, 1, 1, 1, 3, 3, 3, 3),
                                kurtosis = c(4, 4, 4, 4, 4, 15, 15, 15, 15),
                                nCores = 8
                              ))

Alternatively, one may specify Mardia’s multivariate skewness and kurtosis using the mnonr approach (type = 'mnonr'). For instance, in the following, data are sampled from a population with a multivariate skewness of 10 and a multivariate kurtosis of 200:

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                type = 'mnonr',
                                skewness = 10,
                                kurtosis = 200,
                                nCores = 8
                              ))

Instead of defining the marginal skewness and kurtosis as in the IG or the VM approach, one may specify the marginal distributions themselves using the RK approach (type = 'RK'). This requires that the distributions argument is set as a list of lists, where each component must specify the population distribution for a specific variable (e.g., rnorm) and the additional arguments (e.g., list(mean = 0, sd = 10)), also provided as a list (which may be empty). For instance, the following defines different marginal distributions for each of the 9 indicator variables:

set.seed(300121)
distributions <- list(
  list('rnorm', list(mean = 0, sd = 10)),          # normal
  list('rnorm', list()),                           # standard normal
  list('rt', list(df = 10)),                       # t
  list('runif', list()),                           # uniform (0 - 1)
  list('rbeta', list(shape1 = 1, shape2 = 2)),     # beta
  list('rexp', list(rate = 1)),                    # exponential   
  list('rpois', list(lambda = 4)),                 # poisson
  list('rchisq', list(df = 4, ncp = 3)),           # chi-square
  list('rbinom', list(size = 1, prob = .5))        # binomial   
)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                type = 'RK',
                                distributions = distributions,
                                nCores = 8
                              ))

One may also just define a single distribution to apply to all variables. For instance, distributions = list('rchisq', list(df = 2)) defines the population marginal distributions for all variables to be \(\chi^2\) with 2 df.
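
For illustration, the following sketch reuses the call from above but assigns the same \(\chi^2\) distribution with 2 df to all nine indicators:

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                type = 'RK',
                                # a single distribution applied to all variables
                                distributions = list('rchisq', list(df = 2)),
                                nCores = 8
                              ))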

Missing data

By default, complete data are generated and used in simulated power analyses. Given that the presence of missing data also affects power, semPower also allows for generating data with missing values, which are then analyzed using full-information maximum likelihood.

To request missing data, the missing data mechanism, the proportion of missing data, and the variables on which missing data occur need to be defined by providing the following to simOptions:

  • missingVars: A vector specifying the variables containing missing data, e.g., missingVars = c(2, 4, 6) to have the second, fourth, and sixth variable contain missings. The order of the variables corresponds to their order in the factor model as defined (see definition of the factor model).
  • missingVarProp: the proportion of variables containing missing data; can be used instead of missingVars.
  • missingProp: The proportion of missingness for variables containing missing data, either a single value or a vector giving the probabilities for each variable.
  • missingMechanism: The missing data mechanism, one of 'MCAR' (the default), 'MAR', or 'NMAR'.

For instance, returning to the example above, the following defines a MAR missing mechanism (missingMechanism = 'MAR'), a proportion of missingness of .25 on the variables containing missings (missingProp = .25), and that variables 1, 2, 8, and 9 contain missing values (missingVars = c(1, 2, 8, 9)). In the present example, this means that the first and second indicator of the first factor as well as the final two indicators of the second factor contain missings.

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                missingVars = c(1, 2, 8, 9),
                                missingProp = .25,
                                missingMechanism = 'MAR',
                                nCores = 8
                              ))

Alternatively, one may also specify the proportion of missing values for each variable separately. For instance, the following defines the proportion of missingness to be .1, .2, .3, and .4 (missingProp = c(.1, .2, .3, .4)) for variables 1, 2, 8, and 9, respectively.

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                missingVars = c(1, 2, 8, 9),
                                missingProp = c(.1, .2, .3, .4),
                                missingMechanism = 'MAR',
                                nCores = 8
                              ))

Instead of defining specific variables that contain missings, it is also possible to just define the proportion of variables that contain missings (so that the specific variables containing missings are determined randomly). For instance, the following defines that 50% of the variables contain missings:

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                missingVarProp = .50,
                                missingProp = .25,
                                missingMechanism = 'MAR',
                                nCores = 8
                              ))

Changing the estimator and test-statistic

By default, simulated power analyses are performed using maximum likelihood (ML) estimation with the standard LR test-statistic. When non-normal data are incorporated in the simulation process, the standard test-statistic will be positively biased and may thus yield inflated power estimates, so it is advisable to employ a corrected test-statistic. The test-statistic and the estimator can be changed by providing a list to the lavOptions argument, which is passed to lavaan and thus conforms to the standard lavaan conventions. For example:

  • list(estimator = 'mlm'): requests ML estimation with the Satorra-Bentler scaled test-statistic
  • list(estimator = 'mlr'): requests ML estimation with a corrected test-statistic that is asymptotically equivalent to the Yuan-Bentler statistic
  • list(estimator = 'wls'): requests WLS estimation
  • list(estimator = 'wlsmv'): requests diagonal WLS estimation with a robust test-statistic

See the lavaan manual for all available estimators and test-statistics.

For instance, the following generates non-normal data with missings (see the preceding chapters) and employs ML estimation with a test-statistic asymptotically equivalent to the Yuan-Bentler statistic (lavOptions = list(estimator = 'mlr')).

set.seed(300121)
powerCFA <- semPower.powerCFA(
                              # define type of power analysis
                              type = 'post-hoc', alpha = .05, N = 300, comparison = 'restricted',
                              # define hypothesis
                              Phi = .25,
                              nullEffect = 'cor = 0',
                              # define measurement model
                              nIndicator = c(5, 4), loadM = c(.5, .6),
                              # request simulated power
                              simulatedPower = TRUE,
                              # set simulation options
                              simOptions = list(
                                type = 'mnonr',
                                skewness = 10,
                                kurtosis = 200,
                                missingVars = c(1, 2, 8, 9),
                                missingProp = c(.1, .2, .3, .4),
                                missingMechanism = 'MAR'
                              ),
                              lavOptions = list(estimator = 'mlr'))

6.5 Power analyses based on covariance matrices

All model-free power analyses provided in semPower also accept covariance matrices as input and then determine the associated effect from these matrices. This offers a very high degree of flexibility, but obviously requires the specification of proper covariance matrices. In this section, various options to obtain these matrices are illustrated in detail, ordered from most to least cumbersome.

The full way

One way is to employ some other SEM software to generate the population and the model-implied covariance matrix (note that a proper definition of the latter usually requires fitting the model to the population data). For illustration, let’s consider how this could be done using lavaan.

library(lavaan)

# define (true) population model
modelPop <- '
f1 =~ .8*x1 + .7*x2 + .6*x3
f2 =~ .7*x4 + .6*x5 + .5*x6
f1 ~~ 1*f1
f2 ~~ 1*f2
f1 ~~ 0.5*f2
x1 ~~ .36*x1
x2 ~~ .51*x2
x3 ~~ .64*x3
x4 ~~ .51*x4
x5 ~~ .64*x5
x6 ~~ .75*x6
'
# define (wrong) H0 model
modelH0 <- '
f1 =~ x1 + x2 + x3
f2 =~ x4 + x5 + x6
f1 ~~ 0*f2
'

After loading the lavaan package, two lavaan model strings are defined. The first model string (modelPop) is used to define the population covariance matrix, so all model parameters are defined (at least implicitly; consult the lavaan documentation for defaults). Here, a model is defined comprising two latent factors (each with a variance of 1) that are correlated at .5. Each latent factor is measured through three observed indicators (x1 to x6) with (standardized) loadings ranging from .5 to .8. The second model string (modelH0) defines the hypothesized (wrong) H0 model. To obtain the model-implied covariance matrix, this model needs to be fitted to the population data, so it is basically an ordinary CFA model string with many free parameters. Note that this model constrains the correlation between the two latent factors to zero. When fitted to the population data defined previously, the model is thus wrong, since the correlation between these factors in the population was defined to be .50.

Having defined the model strings, one proceeds by actually obtaining the relevant covariance matrices.

# get population covariance matrix; equivalent to a perfectly fitting model
covPop <- fitted(sem(modelPop))$cov

# get covariance matrix as implied by H0 model; note the nobs are arbitrary
fitH0 <- sem(modelH0, sample.cov = covPop, 
              sample.nobs = 250, sample.cov.rescale = FALSE, 
              likelihood = 'wishart')
df <- fitH0@test[[1]]$df
covH0 <- fitted(fitH0)$cov

covPop is now the population covariance matrix (\(\Sigma\)). To obtain the model-implied covariance matrix (\(\hat{\Sigma}\)), one needs to fit the hypothesized (wrong) H0 model (modelH0) to the population data (covPop). The model-implied covariance matrix is then obtained by calling covH0 <- fitted(fitH0)$cov.

The obtained covariance matrices can now be used as input in a power analysis. The following performs a post hoc power analysis assuming N = 250 and alpha = .05 by calling semPower.postHoc with the arguments SigmaHat = covH0 and Sigma = covPop.

ph <- semPower.postHoc(SigmaHat = covH0, Sigma = covPop, alpha = .05, N = 250, df = df)
summary(ph)

The output (which is omitted here) indicates that fitting the hypothesized model to the population data is associated with a discrepancy of \(F_0\) = 0.133 (or RMSEA = .121 or SRMR = .140 or …) and that the power to reject the H0 model is very high, 1 - beta = .993.
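
As a quick cross-check, the reported RMSEA can be recovered from the discrepancy via the standard relation \(RMSEA = \sqrt{F_0 / df}\); a minimal sketch using the df object obtained above:

# recover the population RMSEA from F0 and the df of the H0 model
sqrt(ph$fmin / df)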

It is instructive to compare the expected \(\chi^2\) (computed via the obtained \(F_0\)) with the \(\chi^2\) model test statistic reported by lavaan when fitting the H0 model using the same number of observations as requested in the power analysis:

fitmeasures(fitH0, 'chisq')
ph$fmin * (ph$N-1)

fitmeasures(fitH0, 'chisq') prints the model \(\chi^2\) test statistic as obtained by lavaan when fitting modelH0 to the population data (covPop, see above). The line ph$fmin * (ph$N-1) computes the expected \(\chi^2\), i.e., \(F_0\) multiplied by \((N - 1)\). As expected, both values match (33.17).

Using semPower.powerLav

The process illustrated above can be simplified by calling semPower.powerLav. In essence, all that is needed are the population model string (modelPop) and the H0 model string (modelH0). This leads to:

ph <- semPower.powerLav(type = 'post-hoc',
                        modelPop = modelPop, modelH0 = modelH0,
                        alpha = .05, N = 250)
summary(ph)

which - of course - yields the same results.

Using semPower.powerLav in conjunction with semPower.genSigma

Instead of defining the population covariance matrix through a lavaan model string, it is often easier to obtain it through the semPower.genSigma function, because this takes care of many intricacies such as the correct definition of the residual variances. In the scenario above, the following leads to the same results:

generated <- semPower.genSigma(Phi = .50, 
                               loadings = list(
                                 c(.8, .7, .6),
                                 c(.7, .6, .5)))
ph <- semPower.powerLav(type = 'post-hoc', 
                        Sigma = generated$Sigma, modelH0 = modelH0, 
                        alpha = .05, N = 250)
summary(ph)

Using semPower.powerCFA

As the scenario above involves a model and a hypothesis that can be handled directly by the semPower.powerCFA convenience function, none of the above is actually required; the same result is obtained with a single call to semPower.powerCFA:

ph <- semPower.powerCFA(type = 'post-hoc', 
                        comparison = 'saturated',
                        Phi = .5,
                        loadings = list(
                          c(.8, .7, .6),
                          c(.7, .6, .5)),
                        alpha = .05, N = 250)
summary(ph)

In the background, semPower.powerCFA performs all the necessary steps outlined above, so the results will - of course - be the same. Note that the argument comparison = 'saturated' was set to obtain the power to reject the H0 model when compared against the saturated model, as was also done in all examples above.
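
If the power for the test against the less restrictive model that freely estimates the factor correlation is of interest instead (as in the earlier simulated power examples), comparison = 'restricted' can be used. A minimal sketch, otherwise identical to the call above:

ph <- semPower.powerCFA(type = 'post-hoc', 
                        comparison = 'restricted',
                        Phi = .5,
                        loadings = list(
                          c(.8, .7, .6),
                          c(.7, .6, .5)),
                        alpha = .05, N = 250)
summary(ph)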

6.6 Generate a covariance matrix and lavaan model strings

Internally, all convenience functions performing a model-based power analysis generate a population variance-covariance matrix and, along with proper model strings, plug it into the semPower.powerLav function, which in turn transforms a model-based power analysis into a model-free power analysis providing population and model-implied covariance matrices as input.

For situations not covered by the semPower convenience functions, greater flexibility is offered by calling either semPower.powerLav or a model-free power analysis, providing population and model-implied means and covariance matrices directly. A useful utility function described in this section is semPower.genSigma, which offers several ways to generate a population variance-covariance matrix (and means) based on the model matrices or on model features.

semPower.genSigma expects model matrices as input parameters, so different matrices need to be provided depending on whether a CFA or a SEM model is specified.

CFA model

In the CFA model, the model-implied variance-covariance matrix is given by \[\Sigma = \Lambda \Phi \Lambda' + \Theta\] where \(\Lambda\) is the \(p \times m\) loading matrix, \(\Phi\) is the \(m \times m\) variance-covariance matrix of the \(m\) factors, and \(\Theta\) is the residual variance-covariance matrix of the observed variables. The means are \[\mu = \tau + \Lambda \alpha\] with the \(p\) indicator intercepts \(\tau\) and the \(m\) factor means \(\alpha\).

Thus, to generate a CFA model-implied covariance matrix, Phi and Lambda (or respective shortcuts) need to be provided. For instance, the following generates the implied covariance matrix for a three-factor model with factor correlations as defined in Phi and the loading matrix as defined in Lambda:

  Phi <- matrix(c(
    c(1.0, 0.5, 0.1),
    c(0.5, 1.0, 0.2),
    c(0.1, 0.2, 1.0)
  ), byrow = TRUE, ncol = 3)
  Lambda <- matrix(c(
    c(0.4, 0.0, 0.0),
    c(0.7, 0.0, 0.0),
    c(0.8, 0.0, 0.0),
    c(0.0, 0.6, 0.0),
    c(0.0, 0.7, 0.0),
    c(0.0, 0.4, 0.0),
    c(0.0, 0.0, 0.8),
    c(0.0, 0.0, 0.7),
    c(0.0, 0.0, 0.8)
  ), byrow = TRUE, ncol = 3)
  
  gen <- semPower.genSigma(Phi = Phi, Lambda = Lambda)

semPower.genSigma returns a list comprising all model matrices (in the example above, Lambda, Phi, and the variance-covariance matrix of the manifest residuals, Theta) and the model-implied variance-covariance matrix Sigma. In addition, various lavaan model strings are also returned:

  • modelPop: model string defining a population model corresponding to the model matrices.
  • modelTrue: analysis model string (yielding a perfect model fit).
  • modelTrueCFA: only for SEM models; a pure CFA analysis model string omitting any regression relationships between the latent factors and instead allowing all latent factors to correlate.

Thus, plugging the generated variance-covariance matrix and the generated true model string into lavaan yields estimates that mirror the defined population matrices:

library(lavaan)
summary(sem(gen$modelTrue, 
            sample.cov = gen$Sigma, 
            sample.nobs = 1000,
            sample.cov.rescale = FALSE))
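
As a quick numerical check of the formula above, the returned Sigma should reproduce \(\Lambda \Phi \Lambda' + \Theta\). A minimal sketch, assuming the returned list exposes the model matrices under the names Lambda, Phi, and Theta alongside Sigma:

# Sigma should equal Lambda %*% Phi %*% t(Lambda) + Theta
all.equal(gen$Sigma,
          gen$Lambda %*% gen$Phi %*% t(gen$Lambda) + gen$Theta,
          check.attributes = FALSE)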

Instead of providing the complete loading matrix as an argument for Lambda, semPower.genSigma also understands the shortcuts described in detail earlier.
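
For instance, a sketch reusing the loadings shortcut shown earlier: the Lambda matrix above can equivalently be specified as a list of loading vectors, one per factor, which should yield the same implied covariance matrix:

gen2 <- semPower.genSigma(Phi = Phi, 
                          loadings = list(
                            c(.4, .7, .8),
                            c(.6, .7, .4),
                            c(.8, .7, .8)))
# same implied covariance matrix as before
all.equal(gen$Sigma, gen2$Sigma, check.attributes = FALSE)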

If any of the arguments is provided as a list (e.g., Lambda = list(Lambda1, Lambda2)), multiple implied covariance matrices are returned. This is particularly useful for multigroup analyses, where the covariance matrices differ across groups. For instance, the following defines the same measurement model for two groups (by providing a single argument to Lambda), but defines different factor correlations for the groups, so that two model-implied covariance matrices are returned:

  Phi1 <- matrix(c(
     c(1.0, 0.5, 0.1),
     c(0.5, 1.0, 0.2),
     c(0.1, 0.2, 1.0)
  ), byrow = TRUE, ncol = 3)
  Phi2 <- matrix(c(
     c(1.0, 0.6, 0.2),
     c(0.6, 1.0, 0.3),
     c(0.2, 0.3, 1.0)
  ), byrow = TRUE, ncol = 3)
  
  gen <- semPower.genSigma(Phi = list(Phi1, Phi2), Lambda = Lambda)

SEM model

If the model-implied covariance matrix is to be determined from a structural equation model (instead of a CFA model), semPower.genSigma expects the arguments Beta, Psi, and Lambda (or respective shortcuts). In the structural equation model, the model-implied covariance matrix is given by

\[\Sigma = \Lambda (\mathbf{I} - \mathbf{B})^{-1} \Psi [(\mathbf{I} - \mathbf{B})^{-1}]' \Lambda' + \Theta\] where \(\mathbf{B}\) is the \(m \times m\) matrix containing the regression slopes and \(\Psi\) is the (residual) variance-covariance matrix of the \(m\) factors. The means are \[\mu = \tau + \Lambda (\mathbf{I} - \mathbf{B})^{-1} \alpha\]

The structural part of the model is primarily defined through Beta (\(\mathbf{B}\)). As an example, suppose there are four factors (\(X_1\), \(X_2\), \(X_3\), \(X_4\)), and Beta is defined as follows: \[ \begin{array}{lrrrr} & X_1 & X_2 & X_3 & X_4\\ X_1 & 0.0 & 0.0 & 0.0 & 0.0 \\ X_2 & 0.0 & 0.0 & 0.0 & 0.0 \\ X_3 & 0.2 & 0.3 & 0.0 & 0.0 \\ X_4 & 0.3 & 0.5 & 0.0 & 0.0 \\ \end{array} \] Each row specifies how a particular factor is predicted by the available factors, so the above implies the following regression relations:

\[ X_1 = 0.0 \cdot X_1 + 0.0 \cdot X_2 + 0.0 \cdot X_3 + 0.0 \cdot X_4 \\ X_2 = 0.0 \cdot X_1 + 0.0 \cdot X_2 + 0.0 \cdot X_3 + 0.0 \cdot X_4 \\ X_3 = 0.2 \cdot X_1 + 0.3 \cdot X_2 + 0.0 \cdot X_3 + 0.0 \cdot X_4 \\ X_4 = 0.3 \cdot X_1 + 0.5 \cdot X_2 + 0.0 \cdot X_3 + 0.0 \cdot X_4 \]

which simplifies to

\[ X_3 = 0.2 \cdot X_1 + 0.3 \cdot X_2 \\ X_4 = 0.3 \cdot X_1 + 0.5 \cdot X_2 \]

Psi (\(\Psi\)) defines the (residual) variances and whether there are (residual) covariances between the factors. Suppose that \(\Psi\) is \[ \begin{array}{lrrrr} & X_1 & X_2 & X_3 & X_4\\ X_1 & 1.0 & 0.3 & 0.0 & 0.0 \\ X_2 & 0.3 & 1.0 & 0.0 & 0.0 \\ X_3 & 0.0 & 0.0 & 1.0 & 0.2 \\ X_4 & 0.0 & 0.0 & 0.2 & 1.0 \\ \end{array} \]

which implies that the variances of \(X_1\) and \(X_2\) are 1 and that the residual variances of \(X_3\) and \(X_4\) are also 1. Further, the covariance between \(X_1\) and \(X_2\) is .3 (which is also a correlation, because both variances are 1), and the residual covariance between \(X_3\) and \(X_4\) is .2 (which is not a correlation, because the variances of \(X_3\) and \(X_4\) also depend on Beta).

The scenario just described can be achieved by defining Beta and Psi accordingly and plugging these as input to semPower.genSigma:

Beta <- matrix(c(
  c(.00, .00, .00, .00),
  c(.00, .00, .00, .00),
  c(.20, .30, .00, .00),
  c(.30, .50, .00, .00)
), byrow = TRUE, ncol = 4)
Psi <- matrix(c(
  c(1, .30, .00, .00),
  c(.30, 1, .00, .00),
  c(.00, .00, 1, .20),
  c(.00, .00, .20, 1)
), byrow = TRUE, ncol = 4)

gen <- semPower.genSigma(Beta = Beta, Psi = Psi, Lambda = diag(ncol(Beta)))
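
To make the earlier point explicit that the .2 between \(X_3\) and \(X_4\) is a residual covariance rather than a correlation, a quick check (a sketch using the matrices just defined) computes the model-implied factor (co)variances from \((\mathbf{I} - \mathbf{B})^{-1} \Psi [(\mathbf{I} - \mathbf{B})^{-1}]'\):

# model-implied factor (co)variance matrix: (I - B)^-1 %*% Psi %*% t((I - B)^-1)
Id <- diag(ncol(Beta))
PhiImplied <- solve(Id - Beta) %*% Psi %*% t(solve(Id - Beta))
diag(PhiImplied)           # variances of X3 and X4 exceed 1
cov2cor(PhiImplied)[3, 4]  # implied correlation between X3 and X4 differs from .2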

In the example above, a model comprising only manifest variables is defined by setting Lambda = diag(ncol(Beta)). If a loading matrix or any of the shortcuts is provided instead, a genuine SEM model (including latent factors) is defined, as in the following example:

gen <- semPower.genSigma(Beta = Beta, Psi = Psi, 
                         loadM = .5, nIndicator = c(5, 4, 5, 6))

semPower.genSigma again returns a list comprising all model matrices (here: Lambda, Beta, Psi, and the variance-covariance matrix of the manifest residuals, Theta) and the model-implied variance-covariance matrix Sigma. In addition, the same lavaan model strings as in the CFA case are returned. Thus, again, plugging the generated variance-covariance matrix and the generated true model string into lavaan yields estimates that mirror the defined population matrices:

summary(sem(gen$modelTrue, 
            sample.cov = gen$Sigma, 
            sample.nobs = 1000,
            sample.cov.rescale = FALSE))

In the SEM case, semPower.genSigma also returns a pure CFA model that just estimates the factors and allows these factors to correlate, but discards any regression relationships between the factors:

summary(sem(gen$modelTrueCFA, 
            sample.cov = gen$Sigma, 
            sample.nobs = 1000,
            sample.cov.rescale = FALSE))

References

  • Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. Sociological Methods & Research, 21, 230–258. https://doi.org/10.1177/0049124192021002005

  • Foldnes, N., & Olsson, U. H. (2016). A simple simulation technique for nonnormal data with prespecified skewness, kurtosis, and covariance matrix. Multivariate Behavioral Research, 51, 207–219. https://doi.org/10.1080/00273171.2015.1133274

  • Jöreskog, K. G., & Sörbom, D. (1984). LISREL VI user’s guide (3rd ed.). Scientific Software.

  • Lin, J., & Bentler, P. M. (2012). A third moment adjusted test statistic for small sample factor analysis. Multivariate Behavioral Research, 47(3), 448-462. https://doi.org/10.1080/00273171.2012.673948

  • McDonald, R. P. (1989). An index of goodness-of-fit based on noncentrality. Journal of Classification, 6, 97–103. https://doi.org/10.1007/BF01908590

  • MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1, 130–149. https://doi.org/10.1037/1082-989X.1.2.130

  • Moshagen, M., & Erdfelder, E. (2016). A new strategy for testing structural equation models. Structural Equation Modeling, 23, 54–60. https://doi.org/10.1080/10705511.2014.950896

  • Qu, W., Liu, H., & Zhang, Z. (2020). A method of generating multivariate non-normal random numbers with desired multivariate skewness and kurtosis. Behavior Research Methods, 52, 939–946. https://doi.org/10.3758/s13428-019-01291-5

  • Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02

  • Ruscio, J., & Kaczetow, W. (2008). Simulating multivariate nonnormal data using an iterative algorithm. Multivariate Behavioral Research, 43, 355-381. https://doi.org/10.1080/00273170802285693

  • Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25, 173–180. https://doi.org/10.1207/s15327906mbr2502_4

  • Steiger, J. H., & Lind, J. C. (1980). Statistically based tests for the number of common factors. Presented at the Annual meeting of the Psychometric Society, Iowa City.

  • Stoel, R. D., Garre, F. G., Dolan, C., & Van Den Wittenboer, G. (2006). On the likelihood ratio test in structural equation modeling when parameters are subject to boundary constraints. Psychological Methods, 11, 439-455. https://doi.org/10.1037/1082-989X.11.4.439

  • Tofighi, D., & Kelley, K. (2020). Improved inference in mediation analysis: Introducing the model-based constrained optimization procedure. Psychological Methods, 25, 496–515. https://doi.org/10.1037/met0000259

  • Vale, C. D., & Maurelli, V. A. (1983). Simulating multivariate nonnormal distributions. Psychometrika, 48, 465–471. https://doi.org/10.1007/bf02293687


  1. Note that this only applies for effect-size metrics that do not adjust for parsimony. For instance, when using a constant value for the RMSEA as an effect size metric, power decreases with increasing df, because the implied \(F_0\) decreases.↩︎

  2. A third type implements order constraints on parameters, but this is associated with certain intricacies concerning the limiting distributions and is not covered by semPower.↩︎

  3. This may change once lavaan supports an optimizer that better copes with non-linear constraints, such as NPSOL or SLSQP.↩︎

  4. The estimated power in this particular example is only approximate, because the H0 model involves a parameter constraint on the boundary of the parameter space (i.e., constraining the factor correlation to 1), so that the correct limiting distribution is a mixture of non-central \(\chi^2\) distributions (see Stoel et al., 2006). In effect, power is (slightly) underestimated.↩︎

  5. Note that factor (residual) variances can be omitted, because each factor needs to be assigned a scale, so one free parameter is lost for each factor anyway. Concerning the number of free parameters, it does not matter whether factors are identified by fixing their variance or by fixing one loading.↩︎

  6. Simulated power is not supported for compromise power analysis, because this would require an infeasibly large number of replications to yield reliable results.↩︎