Title: A Collection of Functions for Negligible Effect/Equivalence Testing
Description: Researchers often want to evaluate whether there is a negligible relationship among variables. The 'negligible' package provides functions that are useful for conducting negligible effect testing (also called equivalence testing). For example, there are functions for evaluating the equivalence of means or the presence of a negligible association (correlation or regression). Beribisky, N., Mara, C., & Cribbie, R. A. (2020) <doi:10.20982/tqmp.16.4.p424>. Beribisky, N., Davidson, H., Cribbie, R. A. (2019) <doi:10.7717/peerj.6853>. Shiskina, T., Farmus, L., & Cribbie, R. A. (2018) <doi:10.20982/tqmp.14.3.p167>. Mara, C. & Cribbie, R. A. (2017) <doi:10.1080/00220973.2017.1301356>. Counsell, A. & Cribbie, R. A. (2015) <doi:10.1111/bmsp.12045>. van Wieringen, K. & Cribbie, R. A. (2014) <doi:10.1111/bmsp.12015>. Goertzen, J. R. & Cribbie, R. A. (2010) <doi:10.1348/000711009x475853>. Cribbie, R. A., Gruman, J. & Arpin-Cribbie, C. (2004) <doi:10.1002/jclp.10217>.
Authors: Robert Cribbie [aut, cre], Udi Alter [aut], Nataly Beribisky [aut], Phil Chalmers [aut], Alyssa Counsell [aut], Linda Farmus [aut], Naomi Martinez Gutierrez [aut], Victoria Ng [ctb]
Maintainer: Robert Cribbie <[email protected]>
License: GPL-3
Version: 0.1.9
Built: 2024-11-04 20:22:59 UTC
Source: https://github.com/cribbie/negligible
Testing for the presence of a negligible association between two categorical variables
neg.cat(v1 = NULL, v2 = NULL, tab = NULL, eiU = 0.2, data = NULL,
  plot = TRUE, save = FALSE, nbootpd = 1000, alpha = 0.05)

## S3 method for class 'neg.cat'
print(x, ...)
v1: first categorical variable
v2: second categorical variable
tab: contingency table for the two categorical variables
eiU: upper limit of the equivalence interval
data: optional data file containing the categorical variables
plot: logical; should a plot of the effect and the proportional distance be produced?
save: should the plot be saved as 'jpg' or 'png'?
nbootpd: number of bootstrap samples for calculating the CI for the proportional distance
alpha: nominal Type I error rate
x: object of class 'neg.cat'
...: extra arguments
This function evaluates whether a negligible relationship exists between two categorical variables.
The statistical test is based on Cramer's V statistic; namely, it addresses the question of whether the upper limit of the confidence interval for Cramer's V falls below the upper bound of the negligible effect (equivalence) interval (eiU).
If the upper bound of the CI for Cramer's V falls below eiU, we can reject H0: the relationship is nonnegligible (V >= eiU).
eiU is set to .2 by default, but should be set based on the context of the research. Since Cramer's V is in a correlation metric, setting eiU is a matter of determining what correlation is the minimally meaningful effect size (MMES) given the context of the research.
Users can input either the names of the categorical variables (v1, v2) or a frequency (contingency) table (tab).
The proportional distance (V/eiU) expresses the effect as a proportion of the distance from 0 to eiU, and acts as an alternative effect size measure.
A list containing the following:
cramv: Cramer's V statistic
propvar: Proportion of variance explained (V^2)
cil: Lower bound of the confidence interval for Cramer's V
ciu: Upper bound of the confidence interval for Cramer's V
eiU: Upper bound of the negligible effect (equivalence) interval
decis: NHST decision
PD: Proportional distance
CI95L: Lower bound of the 1-alpha CI for the PD
CI95U: Upper bound of the 1-alpha CI for the PD
alpha: Nominal Type I error rate
Rob Cribbie [email protected]
The confidence interval for the proportional distance is computed via bootstrapping (percentile bootstrap).
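To make the procedure concrete, here is a minimal sketch of a percentile-bootstrap CI for the proportional distance V/eiU. The helper cramers_v() is our own illustration, not the package's internal code, and neg.cat's actual computations may differ in detail.

cramers_v <- function(tab) {
  # Cramer's V from a two-way table: sqrt(chi2 / (n * (min(dim) - 1)))
  chi2 <- suppressWarnings(stats::chisq.test(tab, correct = FALSE)$statistic)
  unname(sqrt(chi2 / (sum(tab) * (min(dim(tab)) - 1))))
}
set.seed(1)
sex <- rep(c("m", "f"), c(12, 22))
haircol <- rep(c("bld", "brn", "bld", "brn"), c(9, 7, 11, 7))
eiU <- .2
alpha <- .05
pd <- replicate(1000, {
  i <- sample(length(sex), replace = TRUE)    # resample cases with replacement
  cramers_v(table(sex[i], haircol[i])) / eiU  # PD in this bootstrap sample
})
quantile(pd, c(alpha / 2, 1 - alpha / 2), na.rm = TRUE)  # percentile CI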
sex <- rep(c("m", "f"), c(12, 22))
haircol <- rep(c("bld", "brn", "bld", "brn"), c(9, 7, 11, 7))
d <- data.frame(sex, haircol)
tab <- table(sex, haircol)
neg.cat(tab = tab, alpha = .05, nbootpd = 5)
neg.cat(v1 = sex, v2 = haircol, data = d, nbootpd = 5)
Function performs one of six equivalence tests for the CFI fit index.
neg.cfi(mod, alpha = 0.05, eq.bound, modif.eq.bound = FALSE,
  ci.method = "equiv", nbootpd = 50, nboot = 250L, round = 3,
  plot = TRUE, saveplot = TRUE)

## S3 method for class 'neg.cfi'
print(x, ...)
mod: lavaan model object
alpha: desired alpha level (default = .05)
eq.bound: lower end of the equivalence interval for comparison; must be .99, .95, .92, or .90 if modif.eq.bound = TRUE
modif.eq.bound: should the lower end of the equivalence interval be modified? (default = FALSE)
ci.method: method used to calculate the confidence interval; options are "yuan", "equiv", or "yhy.boot"; "yuan" corresponds to a (1-alpha) percent CI, "equiv" to a (1-2alpha) percent CI, and "yhy.boot" to a (1-2alpha) percent bootstrap CI (default = "equiv")
nbootpd: number of bootstrap samples used by "yhy.boot" for the pd function
nboot: number of bootstrap samples if "yhy.boot" is selected as ci.method (default = 250L)
round: number of digits to round the equivalence bound and confidence interval bounds (default = 3)
plot: logical; plot the results (default = TRUE)
saveplot: save the plot (default = FALSE)
x: object of class 'neg.cfi'
...: extra arguments
The user specifies the lavaan fitted model object, the desired equivalence bound, and the method of confidence interval computation. By default, the function does not modify the equivalence bounds according to Yuan et al. (2016). The user can also choose to instead run an equivalence test using a modified equivalence bound if the equivalence bound to be modified is .99, .95, .92, or .90. The alpha level can also be modified.
For information on modified equivalence bounds see Yuan, K. H., Chan, W., Marcoulides, G. A., & Bentler, P. M. (2016). Assessing structural equation models by equivalence testing with adjusted fit indexes. Structural Equation Modeling: A Multidisciplinary Journal, 23(3), 319-330. https://doi.org/10.1080/10705511.2015.1065414
The proportional distance quantifies the proportional distance from 0 to the nearest bound of the negligible effect (equivalence) interval (here, eq.bound). As values get farther from 0 the relationship becomes more substantial, with values greater than 1 indicating that the effect falls outside of the negligible effect (equivalence) interval.
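The decision logic, stripped to its core, looks like the sketch below; cfi_lower is a hypothetical stand-in for the lower CI bound that neg.cfi computes internally.

cfi_lower <- 0.947  # hypothetical lower bound of the CFI confidence interval
eq.bound <- 0.95
if (cfi_lower > eq.bound) {
  message("Reject H0: misfit is negligible (CFI is acceptably close to 1)")
} else {
  message("Fail to reject H0: cannot conclude that misfit is negligible")
}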
Returns a list containing the analysis results and decision:
title1: The title of the CFI equivalence test. The appropriate title of the test will be displayed depending on the ci.method chosen and whether modif.eq.bound is TRUE or FALSE.
cfi_index: The CFI index.
ci.method: The method for confidence interval calculation.
cfi_eq: The lower end of the confidence interval for the CFI index.
eq.bound: The equivalence bound.
PD: Proportional distance (PD).
cilpd: Lower bound of the 1-alpha CI for the PD.
ciupd: Upper bound of the 1-alpha CI for the PD.
Rob Cribbie [email protected] and Nataly Beribisky [email protected]
d <- lavaan::HolzingerSwineford1939
hs.mod <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9'
fit1 <- lavaan::cfa(hs.mod, data = d)
neg.cfi(mod = fit1, alpha = .05, eq.bound = .95, modif.eq.bound = FALSE,
  ci.method = "equiv", round = 3, plot = TRUE)
Function performs an equivalence-based test of a lack of association, with resampling.
neg.cor(v1, v2, eiU, eiL, alpha = 0.05, na.rm = TRUE, plot = TRUE,
  data = NULL, saveplot = FALSE, seed = NA, ...)

## S3 method for class 'neg.cor'
print(x, ...)
v1: the first variable of interest
v2: the second variable of interest
eiU: the upper bound of the equivalence interval, in terms of the magnitude of a correlation
eiL: the lower bound of the equivalence interval, in terms of the magnitude of a correlation
alpha: desired alpha level
na.rm: logical; remove missing values?
plot: whether or not to print graphics of the results (default = TRUE)
data: optional data frame containing the two variables (v1 and v2)
saveplot: save the plot (default = FALSE)
seed: optional argument to set the seed
...: additional arguments to be passed
x: object of class 'neg.cor'
From Goertzen, J. R., & Cribbie, R. A. (2010). Detecting a lack of association. British Journal of Mathematical and Statistical Psychology, 63(3), 527–537
This function evaluates whether a negligible relationship exists between two continuous variables.
The statistical test is based on a bootstrap-generated 1-2*alpha CI for the correlation; in other words, it evaluates whether the 1-2*alpha CI for the correlation falls completely within the negligible effect (equivalence) interval.
The user needs to specify the lower and upper bounds of the negligible effect (equivalence) interval (eiL, eiU). Since we are working in a correlation metric, setting these bounds requires estimating the minimally meaningful effect size (MMES); in this case, the minimally meaningful correlation (e.g., eiL = -.3, eiU = .3).
The 'plot' argument, if TRUE, will generate a plot of the observed effect (correlation) with the associated 1-2*alpha CI, along with a plot of the PD and the associated 1-alpha CI.
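As a rough illustration of the decision rule (not the package's internal code, which adds refinements such as seeding and missing-data handling), a hand-rolled percentile bootstrap looks like this:

set.seed(1)
v1 <- rnorm(50)
v2 <- rnorm(50)
eiL <- -.2; eiU <- .2; alpha <- .05
rs <- replicate(5000, {
  i <- sample(50, replace = TRUE)  # resample pairs of observations
  cor(v1[i], v2[i])
})
ci <- quantile(rs, c(alpha, 1 - alpha))  # 1-2*alpha percentile CI
unname(ci[1] > eiL & ci[2] < eiU)        # TRUE -> negligible association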
A list including the following:
corxy: Sample correlation value
eiL: Lower bound of the negligible effect (equivalence) interval
eiU: Upper bound of the negligible effect (equivalence) interval
nresamples: Number of resamples for the bootstrapping procedure
q1: Lower bound of the confidence interval for the correlation
q2: Upper bound of the confidence interval for the correlation
PD: Proportional distance
CIPDL: Lower bound of the 1-alpha CI for the PD
CIPDU: Upper bound of the 1-alpha CI for the PD
alpha: Nominal Type I error rate
Rob Cribbie [email protected] Phil Chalmers [email protected] and Nataly Beribisky [email protected]
# Negligible correlation test between v1 and v2
# with an equivalence interval of (-.2, .2)
v1 <- rnorm(50)
v2 <- rnorm(50)
cor(v1, v2)
neg.cor(v1 = v1, v2 = v2, eiU = .2, eiL = -.2)
Function computes the equivalence testing method (total effect) for evaluating substantial mediation, and the Kenny method for full mediation.
neg.esm(X, Y, M, alpha = 0.05, minc = 0.15, eil = -0.15, eiu = 0.15,
  nboot = 1000L, data = NULL, plot = TRUE, saveplot = FALSE, seed = NA)

## S3 method for class 'neg.esm'
print(x, ...)
X: predictor variable
Y: outcome variable
M: mediator variable
alpha: alpha level (default = .05)
minc: minimum correlation between X and Y (default = .15)
eil: lower bound of the equivalence interval in standardized units (default = -.15)
eiu: upper bound of the equivalence interval in standardized units (default = .15)
nboot: number of bootstrap samples (default = 1000L)
data: optional data argument
plot: logical; plot the results (default = TRUE)
saveplot: save the plot (default = FALSE)
seed: optional argument to set the seed
x: object of class 'neg.esm'
...: extra arguments
This function evaluates whether a negligible direct effect of X on Y exists after controlling for the mediator; in other words, whether the indirect effect accounts for a substantial proportion of the variability in the X-Y relationship. See Beribisky, Mara, and Cribbie (https://doi.org/10.20982/tqmp.16.4.p424).
The user specifies the IV (X), DV (Y) and mediator (M). The user can also specify the alpha level, the lower/upper bound of the negligible effect interval (eiL, eiU), the number of bootstrap samples (nboot), as well as the minimum correlation between X and Y that is permitted for a valid test of substantial mediation.
The variables X, Y and M can be specified as stand-alone, or a data argument can be used if the data reside in an R dataset.
For the Kenny method see: https://davidakenny.net/cm/mediate.htm
The proportional distance quantifies the proportional distance from 0 to the nearest negligible effect (equivalence) interval (eiL, eiU). As values get farther from 0 the relationship becomes more substantial, with values greater than 1 indicating that the effect falls outside of the negligible effect (equivalence) interval.
Note that the number of bootstrap samples (nboot) is low in the example because examples are limited to 5 seconds of run time for CRAN testing; we recommend a much higher number of bootstrap samples for actual analyses.
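For intuition, the standardized direct effect that the test centers on can be computed with a plain regression on standardized variables; this sketch shows only the point estimate, whereas neg.esm also bootstraps its 1-2*alpha CI and checks the minimum X-Y correlation (minc).

set.seed(1)
X <- rnorm(100, sd = 2)
M <- .5 * X + rnorm(100)
Y <- .5 * M + rnorm(100)
# coefficient for scale(X): the standardized direct effect of X on Y given M
coef(lm(scale(Y) ~ scale(X) + scale(M)))["scale(X)"]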
A list including the following:
minc: Minimum correlation between X and Y for a valid negligible effect (equivalence) test
corxy: Sample correlation between the IV (X) and DV (Y)
dir_eff: Sample standardized direct effect between the IV (X) and DV (Y) after controlling for the mediator (M)
eiL: Lower bound of the negligible effect (equivalence) interval
eiU: Upper bound of the negligible effect (equivalence) interval
cil: Lower bound of the 1-2*alpha CI for the standardized direct effect of X on Y
ciu: Upper bound of the 1-2*alpha CI for the standardized direct effect of X on Y
PD: Proportional distance (PD)
cilpd: Lower bound of the 1-alpha CI for the PD
ciupd: Upper bound of the 1-alpha CI for the PD
ab_par: Standardized indirect effect
abdivc_k: Proportion mediated: standardized indirect effect divided by the standardized total effect
alpha: Nominal Type I error rate
Rob Cribbie [email protected] and Nataly Beribisky [email protected]
# Equivalence test for substantial mediation
# with an equivalence interval of -.15 to .15
X <- rnorm(100, sd = 2)
M <- .5 * X + rnorm(100)
Y <- .5 * M + rnorm(100)
neg.esm(X, Y, M, eil = -.15, eiu = .15, nboot = 5)
This function allows researchers to test whether the difference in the variances of independent populations is negligible, where 'negligible' represents the smallest meaningful effect size (MMES; here, the difference in the population variances).
neg.indvars(dv, iv, eps = 0.5, alpha = 0.05, na.rm = TRUE, data = NULL, ...)

## S3 method for class 'neg.indvars'
print(x, ...)
dv: outcome variable
iv: independent variable
eps: used to establish the equivalence bound (conservative: .25; liberal: .50, according to Wellek, 2010)
alpha: nominal Type I error rate
na.rm: missing data treatment
data: dataset containing dv and iv
...: extra arguments
x: object of class 'neg.indvars'
This function evaluates whether the difference in the population variances of J independent groups can be considered negligible (i.e., the population variances can be considered equivalent).
The user provides the name of the outcome/dependent variable (which should be continuous) and the name of the independent variable (a predictor, which should be a factor), as well as the epsilon value (eps), which determines the smallest difference in variances that can be considered non-negligible.
Wellek (2010) suggests liberal and conservative values of eps = .50 and eps = .25, respectively. See Wellek, 2010, pp. 16, 17, 22, for details.
See Mara & Cribbie (2018): https://doi.org/10.1080/00220973.2017.1301356
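The descriptive pieces of the output are easy to reproduce by hand; the sketch below computes the per-group variances and the largest-to-smallest ratio the function reports, while the LWW equivalence test itself is left to the package internals.

indvar <- rep(c("a", "b", "c", "d"), c(10, 12, 15, 13))
depvar <- rnorm(50)
vars <- tapply(depvar, indvar, var)  # per-group sample variances
max(vars) / min(vars)                # ratio of largest to smallest variance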
A list including the following:
vars: Sample variances
sds: Sample standard deviations
mads: Sample median absolute deviations
ratio: Ratio of the largest to smallest variance
eps: Epsilon (e) can be described as the maximum difference in the variances that one would consider to be unimportant (see Details).
LWW_md: Levene-Wellek-Welch statistic based on the median.
crit_LWW_md: Critical value for the Levene-Wellek-Welch statistic based on the median.
alpha: Nominal Type I error rate
Rob Cribbie [email protected] and Constance Mara [email protected]
# Two-group example
indvar <- rep(c("a", "b"), c(10, 12))
depvar <- rnorm(22)
d <- data.frame(indvar, depvar)
neg.indvars(depvar, indvar)
neg.indvars(dv = depvar, iv = indvar, eps = .25, data = d)
neg.indvars(dv = depvar, iv = indvar, eps = .5)

# Four-group example
indvar <- rep(c("a", "b", "c", "d"), c(10, 12, 15, 13))
depvar <- rnorm(50)
d <- data.frame(indvar, depvar)
neg.indvars(dv = depvar, iv = indvar, eps = .25, data = d)
neg.indvars(dv = depvar, iv = indvar)
This function allows researchers to test whether the interaction effect between two categorical independent variables, with a continuous outcome variable, is negligible.
neg.intcat(iv1 = NULL, iv2 = NULL, dv = NULL, neiL, neiU, nboot = 50,
  alpha = 0.05, data = NULL)

## S3 method for class 'neg.intcat'
print(x, ...)
iv1: levels of the first independent variable
iv2: levels of the second independent variable
dv: score on the continuous dependent/outcome variable
neiL: lower bound of the negligible effect interval
neiU: upper bound of the negligible effect interval
nboot: number of bootstrap samples for calculating CIs
alpha: nominal Type I error rate
data: dataset containing iv1, iv2, and dv
x: object of class 'neg.intcat'
...: extra arguments
This function allows researchers to test whether the interaction effect between two categorical independent variables, with a continuous outcome variable, is negligible. In this case, 'negligible' represents the minimum meaningful interaction effect.
This test uses an intersection-union approach, whereby a decision regarding the omnibus interaction effect is inferred from the decisions regarding all simple (2 x 2) interaction effects; in other words, if all simple interaction effects are deemed negligible, then the omnibus interaction is also deemed negligible. A sketch of this decision logic appears after the reference below.
The test also uses the percentile bootstrap to determine confidence intervals, an approach that has been found to be robust to violations of normality and variance homogeneity.
See Cribbie, R. A., Ragoonanan, C., & Counsell, A. (2016). Testing for negligible interaction: A coherent and robust approach. British Journal of Mathematical and Statistical Psychology, 69, 159-174.
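The intersection-union logic can be sketched with point estimates alone (the function proper works with bootstrap CIs rather than raw contrasts):

set.seed(1)
dv <- rnorm(60, mean = 50, sd = 10)
f1 <- rep(c("male", "female"), each = 30)
f2 <- rep(c("young", "middle", "old"), each = 10, times = 2)
neiL <- -15; neiU <- 15
cm <- tapply(dv, list(f1, f2), mean)  # table of cell means
negligible <- TRUE
for (i in combn(nrow(cm), 2, simplify = FALSE)) {
  for (j in combn(ncol(cm), 2, simplify = FALSE)) {
    # simple 2 x 2 interaction: difference of mean differences
    contrast <- (cm[i[1], j[1]] - cm[i[1], j[2]]) -
      (cm[i[2], j[1]] - cm[i[2], j[2]])
    negligible <- negligible && contrast > neiL && contrast < neiU
  }
}
negligible  # TRUE only if every simple interaction lies inside (neiL, neiU)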
A list including the following:
meanx: Sample mean of the first population/group.
meany: Sample mean of the second population/group.
trmeanx: Sample trimmed mean of the first population/group.
trmeany: Sample trimmed mean of the second population/group.
sdx: Sample standard deviation of the first population/group.
sdy: Sample standard deviation of the second population/group.
madx: Sample median absolute deviation of the first population/group.
mady: Sample median absolute deviation of the second population/group.
eiL: Lower bound of the negligible effect (equivalence) interval.
eiU: Upper bound of the negligible effect (equivalence) interval.
effsizeraw: Simple difference in the means (or trimmed means if normality = FALSE)
cilraw2: Lower bound of the 1-alpha CI for the raw mean difference.
ciuraw2: Upper bound of the 1-alpha CI for the raw mean difference.
cilraw: Lower bound of the 1-2*alpha CI for the raw mean difference.
ciuraw: Upper bound of the 1-2*alpha CI for the raw mean difference.
effsized: Standardized mean (or trimmed mean if normality = FALSE) difference.
cild: Lower bound of the 1-alpha CI for the standardized mean (or trimmed mean if normality = FALSE) difference.
ciud: Upper bound of the 1-alpha CI for the standardized mean (or trimmed mean if normality = FALSE) difference.
effsizepd: Proportional distance statistic.
cilpd: Lower bound of the 1-alpha CI for the proportional distance statistic.
ciupd: Upper bound of the 1-alpha CI for the proportional distance statistic.
t1: First t-statistic from the TOST procedure.
t2: Second t-statistic from the TOST procedure.
df1: Degrees of freedom for the first t-statistic from the TOST procedure.
df2: Degrees of freedom for the second t-statistic from the TOST procedure.
p1: p value associated with the first t-statistic from the TOST procedure.
p2: p value associated with the second t-statistic from the TOST procedure.
alpha: Nominal Type I error rate
Rob Cribbie [email protected]
outcome <- rnorm(60, mean = 50, sd = 10)
iv_1 <- rep(c("male", "female"), each = 30)
iv_2 <- rep(c("young", "middle", "old"), each = 10, times = 2)
d <- data.frame(iv_1, iv_2, outcome)
neg.intcat(iv1 = iv_1, iv2 = iv_2, dv = outcome, neiL = -15, neiU = 15, nboot = 10)
neg.intcat(iv1 = iv_1, iv2 = iv_2, dv = outcome, neiL = -15, neiU = 15, nboot = 10, data = d)
Testing for the presence of a negligible interaction between two continuous predictor variables
neg.intcont(outcome = NULL, pred1 = NULL, pred2 = NULL, eiL, eiU,
  standardized = TRUE, nbootpd = 1000, data = NULL, alpha = 0.05,
  plot = TRUE, save = FALSE)

## S3 method for class 'neg.intcont'
print(x, ...)
outcome: continuous outcome variable
pred1: first continuous predictor variable
pred2: second continuous predictor variable
eiL: lower limit of the negligible effect (equivalence) interval
eiU: upper limit of the negligible effect (equivalence) interval
standardized: logical; should the solution be based on standardized variables (and standardized eiL/eiU)?
nbootpd: number of bootstrap samples for calculating the CI for the proportional distance
data: optional data file containing the variables
alpha: nominal Type I error rate
plot: logical; should a plot of the effect and the proportional distance be produced?
save: logical; should the plot be saved?
x: object of class 'neg.intcont'
...: extra arguments
This function evaluates whether the interaction between two continuous predictor variables is negligible. This can be important for deciding whether to remove an interaction term from a model or for evaluating a hypothesis of negligible interaction.
eiL/eiU represent the bounds of the negligible effect (equivalence) interval (i.e., the minimally meaningful effect size, MMES) and should be set based on the context of the research. When standardized = TRUE, Acock (2014) suggests that the MMES for correlations can also be applied to standardized effects; see Acock, A. C. (2014). A Gentle Introduction to Stata (4th ed.). Texas: Stata Press.
Users can input the outcome variable and two predictor variable names directly (i.e., without a data statement), or can use the data statement to indicate the dataset in which the variables can be found.
The advantage of this approach, when standardized = TRUE and there are only two predictors, is that the delta method is adopted. For more general cases, researchers can also use the neg.reg function.
The proportional distance (interaction coefficient/negligible effect bound) expresses the effect as a proportion of the distance from 0 to the negligible effect bound, and acts as an alternative effect size measure.
The confidence interval for the proportional distance is computed via bootstrapping (percentile bootstrap).
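One common way to obtain a standardized interaction coefficient, shown below for orientation, is to standardize the outcome and predictors before forming the product term; the function's own delta-method computation may differ in detail from this naive fit.

set.seed(1)
y <- rnorm(25); x1 <- rnorm(25); x2 <- rnorm(25)
zfit <- lm(scale(y) ~ scale(x1) * scale(x2))
coef(zfit)["scale(x1):scale(x2)"]  # point estimate compared against (eiL, eiU)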
A list containing the following:
intcoef: Interaction coefficient
intcil: Lower bound of the 1-alpha CI for the interaction coefficient
intciu: Upper bound of the 1-alpha CI for the interaction coefficient
eiL: Lower bound of the negligible effect (equivalence) interval
eiU: Upper bound of the negligible effect (equivalence) interval
sprs: Semi-partial correlation squared for the interaction term
PD: Proportional distance
CI95L: Lower bound of the 1-alpha CI for the PD
CI95U: Upper bound of the 1-alpha CI for the PD
alpha: Nominal Type I error rate
Rob Cribbie [email protected]
y <- rnorm(25)
x1 <- rnorm(25)
x2 <- rnorm(25)
d <- data.frame(y, x1, x2)
neg.intcont(outcome = y, pred1 = x1, pred2 = x2, data = d,
  eiL = -.25, eiU = .25, standardized = TRUE, nbootpd = 100)
This function allows researchers to test whether a distribution of scores has a Shapiro-Wilk W statistic that is negligibly different from 1.
neg.normal(x, eiL = 0.95, nboot = 1000, plot = TRUE, alpha = 0.05, data = NULL)

## S3 method for class 'neg.normal'
print(x, ...)
x: the variable of interest (for the print method, an object of class 'neg.normal')
eiL: lower bound of the negligible effect interval for W
nboot: number of bootstrap samples for computing the CIs
plot: logical; should plots be generated?
alpha: nominal Type I error rate
data: dataset containing x
...: extra arguments
This function allows researchers to test whether a distribution of scores has a Shapiro-Wilk W statistic that is negligibly different from 1. That is, we test the null hypothesis that W is less than or equal to some prespecified lower bound for W (i.e., the least extreme value of W that is non-negligibly different from 1). We recommend .95 and .975 as liberal and conservative bounds, respectively.
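A bare-bones version of this logic, assuming a simple percentile bootstrap for the lower bound of W (the package's CI construction may differ), looks like this:

set.seed(1)
x <- rnorm(200)
eiL <- .95; alpha <- .05
w <- replicate(1000, shapiro.test(sample(x, replace = TRUE))$statistic)
lb <- quantile(w, alpha)  # bootstrap lower bound for W
unname(lb > eiL)          # TRUE -> nonnormality is negligible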
A list including the following:
sw: Sample Shapiro-Wilk W statistic
sskew: Sample skewness
skurt: Sample kurtosis
sddiff_mn_mdn: Standardized difference between the sample mean and median
sddiff_mn_trmn: Standardized difference between the sample mean and trimmed mean
lb: Lower bound of the 1-alpha CI for W
eiL: Maximum W for which the degree of nonnormality is considered extreme
Rob Cribbie [email protected] and Linda Farmus [email protected]
# Normal distribution
xx <- stats::rnorm(200)
neg.normal(xx)

# Positively skewed distribution
xx <- stats::rchisq(200, df = 3)
neg.normal(xx)
This function allows researchers to test whether the difference between the means of two dependent populations is negligible, where 'negligible' represents the smallest meaningful effect size (MMES).
neg.paired(var1 = NULL, var2 = NULL, outcome = NULL, group = NULL,
  ID = NULL, neiL, neiU, normality = TRUE, nboot = 10000, alpha = 0.05,
  plot = TRUE, saveplot = FALSE, data = NULL, seed = NA, ...)

## S3 method for class 'neg.paired'
print(x, ...)
var1: data for Group 1 (if outcome, group, and ID are omitted)
var2: data for Group 2 (if outcome, group, and ID are omitted)
outcome: dependent variable (if var1 and var2 are omitted)
group: dichotomous predictor/independent variable (if var1 and var2 are omitted)
ID: participant ID (if var1 and var2 are omitted)
neiL: lower bound of the equivalence interval
neiU: upper bound of the equivalence interval
normality: are the populations (and hence the residuals) assumed to be normally distributed?
nboot: number of bootstrap samples for calculating CIs
alpha: nominal Type I error rate
plot: should a plot of the results be produced?
saveplot: should the plot be saved?
data: dataset containing var1/var2 or outcome/group/ID
seed: seed number
...: extra arguments
x: object of class 'neg.paired'
This function evaluates whether the difference in the means of 2 dependent populations can be considered negligible (i.e., the population means can be considered equivalent).
The user specifies either the data associated with the first and second groups/populations (var1, var2; both should be continuous) or specifies the independent variable/predictor (group, which should be a factor) and the dependent variable (outcome, which should be continuous). A 'data' statement can be used if the variables are stored in an R dataset.
The user must also specify the lower and upper bounds of the negligible effect (equivalence) interval. These are specified in the original units of the outcome variable.
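With normality = TRUE the procedure reduces to a TOST on the difference scores, which can be sketched directly (the function adds robust alternatives, bootstrapping, and plots):

set.seed(1)
control <- rnorm(20); intervention <- rnorm(20)
diffs <- intervention - control
neiL <- -1; neiU <- 1
p1 <- t.test(diffs, mu = neiL, alternative = "greater")$p.value
p2 <- t.test(diffs, mu = neiU, alternative = "less")$p.value
max(p1, p2) < .05  # TRUE -> the mean difference is negligible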
A list including the following:
meanx: Sample mean of the first population/group.
meany: Sample mean of the second population/group.
medx: Sample median of the first population/group.
medy: Sample median of the second population/group.
sdx: Sample standard deviation of the first population/group.
sdy: Sample standard deviation of the second population/group.
madx: Sample median absolute deviation of the first population/group.
mady: Sample median absolute deviation of the second population/group.
neiL: Lower bound of the negligible effect (equivalence) interval.
neiU: Upper bound of the negligible effect (equivalence) interval.
effsizeraw: Simple difference in the means (or medians if normality = FALSE)
cilraw2: Lower bound of the 1-alpha CI for the raw mean difference.
ciuraw2: Upper bound of the 1-alpha CI for the raw mean difference.
cilraw: Lower bound of the 1-2*alpha CI for the raw mean difference.
ciuraw: Upper bound of the 1-2*alpha CI for the raw mean difference.
effsized: Standardized mean (or median if normality = FALSE) difference.
cild: Lower bound of the 1-alpha CI for the standardized mean (or median if normality = FALSE) difference.
ciud: Upper bound of the 1-alpha CI for the standardized mean (or median if normality = FALSE) difference.
effsizepd: Proportional distance statistic.
cilpd: Lower bound of the 1-alpha CI for the proportional distance statistic.
ciupd: Upper bound of the 1-alpha CI for the proportional distance statistic.
t1: First t-statistic from the TOST procedure.
t2: Second t-statistic from the TOST procedure.
df1: Degrees of freedom for the first t-statistic from the TOST procedure.
df2: Degrees of freedom for the second t-statistic from the TOST procedure.
pval1: p value associated with the first t-statistic from the TOST procedure.
pval2: p value associated with the second t-statistic from the TOST procedure.
alpha: Nominal Type I error rate
seed: Seed number
Rob Cribbie [email protected] Naomi Martinez Gutierrez [email protected]
# wide format
ID <- rep(1:20)
control <- rnorm(20)
intervention <- rnorm(20)
d <- data.frame(ID, control, intervention)
head(d)
neg.paired(var1 = control, var2 = intervention, neiL = -1, neiU = 1, plot = TRUE, data = d)
neg.paired(var1 = d$control, var2 = d$intervention, neiL = -1, neiU = 1, plot = TRUE)
neg.paired(var1 = d$control, var2 = d$intervention, neiL = -1, neiU = 1,
  normality = FALSE, nboot = 10, plot = TRUE)

## Not run:
# long format
sample1 <- sample(1:20, 20, replace = FALSE)
sample2 <- sample(1:20, 20, replace = FALSE)
ID <- c(sample1, sample2)
group <- rep(c("control", "intervention"), c(20, 20))
outcome <- c(control, intervention)
d <- data.frame(ID, group, outcome)
neg.paired(outcome = outcome, group = group, ID = ID, neiL = -1, neiU = 1, plot = TRUE, data = d)
neg.paired(outcome = d$outcome, group = d$group, ID = d$ID, neiL = -1, neiU = 1, plot = TRUE)
neg.paired(outcome = d$outcome, group = d$group, ID = d$ID, neiL = -1, neiU = 1,
  plot = TRUE, normality = FALSE)

# long format with multiple variables
sample1 <- sample(1:20, 20, replace = FALSE)
sample2 <- sample(1:20, 20, replace = FALSE)
ID <- c(sample1, sample2)
attendance <- sample(1:3, 20, replace = TRUE)
group <- rep(c("control", "intervention"), c(20, 20))
outcome <- c(control, intervention)
d <- data.frame(ID, group, outcome, attendance)
neg.paired(outcome = outcome, group = group, ID = ID, neiL = -1, neiU = 1, plot = TRUE, data = d)
neg.paired(outcome = d$outcome, group = d$group, ID = d$ID, neiL = -1, neiU = 1, plot = TRUE)

# open a dataset
library(negligible)
d <- perfectionism
names(d)
head(d)
neg.paired(var1 = atqpre.total, var2 = atqpost.total, neiL = -10, neiU = 10, data = d)

# dataset with missing data
x <- rnorm(10)
x[c(3, 6)] <- NA
y <- rnorm(10)
y[7] <- NA
neg.paired(x, y, neiL = -1, neiU = 1, normality = FALSE)
## End(Not run)
Proportional Distance Function (post hoc function - not to be used independently)
neg.pd(effect, PD, eil, eiu, PDcil, PDciu, cil, ciu, Elevel, Plevel, save, oe)
effect: observed effect
PD: proportional distance for the effect
eil: lower bound of the equivalence interval
eiu: upper bound of the equivalence interval
PDcil: lower bound of the CI for the proportional distance
PDciu: upper bound of the CI for the proportional distance
cil: lower bound of the CI for the effect
ciu: upper bound of the CI for the effect
Elevel: 1-2alpha CI for the effect
Plevel: 1-alpha CI for the PD
save: whether to save the plot or not
oe: name of the original units of the effect of interest
nothing is returned
## Not run:
1 + 1
## End(Not run)
This function evaluates whether a certain predictor variable in a multiple regression model can be considered statistically and practically negligible according to a predefined interval, i.e., the minimally meaningful effect size (MMES)/smallest effect size of interest (SESOI). The effect tested is the relationship between the predictor of interest and the outcome variable, holding all other predictors constant.
neg.reg(data = NULL, formula = NULL, predictor = NULL, b = NULL,
  se = NULL, nop = NULL, n = NULL, eil, eiu, alpha = 0.05, test = "AH",
  std = FALSE, bootstrap = TRUE, nboot = 1000, plots = TRUE,
  saveplots = FALSE, seed = NA, ...)

## S3 method for class 'neg.reg'
print(x, ...)
data: a data.frame or matrix which includes the variables considered in the regression model
formula: an argument of the form y ~ x1 + x2 + ... + xn which defines the regression model
predictor: name of the variable/predictor upon which the test will be applied
b: effect size of the regression coefficient of interest; can be in standardized or unstandardized units
se: standard error associated with the above regression coefficient effect size; pay close attention to standardized vs. unstandardized
nop: number of predictors (excluding the intercept) in the regression model
n: the sample size used in the regression analysis
eil: lower bound of the equivalence interval, measured in the same units as the regression coefficients (can be either standardized or unstandardized)
eiu: upper bound of the equivalence interval, measured in the same units as the regression coefficients (can be either standardized or unstandardized)
alpha: desired alpha level (default = .05)
test: "AH" is the default, based on the recommendation in Alter & Counsell (2020); "TOST" is an additional option
std: indicate whether eil and eiu, along with b (when a dataset is not entered), are in standardized units
bootstrap: logical; default is TRUE, incorporating bootstrapping when calculating regression coefficients, SEs, and CIs
nboot: number of bootstrap iterations (default = 1000)
plots: logical; plot the results (default = TRUE)
saveplots: FALSE for no; "png" or "jpeg" for different formats
seed: to reproduce previous analyses using bootstrapping, the user can set their seed of choice
...: extra arguments
x: object of class 'neg.reg'
This function evaluates whether a certain predictor variable in a multiple regression model can be considered statistically and practically negligible according to a predefined interval, i.e., the minimally meaningful effect size (MMES)/smallest effect size of interest (SESOI). The effect tested is the relationship between the predictor of interest and the outcome variable, holding all other predictors constant.
Unlike the most common null hypothesis significance tests, which look to detect a difference or the existence of an effect statistically different from zero, in negligible effect testing the hypotheses are flipped: in essence, H0 states that the effect is non-negligible, whereas H1 states that the effect is in fact statistically and practically negligible.
The statistical tests are based on the Anderson-Hauck (1983) and Schuirmann (1987) Two One-Sided Tests (TOST) equivalence testing procedures; namely, they address the question of whether the estimated effect size (and its associated uncertainty) of a predictor variable in a multiple regression model is smaller than what the user defines as a negligible effect size. Defining what is considered a negligible effect is done by specifying the negligible (equivalence) interval: its upper (eiu) and lower (eil) bounds.
The negligible (equivalence) interval should be set based on the context of the research. Because the predictor's effect size can be in either standardized or unstandardized units, setting eil and eiu is a matter of determining what magnitude of the relationship between predictor and outcome, in either standardized or unstandardized units, is the minimally meaningful effect size (MMES) given the context of the research.
It is necessary to be consistent with the units of measurement. For example, unstandardized negligible interval bounds (i.e., eil and eiu) must only be used when std = FALSE (the default). If the effect size (b), standard error (se), and sample size (n) are entered manually as arguments (i.e., without the dataset), these should also be in the same units of measurement. If the user prefers to specify eiu and eil in standardized units, std = TRUE should be specified, in which case any units entered into the function must also be in standardized form. Mixing unstandardized and standardized units would yield inaccurate results and likely lead to invalid conclusions. Thus, users must be cognizant of the measurement units of the negligible interval.
There are two main approaches to using neg.reg. The first (and more recommended) is by inserting a dataset ('data' argument) into the function. If users have access to the dataset, they should use the following set of arguments: data, formula, predictor, bootstrap (optional), nboot (optional), and seed (optional). However, this function also accommodates cases where no dataset is available. In this case, users should use the following set of arguments instead: b, se, n, and nop. In either situation, users must specify the negligible interval bounds (eiu and eil). Other optional arguments and features include: alpha, std, test, plots, and saveplots.
The proportional distance (PD; effect size/eiu) expresses the estimated effect as a proportion of the distance from 0 to eiu, and acts as an alternative effect size measure.
The confidence interval for the PD is computed via bootstrapping (percentile bootstrap), unless the user does not insert a dataset, in which case the PD confidence interval is calculated by dividing the upper and lower CI bounds for the effect size estimate by the absolute value of the negligible interval bounds.
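For the no-dataset route, a plain TOST on the summary statistics can be sketched as follows; the AH test that neg.reg runs by default is a refinement of this, and the df formula below is the usual OLS one (an assumption on our part):

b <- .017; se <- .025; n <- 500; nop <- 3
eil <- -.1; eiu <- .1; alpha <- .05
df <- n - nop - 1
p1 <- pt((b - eil) / se, df, lower.tail = FALSE)  # H0: b <= eil
p2 <- pt((b - eiu) / se, df, lower.tail = TRUE)   # H0: b >= eiu
max(p1, p2) < alpha  # TRUE -> the coefficient is statistically negligible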
A list containing the following:
formula: The regression model
effect: Specifies whether the effect size is in standardized or unstandardized units
test: Test type, i.e., Anderson-Hauck (AH) or Two One-Sided Tests (TOST)
t.value: t test statistic. If TOST was specified, only the smaller of the t values will be presented
df: Degrees of freedom associated with the test statistic
n: Sample size
p.value: p value associated with the test statistic. If TOST was specified, only the larger of the p values will be presented
eil: Lower bound of the negligible effect (equivalence) interval
eiu: Upper bound of the negligible effect (equivalence) interval
predictor: Variable name of the predictor in question
b: Effect size of the specified predictor
se: Standard error associated with the effect size point estimate (in the same units)
l.ci: Lower bound of the 1-alpha CI for the effect size
u.ci: Upper bound of the 1-alpha CI for the effect size
pd: Proportional distance
pd.l.ci: Lower bound of the 1-alpha CI for the PD
pd.u.ci: Upper bound of the 1-alpha CI for the PD
seed: Seed identity if bootstrapping is used
decision: NHST decision
alpha: Nominal Type I error rate
Udi Alter [email protected] and Alyssa Counsell [email protected] and Rob Cribbie [email protected]
# Negligible regression coefficient (equivalence interval: -.1 to .1)
pr1 <- stats::rnorm(20, mean = 0, sd = 1)
pr2 <- stats::rnorm(20, mean = 0, sd = 1)
dp <- stats::rnorm(20, mean = 0, sd = 1)
dat <- data.frame(pr1, pr2, dp)

# dataset available (unstandardized coefficients, AH procedure,
# using bootstrap-generated CIs):
neg.reg(formula = dp ~ pr1 + pr2, data = dat, predictor = pr1,
  eil = -.1, eiu = .1, nboot = 5)
neg.reg(formula = dat$dp ~ dat$pr1 + dat$pr2, predictor = pr1,
  eil = -.25, eiu = .25, nboot = 5)

# dataset unavailable (standardized coefficients, TOST procedure):
neg.reg(b = .017, se = .025, nop = 3, n = 500, eil = -.1, eiu = .1,
  std = TRUE, test = "TOST")
Function performs one of four equivalence tests for the RMSEA fit index.
neg.rmsea(mod, alpha = 0.05, eq.bound, modif.eq.bound = FALSE,
  ci.method = "not.close", nbootpd = 50L, nboot = 250L, round = 3,
  plot = TRUE, saveplot = FALSE)

## S3 method for class 'neg.rmsea'
print(x, ...)
mod: lavaan model object
alpha: desired alpha level (default = .05)
eq.bound: upper end of the equivalence interval for comparison; must be .01, .05, .08, or .10 if modif.eq.bound = TRUE
modif.eq.bound: should the upper end of the equivalence interval be modified? (default = FALSE)
ci.method: method used to calculate the confidence interval; options are "not.close" or "yhy.boot"; "not.close" corresponds to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default = "not.close")
nbootpd: number of bootstrap samples used by "yhy.boot" for the pd function
nboot: number of bootstrap samples if "yhy.boot" is selected as ci.method (default = 250L)
round: number of digits to round the equivalence bound and confidence interval bounds (default = 3)
plot: logical; plot the results (default = TRUE)
saveplot: save the plot (default = FALSE)
x: object of class 'neg.rmsea'
...: extra arguments
The user specifies the lavaan fitted model object, the desired equivalence bound, and method of confidence interval computation. By default, the function does not modify the equivalence bounds according to Yuan et al. (2016). The user can also choose to instead run an equivalence test using a modified equivalence bound if the equivalence bound to be modified is .01, .05, .08, or .10. Alpha level can also be modified.
For information on modified equivalence bounds see Yuan, K. H., Chan, W., Marcoulides, G. A., & Bentler, P. M. (2016). Assessing structural equation models by equivalence testing with adjusted fit indexes. Structural Equation Modeling: A Multidisciplinary Journal, 23(3), 319-330. doi: https://doi.org/10.1080/10705511.2015.1065414.
The proportional distance quantifies the proportional distance from 0 to the nearest bound of the negligible effect (equivalence) interval (here, eq.bound). As values get farther from 0 the relationship becomes more substantial, with values greater than 1 indicating that the effect falls outside of the negligible effect (equivalence) interval.
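Since lavaan's default RMSEA CI is already a 90 percent (i.e., 1-2*alpha with alpha = .05) interval, the "not.close" decision rule can be previewed directly from fitmeasures(); this is only an illustration of the rule, not a substitute for neg.rmsea.

library(lavaan)
hs.mod <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9'
fit1 <- cfa(hs.mod, data = HolzingerSwineford1939)
upper <- fitmeasures(fit1, "rmsea.ci.upper")  # upper end of the 90% RMSEA CI
eq.bound <- .05
unname(upper < eq.bound)  # TRUE -> reject H0 that fit is not close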
Returns a list including the following:
title1: The title of the RMSEA equivalence test. The appropriate title of the test will be displayed depending on the ci.method chosen and whether modif.eq.bound is TRUE or FALSE.
rmsea_index: The RMSEA index.
ci.method: The method for confidence interval calculation (direct computation or bootstrap).
rmsea_eq: The upper end of the 1-2*alpha confidence interval for the RMSEA index.
eq.bound: The equivalence bound.
PD: Proportional distance (PD).
cilpd: Lower bound of the 1-alpha CI for the PD.
ciupd: Upper bound of the 1-alpha CI for the PD.
Rob Cribbie [email protected] and Nataly Beribisky [email protected]
d <- lavaan::HolzingerSwineford1939
hs.mod <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9'
fit1 <- lavaan::cfa(hs.mod, data = d)
neg.rmsea(alpha = .05, mod = fit1, eq.bound = .05, ci.method = "not.close",
  modif.eq.bound = FALSE, round = 5, nboot = 50)
Function performs equivalence tests for RMSEA, CFI, and SRMR.
neg.semfit(mod, alpha = 0.05, round = 3, rmsea.eq.bound = 0.05,
  rmsea.modif.eq.bound = FALSE, rmsea.ci.method = "not.close",
  rmsea.nboot = 250L, cfi.eq.bound = 0.95, cfi.modif.eq.bound = FALSE,
  cfi.ci.method = "yhy.boot", cfi.nboot = 250L, srmr.eq.bound = 0.08,
  srmr.modif.eq.bound = FALSE, srmr.ci.method = "MO", usrmr = TRUE,
  srmr.nboot = 250L)
mod: lavaan model object
alpha: desired alpha level (default = .05)
round: number of digits to round the equivalence bounds and confidence interval bounds (default = 3)
rmsea.eq.bound: upper bound of the equivalence interval for RMSEA for comparison; must be .01, .05, .08, or .10 if rmsea.modif.eq.bound = TRUE
rmsea.modif.eq.bound: should the upper bound of the equivalence interval for RMSEA be modified? (default = FALSE)
rmsea.ci.method: method used to calculate the confidence interval for RMSEA; options are "not.close" or "yhy.boot"; "not.close" corresponds to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default = "not.close")
rmsea.nboot: number of bootstrap samples if "yhy.boot" is selected as rmsea.ci.method (default = 250L)
cfi.eq.bound: lower bound of the equivalence interval for CFI for comparison; must be .99, .95, .92, or .90 if cfi.modif.eq.bound = TRUE
cfi.modif.eq.bound: should the lower bound of the equivalence interval for CFI be modified? (default = FALSE)
cfi.ci.method: method used to calculate the confidence interval for CFI; options are "yuan", "equiv", or "yhy.boot"; "yuan" corresponds to a (1-alpha) percent CI, "equiv" to a (1-2alpha) percent CI, and "yhy.boot" to a (1-2alpha) percent bootstrap CI (default = "equiv")
cfi.nboot: number of bootstrap samples if "yhy.boot" is selected as cfi.ci.method (default = 250L)
srmr.eq.bound: upper bound of the equivalence interval for SRMR for comparison; must be .05 or .10 if srmr.modif.eq.bound = TRUE
srmr.modif.eq.bound: should the upper bound of the equivalence interval for SRMR be modified? (default = FALSE)
srmr.ci.method: method used to calculate the confidence interval for SRMR; options are "MO" or "yhy.boot"; "MO" corresponds to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default = "MO")
usrmr: fit index around which the equivalence test should be structured; usrmr = TRUE (the default) states that the unbiased SRMR from Maydeu-Olivares (2017) will be used, otherwise the srmr from lavaan's fitmeasures() output will be used
srmr.nboot: number of bootstrap samples if "yhy.boot" is selected as srmr.ci.method (default = 250L)
The user specifies the lavaan fitted model object, the desired equivalence bounds, and the method of confidence interval computation for RMSEA, CFI, and SRMR. By default, the function does not modify the equivalence bounds according to Yuan et al. (2016) or Shi et al. (2018). The user can also choose to instead run equivalence tests using modified equivalence bounds if the equivalence bound to be modified is .01, .05, .08, or .10 for RMSEA; .99, .95, .92, or .90 for CFI; or .05 or .10 for SRMR. The alpha level can also be modified.
For information on modified equivalence bounds for CFI and RMSEA see Yuan, K. H., Chan, W., Marcoulides, G. A., & Bentler, P. M. (2016). Assessing structural equation models by equivalence testing with adjusted fit indexes. Structural Equation Modeling: A Multidisciplinary Journal, 23(3), 319-330. doi: https://doi.org/10.1080/10705511.2015.1065414. For information on uSRMR and modified cut-offs for SRMR see: Maydeu-Olivares, A. (2017). Maximum likelihood estimation of structural equation models for continuous data: Standard errors and goodness of fit. Structural Equation Modeling: A Multidisciplinary Journal, 24(3), 383-394. Shi, D., Maydeu-Olivares, A., & DiStefano, C. (2018). The relationship between the standardized root mean square residual and model misspecification in factor analysis models. Multivariate Behavioral Research, 53(5), 676-694.
Returns a list containing the analysis results and decision:
title1: The appropriate title of the test will be displayed depending on the ci.method chosen and whether modif.eq.bound is TRUE or FALSE.
cfi_index: The CFI index.
ci.method: The method for confidence interval calculation.
cfi_eq: The lower end of the confidence interval for the CFI index.
eq.bound: The equivalence bound.
Rob Cribbie [email protected] and Nataly Beribisky [email protected]
d <- lavaan::HolzingerSwineford1939
hs.mod <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9'
fit1 <- lavaan::cfa(hs.mod, data = d)
neg.semfit(mod = fit1, alpha = .05)
Function performs one of four equivalence tests for the SRMR fit index.
neg.srmr(mod, alpha = 0.05, eq.bound, modif.eq.bound = FALSE,
  ci.method = "MO", usrmr = TRUE, nboot = 250L, round = 3)

## S3 method for class 'neg.srmr'
print(x, ...)
mod: lavaan model object
alpha: desired alpha level (default = .05)
eq.bound: upper bound of the equivalence interval for comparison; must be .05 or .10 if modif.eq.bound = TRUE
modif.eq.bound: should the upper bound of the equivalence interval be modified? (default = FALSE)
ci.method: method used to calculate the confidence interval; options are "MO" or "yhy.boot"; "MO" corresponds to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default = "MO")
usrmr: fit index around which the equivalence test should be structured; usrmr = TRUE (the default) states that the unbiased SRMR from Maydeu-Olivares (2017) will be used, otherwise the srmr from lavaan's fitmeasures() output will be used
nboot: number of bootstrap samples if "yhy.boot" is selected as ci.method (default = 250L)
round: number of digits to round the equivalence bound and confidence interval bounds (default = 3)
x: object of class 'neg.srmr'
...: extra arguments
The user specifies the lavaan fitted model object, the desired equivalence bound, the method of confidence interval computation, and whether unbiased SRMR or original SRMR should be used. By default, the function does not modify the equivalence bounds. The user can also choose to instead run an equivalence test using a modified equivalence bound of .05 or .10 multiplied by the average communality of the observed indicators. Alpha level can also be modified.
For information on unbiased SRMR and its confidence interval computation see Maydeu-Olivares, A. (2017). Maximum likelihood estimation of structural equation models for continuous data: Standard errors and goodness of fit. Structural Equation Modeling: A Multidisciplinary Journal, 24(3), 383-394. https://doi.org/10.1080/10705511.2016.1269606
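For reference, the ordinary SRMR that the function falls back on when usrmr = FALSE can be read straight from lavaan; the unbiased uSRMR and its CI follow Maydeu-Olivares (2017) and are computed inside the package.

library(lavaan)
hs.mod <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9'
fit1 <- cfa(hs.mod, data = HolzingerSwineford1939)
fitmeasures(fit1, "srmr")  # the ordinary SRMR used when usrmr = FALSE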
Returns a list including the following:
title1: The title of the SRMR equivalence test. The appropriate title of the test will be displayed depending on the ci.method chosen and whether usrmr and modif.eq.bound are TRUE or FALSE.
srmr_index: The SRMR index.
ci.method: The method for confidence interval calculation (direct computation or bootstrap).
srmr_ci: The upper bound of the 1-2*alpha confidence interval for the SRMR index.
eq.bound: The equivalence bound.
PD: Proportional distance (PD).
cilpd: Lower bound of the 1-alpha CI for the PD.
ciupd: Upper bound of the 1-alpha CI for the PD.
Rob Cribbie [email protected] and Nataly Beribisky [email protected]
d <- lavaan::HolzingerSwineford1939 hs.mod <- 'visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9' fit1 <- lavaan::cfa(hs.mod, data = d) neg.srmr(mod=fit1,alpha=.05,eq.bound=.08,usrmr = TRUE)
This function evaluates whether the difference between two correlation coefficients can be considered statistically and practically negligible
neg.twocors(
  data = NULL,
  r1v1 = NULL,
  r1v2 = NULL,
  r2v1 = NULL,
  r2v2 = NULL,
  r1 = NULL,
  n1 = NULL,
  r2 = NULL,
  n2 = NULL,
  dep = FALSE,
  r3 = NA,
  test = "AH",
  eiu,
  eil,
  alpha = 0.05,
  bootstrap = TRUE,
  nboot = 1000,
  seed = NA,
  plots = TRUE,
  saveplots = FALSE,
  ...
)

## S3 method for class 'neg.twocors'
print(x, ...)
data | a data.frame or matrix containing the variables used in r1 and r2 |
r1v1 | the name of the 1st variable included in the 1st correlation coefficient (r1, variable 1) |
r1v2 | the name of the 2nd variable included in the 1st correlation coefficient (r1, variable 2) |
r2v1 | the name of the 1st variable included in the 2nd correlation coefficient (r2, variable 1) |
r2v2 | the name of the 2nd variable included in the 2nd correlation coefficient (r2, variable 2) |
r1 | the 1st correlation coefficient, entered manually when no dataset is provided |
n1 | the sample size associated with r1, entered manually when no dataset is provided |
r2 | the 2nd correlation coefficient, entered manually when no dataset is provided |
n2 | the sample size associated with r2, entered manually when no dataset is provided |
dep | are the correlation coefficients dependent (overlapping)? |
r3 | if the correlation coefficients are dependent and no dataset was entered, the correlation between the two non-intersecting variables (e.g., if r1 = r12 and r2 = r13, then r3 = r23) |
test | 'AH' is the default, based on the recommendation in Counsell & Cribbie (2015); 'TOST' is an additional (albeit more conservative) option |
eiu | upper bound of the equivalence interval, i.e., the largest difference between the two correlations for which the coefficients would still be considered equivalent |
eil | lower bound of the equivalence interval, i.e., the most negative difference between the two correlations for which the coefficients would still be considered equivalent |
alpha | desired alpha level, default is .05 |
bootstrap | logical, default is TRUE; incorporates bootstrapping when calculating the correlation coefficients, SEs, and CIs |
nboot | number of bootstrap iterations, default is 1000 |
seed | to reproduce previous analyses using bootstrapping, the user can set a seed of choice |
plots | logical, should the results be plotted? TRUE is the default |
saveplots | FALSE for no saved plot; "png" or "jpeg" to save the plot in that format |
... | extra arguments |
x | object of class neg.twocors |
This function evaluates whether the difference between two correlation coefficients can be considered statistically and practically negligible according to a predefined interval, i.e., the minimally meaningful effect size (MMES) or smallest effect size of interest (SESOI). The effect size tested is the difference between two correlation coefficients (i.e., r1 - r2).
Unlike the most common null hypothesis significance tests, which look to detect a difference or the existence of an effect statistically different from zero, in negligible effect testing the hypotheses are flipped: in essence, H0 states that the effect is non-negligible, whereas H1 states that the effect is in fact statistically and practically negligible.
The statistical tests are based on Anderson-Hauck (1983) and Schuirmann's (1987) Two One-Sided Test (TOST) equivalence testing procedures; namely addressing the question of whether the estimated effect size (and its associated uncertainty) of a difference between two correlation coefficients (i.e., r1 and r2) is smaller than what the user defines as a negligible effect size. Defining what is considered a negligible effect is done by specifying the negligible (equivalence) interval: its upper (eiu) and lower (eil) bounds.
The negligible (equivalence) interval should be set based on the context of the research. Because the two correlations (and therefore their difference) are in standardized units, setting eil and eiu is a matter of determining the smallest difference between the two correlations that can be considered of practical significance. For example, if the user determines that the smallest effect of interest is 0.1 (that is, any difference between the two correlation coefficients larger than 0.1 is meaningful in this context), then eil is set to -0.1 and eiu to 0.1. Therefore, any observed difference that is larger than -0.1 AND smaller than 0.1 will be considered practically negligible.
There are two main approaches to using neg.twocors. The first (and more recommended) is to supply a dataset via the 'data' argument. If the users have access to the dataset, they should use the following set of arguments: data, r1v1, r1v2, r2v1, r2v2, dep (if applicable), bootstrap (optional), nboot (optional), and seed (optional). However, this function also accommodates cases where no dataset is available. In this case, users should use the following set of arguments instead: r1, n1, r2, n2, and r3 (if applicable). In either situation, users must specify the negligible interval bounds (eiu and eil). Other optional arguments and features include: alpha, test, plots, and saveplots.
This function accommodates both independent and dependent correlations. A user might want to compare two independent correlations. For example, the correlation between X and Y in one group (e.g., Control group; rXYc) with the correlation between X and Y in a different, independent group (e.g., Treatment group; rXYt). The 'independent correlations' setting (i.e., dep=FALSE) is the default in this function. However, in other cases, a user might want to compare two dependent correlation coefficients, that is, two correlations that share a common variable (i.e., the same variable values). For example, the correlation between X and Y in one group (e.g., Treatment group; rXYt) with the correlation between X and B in the same group (e.g., Treatment group; rXBt). Because values in variable X are shared between the two correlations, the two correlations (e.g., rXYt and rXBt) are not independent from one another but, in fact, dependent. To compare two dependent correlation coefficients, users need only specify dep=TRUE. If no dataset is entered into the function, users should also use the argument r3, which holds the correlation between the two non-shared variables. In the example above (i.e., rXYt and rXBt), the two non-shared variables are Y and B; in this case, r3 = rYBt. If dep=TRUE is entered into the function, test statistics and p values will be calculated differently to account for the shared variable. The negligible testing methods for comparing dependent correlations in this function are based on Williams's (1959) modification to Hotelling's (1931) test for comparing overlapping dependent correlations. For more details see Counsell and Cribbie (2015).
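For instance, a minimal sketch of a dependent-correlations comparison entered without a dataset (all values illustrative):

# rXY = .50 and rXB = .60 in the same sample (n = 300); the two
# non-shared variables Y and B correlate at rYB = .51, supplied as r3.
neg.twocors(r1 = 0.5, n1 = 300, r2 = 0.6, n2 = 300,
            dep = TRUE, r3 = 0.51, eiu = .15, eil = -.15)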
The proportional distance (PD; effect size/eiu) estimates the proportional distance of the estimated effect to eiu, and acts as an alternative effect size measure.
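As a worked illustration with hypothetical numbers:

# With r1 = .50, r2 = .40, and eiu = .15, the PD is
# (r1 - r2) / eiu = .10 / .15, about .67: the observed difference sits
# roughly two-thirds of the way from 0 to the negligible bound.
(0.50 - 0.40) / 0.15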
The confidence interval for the PD is computed via bootstrapping (percentile bootstrap), unless the user does not provide a dataset, in which case the PD confidence interval is calculated by dividing the upper and lower CI bounds for the effect size estimate by the absolute value of the negligible interval bounds.
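Since the PD interval is obtained by resampling when a dataset is supplied, fixing the seed argument makes it reproducible; a brief sketch reusing the variable names from the Examples below (hypothetical data):

# Running the same call twice with seed = 123 should return identical
# bootstrap-based CIs for the effect and the proportional distance.
neg.twocors(data = dat, r1v1 = v1a, r1v2 = v2a, r2v1 = v1b, r2v2 = v2b,
            eiu = .15, eil = -.15, nboot = 1000, seed = 123)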
A list containing the following:
r1v1
Name of the 1st variable included in the 1st correlation coefficient (r1, variable 1 ; if applicable)
r1v2
Name of the 2nd variable included in the 1st correlation coefficient (r1, variable 2; if applicable)
r2v1
Name of the 1st variable included in the 2nd correlation coefficient (r2, variable 1; if applicable)
r2v2
Name of the 2nd variable included in the 2nd correlation coefficient (r2, variable 2; if applicable)
r1
Effect size of the first bivariate relationship (1st correlation coefficient)
n1
Sample size in each variable included in the first correlation (r1)
r2
Effect size of the second bivariate relationship (2nd correlation coefficient)
n2
Sample size in each variable included in the second correlation (r2)
r3
If the correlation coefficients (r1 and r2) are dependent, r3 is then the correlation coefficient of the two, non-intersecting variables (e.g. if r1 = r12 and r2 = r13, then r3 = r23; if applicable)
rsdiff
The difference between the two correlation coefficients. Specifically, r1 - r2.
se
Standard error associated with the effect size point estimate (the difference between r1 and r2). The SE calculations are based on Kraatz (2007) and can be found in Counsell & Cribbie (2015)
dep
Logical. TRUE if r1 and r2 are dependent, otherwise FALSE
eil
Lower bound of the negligible effect (equivalence) interval
eiu
Upper bound of the negligible effect (equivalence) interval
u.ci.a
Upper bound of the 1-alpha CI for the effect size
l.ci.a
Lower bound of the 1-alpha CI for the effect size
pd
Proportional distance
pd.l.ci
Lower bound of the 1-alpha CI for the PD
pd.u.ci
Upper bound of the 1-alpha CI for the PD
seed
Seed identity if bootstrapping is used (if applicable)
nboot
Number of bootstrapping iterations, if bootstrapping was used (if applicable)
which.test
Test type, e.g., AH-rho-D, KTOST-rho etc. See Counsell & Cribbie (2015) for details
degfree
Degrees of freedom associated with the test statistic
pv
p value associated with the test statistic. If TOST was specified, only the larger of the p values will be presented
NHSTdecision
NHST decision
alpha
Nominal Type I error rate
Rob Cribbie [email protected] and Alyssa Counsell [email protected] and Udi Alter [email protected]
# Negligible difference between two correlation coefficients
# Equivalence interval: -.15 to .15
v1a <- stats::rnorm(10)
v2a <- stats::rnorm(10)
v1b <- stats::rnorm(10)
v2b <- stats::rnorm(10)
dat <- data.frame(v1a, v2a, v1b, v2b)
# dataset available (independent correlation coefficients):
neg.twocors(r1v1 = v1a, r1v2 = v2a, r2v1 = v1b, r2v2 = v2b, data = dat,
            eiu = .15, eil = -.15, nboot = 50, dep = FALSE)
# no dataset available (dependent correlation coefficients):
neg.twocors(r1 = 0.5, n1 = 300, r2 = 0.6, n2 = 500,
            eiu = .15, eil = -0.15, dep = TRUE, r3 = 0.51)
# end.
This function allows researchers to test whether the difference between the means of two independent populations is negligible, where negligible represents the smallest meaningful effect size (MMES; here, the mean difference)
neg.twoindmeans(
  v1 = NULL,
  v2 = NULL,
  dv = NULL,
  iv = NULL,
  eiL,
  eiU,
  varequiv = FALSE,
  normality = FALSE,
  tr = 0.2,
  nboot = 500,
  alpha = 0.05,
  plot = TRUE,
  saveplot = FALSE,
  data = NULL
)

## S3 method for class 'neg.twoindmeans'
print(x, ...)
v1 | Data for Group 1 (if dv and iv are omitted) |
v2 | Data for Group 2 (if dv and iv are omitted) |
dv | Dependent Variable (if v1 and v2 are omitted) |
iv | Dichotomous Predictor/Independent Variable (if v1 and v2 are omitted) |
eiL | Lower Bound of the Equivalence Interval |
eiU | Upper Bound of the Equivalence Interval |
varequiv | Are the population variances assumed to be equal? Population variances are assumed to be unequal if normality = FALSE. |
normality | Are the populations (and hence the residuals) assumed to be normally distributed? |
tr | Proportion of trimming from each tail (relevant if normality = FALSE) |
nboot | Number of bootstrap samples for calculating CIs |
alpha | Nominal Type I Error rate |
plot | Should a plot of the results be produced? |
saveplot | Should the plot be saved? |
data | Dataset containing v1/v2 or iv/dv |
x | object of class neg.twoindmeans |
... | extra arguments |
This function evaluates whether the difference in the means of 2 independent populations can be considered negligible (i.e., the population means can be considered equivalent).
The user specifies either the data associated with the first and second groups/populations (v1, v2; both should be continuous) or the Independent Variable/Predictor (iv, which should be a factor) and the Dependent Variable (dv, which should be continuous). A 'data' statement can be used if the variables are stored in an R dataset.
The user must also specify the lower and upper bounds of the negligible effect (equivalence) interval. These are specified in the original units of the outcome variable.
The arguments 'varequiv' and 'normality' control which test statistic is adopted. If varequiv = TRUE and normality = TRUE, the ordinary Student t statistic is adopted. If varequiv = FALSE and normality = TRUE, the Welch t statistic is adopted. If normality = FALSE, a Yuen-type statistic based on trimmed means (with the proportion of trimming per tail set by 'tr') is adopted.
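As a brief sketch of the three configurations (reusing the simulated indvar/depvar data from the Examples below):

# Student t: equal variances and normality assumed
neg.twoindmeans(dv = depvar, iv = indvar, eiL = -1, eiU = 1,
                varequiv = TRUE, normality = TRUE, data = d)

# Welch t: unequal variances, normality assumed
neg.twoindmeans(dv = depvar, iv = indvar, eiL = -1, eiU = 1,
                varequiv = FALSE, normality = TRUE, data = d)

# Trimmed means (20% from each tail): normality not assumed
neg.twoindmeans(dv = depvar, iv = indvar, eiL = -1, eiU = 1,
                normality = FALSE, tr = 0.2, data = d)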
A list including the following:
meanx
Sample mean of the first population/group.
meany
Sample mean of the second population/group.
trmeanx
Sample trimmed mean of the first population/group.
trmeany
Sample trimmed mean of the second population/group.
sdx
Sample standard deviation of the first population/group.
sdy
Sample standard deviation of the second population/group.
madx
Sample median absolute deviation of the first population/group.
mady
Sample median absolute deviation of the second population/group.
eiL
Lower bound of the negligible effect (equivalence) interval.
eiU
Upper bound of the negligible effect (equivalence) interval.
effsizeraw
Simple difference in the means (or trimmed means if normality = FALSE)
cilraw2
Lower bound of the 1-alpha CI for the raw mean difference.
ciuraw2
Upper bound of the 1-alpha CI for the raw mean difference.
cilraw
Lower bound of the 1-2*alpha CI for the raw mean difference.
ciuraw
Upper bound of the 1-2*alpha CI for the raw mean difference.
effsized
Standardized mean (or trimmed mean if normality = FALSE) difference.
cild
Lower bound of the 1-alpha CI for the standardized mean (or trimmed mean if normality = FALSE) difference.
ciud
Upper bound of the 1-alpha CI for the standardized mean (or trimmed mean if normality = FALSE) difference.
effsizepd
Proportional distance statistic.
cilpd
Lower bound of the 1-alpha CI for the proportional distance statistic.
ciupd
Upper bound of the 1-alpha CI for the proportional distance statistic.
t1
First t-statistic from the TOST procedure.
t2
Second t-statistic from the TOST procedure.
df1
Degrees of freedom for the first t-statistic from the TOST procedure.
df2
Degrees of freedom for the second t-statistic from the TOST procedure.
p1
p value associated with the first t-statistic from the TOST procedure.
p2
p value associated with the second t-statistic from the TOST procedure.
alpha
Nominal Type I error rate
Rob Cribbie [email protected] R. Philip Chalmers [email protected] Naomi Martinez Gutierrez [email protected]
indvar <- rep(c("a", "b"), c(10, 12))
depvar <- rnorm(22)
d <- data.frame(indvar, depvar)
neg.twoindmeans(dv = depvar, iv = indvar, eiL = -1, eiU = 1, plot = TRUE, data = d)
neg.twoindmeans(dv = depvar, iv = indvar, eiL = -1, eiU = 1)
neg.twoindmeans(v1 = depvar[indvar == "a"], v2 = depvar[indvar == "b"], eiL = -1, eiU = 1)
xx <- neg.twoindmeans(dv = depvar, iv = indvar, eiL = -1, eiU = 1)
xx$decis
This dataset comes from the dissertation of Chantal Arpin-Cribbie. The study was an RCT looking at the effect of an online CBT therapy on perfectionism (and related variables) in a sample of undergraduate students with extreme perfectionism. This dataset has missing data imputed with a single stochastic regression imputation.
data(perfectionism)
A data frame with 83 rows and 17 variables:
whether the participants received the CBT therapy, a general stress reduction protocol, or no treatment
Pretest Score, Self-oriented Perfectionism, Hewitt & Flett Multidimensional Perfectionism Scale
Pretest Score, Socially-prescribed Perfectionism, Hewitt & Flett Multidimensional Perfectionism Scale
Pretest Score, Perfectionism Cognitions Inventory
Pretest Score, Beck Anxiety Inventory
Pretest Score, CESD Depression Scale
Pretest Score, Concern Over Mistakes subscale, Frost Multidimensional Perfectionism Scale
Posttest Score, Self-oriented Perfectionism, Hewitt & Flett Multidimensional Perfectionism Scale
Posttest Score, Socially-prescribed Perfectionism, Hewitt & Flett Multidimensional Perfectionism Scale
Posttest Score, Perfectionism Cognitions Inventory
Posttest Score, Beck Anxiety Inventory
Posttest Score, CESD Depression Scale
Posttest Score, Concern Over Mistakes subscale, Frost Multidimensional Perfectionism Scale
Pretest Score, Automatic Thoughts Questionnaire
Posttest Score, Automatic Thoughts Questionnaire
Pretest Score, Other Oriented Perfectionism, Hewitt & Flett Multidimensional Perfectionism Scale
Posttest Score, Other Oriented Perfectionism, Hewitt & Flett Multidimensional Perfectionism Scale
...
https://pubmed.ncbi.nlm.nih.gov/22122217/