---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```
<!-- badges: start -->
[![R-CMD-check](https://github.com/jepusto/scdhlm/workflows/R-CMD-check/badge.svg)](https://github.com/jepusto/scdhlm/actions)
[![Codecov Status](https://codecov.io/gh/jepusto/scdhlm/branch/master/graph/badge.svg)](https://codecov.io/gh/jepusto/scdhlm?branch=master)
[![](http://www.r-pkg.org/badges/version/scdhlm)](https://CRAN.R-project.org/package=scdhlm)
[![](http://cranlogs.r-pkg.org/badges/grand-total/scdhlm)](https://CRAN.R-project.org/package=scdhlm)
[![](http://cranlogs.r-pkg.org/badges/last-month/scdhlm)](https://CRAN.R-project.org/package=scdhlm)
<!-- badges: end -->
# Estimating Hierarchical Linear Models for Single-Case Designs
`scdhlm` provides a set of tools for estimating hierarchical linear models and effect sizes based on data from single-case designs. The estimated effect sizes, as described in Pustejovsky, Hedges, and Shadish (2014), are comparable in principle to standardized mean differences (SMDs) estimated from between-subjects randomized experiments. The package includes functions for estimating design-comparable SMDs based on data from treatment reversal designs with replication across participants (Hedges, Pustejovsky, & Shadish, 2012), across-participant multiple baseline designs and multiple probe designs (Hedges, Pustejovsky, & Shadish, 2013; Pustejovsky, Hedges, & Shadish, 2014), and more complex variations of multiple baseline designs (Chen, Pustejovsky, Klingbeil, & Van Norman, 2023). Two estimation methods are available: moment estimation and restricted maximum likelihood estimation. The package also includes an interactive web interface implemented using Shiny.
# Acknowledgment
<img src="https://raw.githubusercontent.com/jepusto/scdhlm/master/images/IES_InstituteOfEducationSciences_RGB.svg" width="40%" align = "right" alt = "Institute of Education Sciences logo"/>
The development of this R package was supported in part by the Institute of Education Sciences, U.S. Department of Education, through [Grant R324U190002](https://ies.ed.gov/funding/grantsearch/details.asp?ID=3358) to the University of Oregon. The contents of the package do not necessarily represent the views of the Institute or the U.S. Department of Education.
# Citations
Please cite this R package as follows:
> Pustejovsky, J. E., Chen, M., & Hamilton, B. J. (`r substr(packageDate("scdhlm"),1,4)`). scdhlm: Estimating hierarchical linear models for single-case designs (Version `r packageVersion("scdhlm")`) [R package]. https://CRAN.R-project.org/package=scdhlm
Please cite the web application as follows:
> Pustejovsky, J. E., Chen, M., Hamilton, B., & Grekov, P. (`r substr(packageDate("scdhlm"),1,4)`). scdhlm: A web-based calculator for between-case standardized mean differences (Version `r packageVersion("scdhlm")`) [Web application]. https://jepusto.shinyapps.io/scdhlm
# Installation
You can install the released version of `scdhlm` from [CRAN](https://CRAN.R-project.org) with:
```{r, eval=FALSE}
install.packages("scdhlm")
```
You can install the development version from [GitHub](https://github.com/) with:
```{r, eval=FALSE}
# install.packages("remotes")
remotes::install_github("jepusto/scdhlm")
```
You can access a local version of the [interactive web-app](https://jepusto.shinyapps.io/scdhlm/) by running the commands:
```{r, eval=FALSE}
library(scdhlm)
shine_scd()
```
# Demonstration
Here we demonstrate how to use `scdhlm` to calculate design-comparable SMDs based on data from different single-case designs. We first demonstrate the recommended approach, which uses restricted maximum likelihood (REML) estimation. We then demonstrate the older moment estimation methods, which were the approach originally proposed by Hedges, Pustejovsky, and Shadish (2012, 2013). The package provides these methods for the sake of completeness, but we no longer recommend them for general use.
## Estimating SMDs using REML with `g_mlm()`
Laski, Charlop, and Schreibman (1988) used a multiple baseline design across individuals to evaluate the effect of a training program for parents on the speech production of their autistic children, as measured using a partial interval recording procedure. The design included eight children. One child was measured separately with each parent; following Hedges, Pustejovsky, and Shadish (2013), only the measurements taken with the mother are included in our analysis.
For this study, we will estimate a design-comparable SMD using restricted maximum likelihood (REML) methods, as described by Pustejovsky and colleagues (2014). This is a two-step process. The first step is to estimate a hierarchical linear model for the data, treating the measurements as nested within cases. We fit the model using `nlme::lme()`:
```{r Laski}
library(nlme)
library(scdhlm)
data(Laski)
# Pustejovsky, Hedges, & Shadish (2014)
Laski_RML <- lme(fixed = outcome ~ treatment,
                 random = ~ 1 | case,
                 correlation = corAR1(0, ~ time | case),
                 data = Laski)
Laski_RML
```
The summary of the fitted model displays estimates of the component parameters, including the within-case and between-case standard deviations, the auto-correlation, and the (unstandardized) treatment effect estimate. These estimated components will be used to calculate the effect size in the next step.
The estimated variance components from the fitted model can be obtained using `extract_varcomp()`:
```{r Laski-varcomp}
varcomp_Laski_RML <- extract_varcomp(Laski_RML)
varcomp_Laski_RML
```
The estimated between-case variance is `r round(as.numeric(varcomp_Laski_RML$Tau$case), 3)`, the estimated auto-correlation is `r round(varcomp_Laski_RML$cor_params, 3)`, and the estimated within-case variance is `r round(varcomp_Laski_RML$sigma_sq, 3)`. These estimated variance components will be used to calculate the effect size in the next step.
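To make concrete how these components enter the effect size, here is a minimal sketch of the unadjusted ratio: the treatment effect estimate divided by the square root of the sum of the between-case and within-case variances. (The next step also applies a small-sample correction, so this sketch is illustrative rather than a substitute.)
```{r Laski-by-hand, eval=FALSE}
# Illustrative sketch only: an unadjusted design-comparable SMD, computed from
# the pieces extracted above. The next step additionally applies a
# small-sample correction.
b_trt <- nlme::fixef(Laski_RML)[2]                  # unstandardized treatment effect
S_sq <- as.numeric(varcomp_Laski_RML$Tau$case) +    # between-case variance
  varcomp_Laski_RML$sigma_sq                        # within-case variance
b_trt / sqrt(S_sq)                                  # unadjusted effect size estimate
```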
The second step in the process is to estimate a design-comparable SMD using `scdhlm::g_mlm()`.
The SMD parameter can be defined as the ratio of a linear combination of the fitted model’s fixed effect parameters to the square root of a linear combination of the model’s variance components. `g_mlm()` takes the fitted `lme` model object as input, followed by the vectors `p_const` and `r_const`, which specify the components of the fixed effects and variance estimates that are used in constructing the design-comparable SMD. Note that `r_const` is a vector of 0s and 1s indicating which variance component parameters enter the effect size calculation; its entries correspond, in order, to the random effects variances, correlation structure parameters, variance structure parameters, and level-1 error variance. The function calculates an effect size estimate by first substituting maximum likelihood or restricted maximum likelihood estimates in place of the corresponding parameters and then applying a small-sample correction. The small-sample correction and the standard error are based on approximating the distribution of the estimator by a t distribution, with degrees of freedom given by a Satterthwaite approximation (Pustejovsky, Hedges, & Shadish, 2014). The `g_mlm()` function also includes an option to use either the expected or the average form of the Fisher information matrix in the calculations.
In this example, we use the treatment effect in the numerator of the effect size and the sum of the between-case and within-case variance components in the denominator. The constants are therefore given by `p_const = c(0, 1)` and `r_const = c(1, 0, 1)`. The effect size estimate is calculated as:
```{r Laski-bc-smd, eval=TRUE}
Laski_ES_RML <- g_mlm(Laski_RML, p_const = c(0, 1), r_const = c(1, 0, 1))
Laski_ES_RML
```
The adjusted SMD effect size estimate is `r round(Laski_ES_RML$g_AB, 3)`, with a standard error of `r round(Laski_ES_RML$SE_g_AB, 3)` and `r round(Laski_ES_RML$nu, 1)` degrees of freedom.
A `summary()` method is included, which returns more detail about the model parameter estimates and effect size estimate when setting `returnModel = TRUE` (the default) in `g_mlm()`:
```{r Laski-summary, eval=TRUE}
summary(Laski_ES_RML)
```
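As noted above, `g_mlm()` also allows a choice between the expected and the average form of the Fisher information matrix. The sketch below shows how that choice might be requested; the argument name (`infotype`) and the value used here are assumptions, so consult `?g_mlm` for the exact interface.
```{r Laski-infotype, eval=FALSE}
# Assumed interface: the form of the Fisher information matrix is selected via
# an `infotype` argument (check ?g_mlm for the argument name and accepted values).
g_mlm(Laski_RML, p_const = c(0, 1), r_const = c(1, 0, 1),
      infotype = "averaged")
```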
The `CI_g()` function calculates a symmetric confidence interval based on a central t distribution (the default) or an asymmetric confidence interval based on a non-central t distribution (by setting `symmetric = FALSE`).
```{r Laski-CI, eval=TRUE}
CI_g(Laski_ES_RML)
CI_g(Laski_ES_RML, symmetric = FALSE)
```
The symmetric confidence interval is `r paste0("[", round(CI_g(Laski_ES_RML),3)[1], ", ", round(CI_g(Laski_ES_RML),3)[2], "]")` and the asymmetric confidence interval is `r paste0("[", round(CI_g(Laski_ES_RML, symmetric = FALSE),3)[1], ", ", round(CI_g(Laski_ES_RML, symmetric = FALSE),3)[2], "]")`.
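For reporting or for use in a meta-analysis, these pieces can be collected into a single row. Here is a minimal sketch using the components referenced above (`g_AB`, `SE_g_AB`, and `nu`) along with the bounds returned by `CI_g()`:
```{r Laski-report, eval=FALSE}
# Gather the adjusted effect size estimate, its standard error, the
# Satterthwaite degrees of freedom, and the symmetric CI bounds in one row.
Laski_CI <- CI_g(Laski_ES_RML)
data.frame(
  study = "Laski et al. (1988)",
  g = Laski_ES_RML$g_AB,
  SE = Laski_ES_RML$SE_g_AB,
  df = Laski_ES_RML$nu,
  CI_lower = Laski_CI[1],
  CI_upper = Laski_CI[2]
)
```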
## Estimating SMDs using `effect_size_ABk()`
Lambert, Cartledge, Heward, and Lo (2006) tested the effect of using response cards (compared to single-student responding) during math lessons in two fourth-grade classrooms. The investigators collected data on rates of disruptive behavior and academic response for nine focal students, using an ABAB design. This example is discussed in Hedges, Pustejovsky, and Shadish (2012), who selected it because the design was close to balanced and used a relatively large number of cases. Their calculations can be replicated using the `effect_size_ABk()` function. To use this function, the user must provide the names of five variables:
* the outcome variable,
* a variable indicating the treatment condition,
* a variable listing the case on which the outcome was measured,
* a variable indicating the phase of treatment (i.e., each replication of a baseline and treatment condition), and
* a variable listing the session number.
In the `Lambert` dataset, these variables are called respectively `outcome`, `treatment`, `case`, `phase`, and `time`. Given these inputs, the design-comparable SMD is calculated as follows for the measure of academic response:
```{r}
data(Lambert)
Lambert_academic <- subset(Lambert, measure == "academic response")
Lambert_ES <- effect_size_ABk(outcome = outcome, treatment = treatment, id = case,
                              phase = phase, time = time, data = Lambert_academic)
Lambert_ES
```
The adjusted effect size estimate `delta_hat` is equal to `r round(Lambert_ES$delta_hat, 3)`; its variance `V_delta_hat` is equal to `r round(Lambert_ES$V_delta_hat,3)`. A standard error for `delta_hat` can be calculated by taking the square root of `V_delta_hat`: `sqrt(Lambert_ES$V_delta_hat)` = `r round(sqrt(Lambert_ES$V_delta_hat),3)`. The effect size estimate is bias-corrected in a manner analogous to Hedges' g correction for SMDs from a between-subjects design. The degrees of freedom `nu` are estimated using a Satterthwaite-type approximation and are equal to `r round(Lambert_ES$nu,1)` in this example.
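To illustrate what such a bias correction looks like, the sketch below applies the familiar Hedges-type factor J = 1 - 3/(4 * nu - 1); treating this as the exact factor used by `effect_size_ABk()` is an assumption on our part, based on the description above.
```{r Lambert-correction, eval=FALSE}
# Sketch of a Hedges-type small-sample correction factor, J = 1 - 3 / (4 * nu - 1).
# Assumption: the correction used internally is analogous to, but not necessarily
# identical to, this factor.
J <- 1 - 3 / (4 * Lambert_ES$nu - 1)
J                          # correction factor implied by the estimated df
Lambert_ES$delta_hat / J   # approximate unadjusted (uncorrected) estimate
```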
A `summary()` method is included to return more detail about the model parameter estimates and effect size estimates:
```{r}
summary(Lambert_ES)
```
## Estimating SMDs using `effect_size_MB()`
Saddler, Behforooz, and Asaro (2008) used a multiple baseline design to investigate the effect of an instructional technique on the writing of fourth-grade students. The investigators assessed the intervention's effect on measures of writing quality, sentence complexity, and use of target constructions.
Design-comparable SMDs can be estimated based on these data using the `effect_size_MB()` function. The following code calculates a design-comparable SMD estimate for the measure of writing quality:
```{r}
data(Saddler)
Saddler_quality <- subset(Saddler, measure == "writing quality")
quality_ES <- effect_size_MB(outcome, treatment, case, time, data = Saddler_quality)
quality_ES
```
The adjusted effect size estimate `delta_hat` is equal to `r round(quality_ES$delta_hat, 3)`, with sampling variance of `V_delta_hat` equal to `r round(quality_ES$V_delta_hat, 3)` and a standard error of `r round(sqrt(quality_ES$V_delta_hat), 3)`.
`summary(quality_ES)` returns more detail about the model parameter estimates and effect size estimates:
```{r}
summary(quality_ES)
```
# References
Chen, M., Pustejovsky, J. E., Klingbeil, D. A., & Van Norman, E. R. (2023). Between-case standardized mean differences: Flexible methods for single-case designs. _Journal of School Psychology, 98_, 16-38. https://doi.org/10.1016/j.jsp.2023.02.002
Gilmour, A. R., Thompson, R., & Cullis, B. R. (1995). Average information REML: An efficient algorithm for variance parameter estimation in linear mixed models. _Biometrics, 51_(4), 1440–1450. https://doi.org/10.2307/2533274
Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2012). A standardized mean difference effect size for single case designs. _Research Synthesis Methods, 3_(3), 224-239. https://doi.org/10.1002/jrsm.1052
Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2013). A standardized mean difference effect size for multiple baseline designs across individuals. _Research Synthesis Methods, 4_(4), 324-341. https://doi.org/10.1002/jrsm.1086
Lambert, M. C., Cartledge, G., Heward, W. L., & Lo, Y. (2006). Effects of response cards on disruptive behavior and academic responding during math lessons by fourth-grade urban students. _Journal of Positive Behavior Interventions, 8_(2), 88-99. https://doi.org/10.1177/10983007060080020701
Laski, K. E., Charlop, M. H., & Schreibman, L. (1988). Training parents to use the natural language paradigm to increase their autistic children’s speech. _Journal of Applied Behavior Analysis, 21_(4), 391–400. https://doi.org/10.1901/jaba.1988.21-391
Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. _Journal of Educational and Behavioral Statistics, 39_(4), 211-227. https://doi.org/10.3102/1076998614547577
Saddler, B., Behforooz, B., & Asaro, K. (2008). The effects of sentence-combining instruction on the writing of fourth-grade students with writing difficulties. _The Journal of Special Education, 42_(2), 79–90. https://doi.org/10.1177/0022466907310371