ORPRN CHIPRA survey
===================
Last update by Benjamin Chan (<[email protected]>) on `r paste(Sys.time())` using `r R.version.string`.
Background
----------
Original email from Katrina (emphasis mine):
> From: Katrina Ramsey
> Sent: Thursday, July 25, 2013 3:04 PM
> To: Benjamin Chan
> Subject: ORPRN Quality measures survey analysis -- suggestions?
>
> I'm working with ORPRN on a survey of providers about which quality measures they find useful. LJ Fagnan suggested that you might have useful suggestions for how to wrangle this information, and that you might actually find it interesting.
>
> **If you are interested, would you have a short time slot available to brainstorm with me early next week? Or can you point me in the direction of a similar analysis?**
>
> A brief summary:
> Two types of questions about a list of about 23 CHIPRA quality measures:
> 1) Is this measure useful? (check all that apply)
> 2) Check the five measures that are top priorities (check 5)
> A subset of these measures correspond to CCO incentive measures. **We are interested in some way of expressing their importance relative to non-CCO measures on the list.**
>
> So far, I have calculated percentages of providers (n= approx 140) who checked measures and ranked the measures by percentages. Also checked differences by specialty type, size of practice, and PCPCH status.
>
> Thanks in advance,
>
> Katrina
>
> Katrina Ramsey, MPH
> BDP Staff Biostatistician
> OCTRI Biostatistics & Design Program
> Oregon Health & Science University
> Phone: 503-418-9241
> Email: [email protected]
Follow-up email:
> From: Katrina Ramsey
> Sent: Monday, July 29, 2013 10:27 AM
> To: Benjamin Chan; Ruth Rowland; Aaron Mendelson
> Cc: Lyle Fagnan; LeAnn Michaels
> Subject: Materials for ORPRN Quality Measures Survey meeting this afternoon
>
> Thanks, everyone, for your willingness to meet today to talk about the ORPRN Quality Measures Survey. I am attaching two files for your information. The pdf file is the questionnaire itself, delivered on SurveyMonkey.com. The Excel file contains a summary of responses to Question 4 on the first tab, and some subgroupings of respondents on the second tab.
>
> We are looking for a way to summarize/test/talk about how useful and important providers find the quality measures in questions 3 and 4 of the survey. Questions 3 and 4 are similar; question 3 asks about usefulness and question 4 about priorities. Question 3 also has more detail about CAPHS measures.
>
> Some of these measures are similar to CCO incentive measures - these are identified in the Excel file - and the differences between this group of measures versus others are of particular interest.
>
> I will bring copies this afternoon. Thanks again!
>
> Katrina
The key questions from the survey are
Q3. For each measure, check the box if the measure is *USEFUL* for Your Practice. (31 items)
and
Q4. Select the 5 Measures that represent *YOUR TOP 5 PRIORITIES* for measurement for your practice. (23 items)
Proposed model
--------------
The 31 items from Q3 and the 23 items from Q4 can be viewed as binary responses (checked/unchecked) that are clustered within provider. Thus, a hierarchical logistic regression model (a generalized linear mixed model with a logit link) is proposed, clustering responses by provider.
$$\text{logit}\left(\Pr(y_i = 1)\right) = \beta_0 + \beta_1 x_\text{CCO measure} + \beta_2 x_\text{provider characteristic} + \Gamma Z_\text{provider ID}$$
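The quantity of interest is the coefficient on the CCO-measure indicator: exponentiating $\beta_1$ gives the odds ratio comparing CCO measures to non-CCO measures, which is the estimate extracted from the fitted models below.
$$\text{OR}_\text{CCO vs. non-CCO} = e^{\beta_1}$$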
How to fit model using fake data
--------------------------------
The code below walks through fitting the model for Q3 using fake data; the same code can be used to fit the model for Q4. The end goal is to estimate the odds ratio for a CCO measure being classified as *useful* relative to a non-CCO measure.
Generate fake data for 100 providers responding to the 31 checkbox items of Q3, and show the first few lines of the data frame.
```{r GenerateFakeData}
nProv <- 100
# Each provider responds to the 31 checkbox items of Q3
idProv <- factor(rep(seq(1, nProv), each=31))
idItem <- factor(rep(seq(1, 31), nProv))
# Flag the items that correspond to CCO incentive measures
isCCOMeasure <- idItem %in% c(1, 4, 8, 12, 18, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31)
# Provider-level covariate; it has no true effect on the simulated responses
x <- rep(rnorm(nProv), each=31)
# Nominal odds ratio for CCO vs. non-CCO measures used to generate the data
orNominal <- 1.2
# Pr(checked) is 0.5 for non-CCO items and plogis(log(orNominal)) for CCO items
p <- exp(log(orNominal) * isCCOMeasure) / (1 + exp(log(orNominal) * isCCOMeasure))
isUseful <- rbinom(nProv * 31, 1, p)
D <- data.frame(idProv, idItem, isCCOMeasure, x, isUseful)
head(D)
```
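As a quick sanity check on the fake data (a minimal sketch, not part of the original walkthrough), compare the observed proportion of checked items between non-CCO and CCO measures; they should be close to 0.50 and about 0.55, respectively, given how `p` was constructed.
```{r CheckFakeData}
# Observed proportion of items checked as useful, by CCO status;
# expect roughly 0.50 for non-CCO items and about 0.55 for CCO items
tapply(D$isUseful, D$isCCOMeasure, mean)
```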
Load the `lme4` and `xtable` packages.
```{r LoadLibraries}
require(lme4, quietly=TRUE)
require(xtable, quietly=TRUE)
```
Fit a random intercept model and then a random slope model to the fake data using `glmer`.
```{r FitModels}
# Random intercept for provider
M1 <- glmer(isUseful ~ isCCOMeasure + x + (1 | idProv), data=D, family=binomial)
summary(M1)
# Random intercept and random slope on the CCO indicator for provider
M2 <- glmer(isUseful ~ isCCOMeasure + x + (1 + isCCOMeasure | idProv), data=D, family=binomial)
summary(M2)
```
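One option for choosing between the two specifications (an addition to the walkthrough, not required for the odds ratios below) is a likelihood ratio test comparing the nested fits.
```{r CompareModels}
# Likelihood ratio test: does letting the CCO effect vary by provider improve fit?
anova(M1, M2)
```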
Calculate odds ratios and 95% CIs.
```{r CalculateOddsRatio, results='asis'}
calcOR <- function (M) {
  # Model formula as a string, to label each row of the results table
  f <- deparse(formula(M))
  # Odds ratio for CCO vs. non-CCO measures
  or <- exp(fixef(M)["isCCOMeasureTRUE"])
  # Wald standard error; [2, 2] is the variance of the isCCOMeasureTRUE coefficient
  se <- sqrt(vcov(M)[2, 2])
  ciLower <- exp(fixef(M)["isCCOMeasureTRUE"] + se * qnorm(0.025))
  ciUpper <- exp(fixef(M)["isCCOMeasureTRUE"] + se * qnorm(0.975))
  data.frame(Formula=f, OddsRatio=or, LowerCI=ciLower, UpperCI=ciUpper)
}
orM1 <- calcOR(M1)
orM2 <- calcOR(M2)
results <- rbind(orM1, orM2)
print(xtable(results, digits=4), type="html", include.rownames=FALSE)
```
These odds ratios should be close to the nominal OR of `r orNominal` used to simulate the data.
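As a cross-check of the hand-computed Wald intervals (a sketch assuming the current `lme4` interface), `confint()` with `method="Wald"` should reproduce the confidence limits for the CCO coefficient; exponentiating puts them on the odds-ratio scale.
```{r WaldCI}
# Wald 95% CI for the CCO coefficient from the random slope model,
# exponentiated to the odds ratio scale
exp(confint(M2, method="Wald")["isCCOMeasureTRUE", ])
```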