```{r, echo = F, cache = F}
knitr::opts_chunk$set(fig.retina = 2.5)
knitr::opts_chunk$set(fig.align = "center")
options(width = 110)
```
# ~~Horoscopes~~ Insights
> Statistical inference is indeed critically important. But only as much as every other part of research. Scientific discovery is not an additive process, in which sin in one part can be atoned by virtue in another. Everything interacts. So equally when science works as intended as when it does not, every part of the process deserves attention. [@mcelreathStatisticalRethinkingBayesian2020, p. 553]
In this final chapter, there are no models for us to fit and no figures for us to reimagine. McElreath took the opportunity to comment more broadly on the scientific process. He made a handful of great points, some of which I'll quote in a bit. But for the bulk of this chapter, I'd like to take the opportunity to pass on a few of my own insights about workflow. I hope they're of use.
## Use R Notebooks
OMG
I first started using **R** in the winter of 2015/2016. Right from the start, I learned how to code from within the [RStudio](https://www.rstudio.com) environment. But within RStudio I was using simple scripts. No longer. I now use [R Notebooks](http://rmarkdown.rstudio.com/r_notebooks.html) for just about everything, including my scientific projects, this [**bookdown**](https://bookdown.org)-powered ebook, and even my academic [webpage](https://solomonkurz.netlify.com) and [blog](https://solomonkurz.netlify.com/blog/). Nathan Stephens wrote a nice blog post, [*Why I love R Notebooks*](https://rviews.rstudio.com/2017/03/15/why-i-love-r-notebooks/). I agree. This has fundamentally changed my workflow as a scientist. I only wish I'd learned about this before starting my dissertation project. So it goes...
Do yourself a favor, adopt R Notebooks into your workflow. Do it today. If you prefer to learn with videos, here's a [nice intro](https://youtu.be/TJmNvfhLCoI?t=195) by [Kristine Yu](https://www.krisyu.org/) and [another one](https://youtu.be/GG4pgtfDpWY?t=324) by [JJ Allaire](https://twitter.com/fly_upside_down). Try it out for like one afternoon and you'll be hooked.
## Save your model fits
It's embarrassing how long it took for this to dawn on me.
Unlike models fit with classical methods, Bayesian models using MCMC take a while to compute. Most of the simple models in McElreath's text take anywhere from 30 seconds to a couple of minutes. If your data are small, well behaved, and of a simple structure, you might have a lot of wait times in that range in your future.
It hasn't been that way, for me.
Most of my data have a complicated multilevel structure and often aren't very well behaved. It's normal for my models to take an hour or several to fit. Once you start measuring your model fit times in hours, you do not want to fit these things more than once. So, it's not enough to document my code in a nice R Notebook file. I need to save my `brm()` fit objects in external files.
Consider this model. It's taken from Bürkner's [-@Bürkner2022Multivariate] vignette, [*Estimating multivariate models with brms*](https://cran.r-project.org/package=brms/vignettes/brms_multivariate.html). It took about three minutes for my personal laptop to fit.
```{r, warning = F, message = F}
library(brms)
data("BTdata", package = "MCMCglmm")
```
```{r b17.1, echo = F}
# save(b17.1, file = "fits/b17.01.rda")
# rm(b17.1)
load("fits/b17.01.rda")
```
```{r, eval = F}
b17.1 <-
brm(data = BTdata,
family = gaussian,
bf(mvbind(tarsus, back) ~ sex + hatchdate + (1 | p | fosternest) + (1 | q | dam)) +
set_rescor(TRUE),
chains = 2, cores = 2,
seed = 17)
```
Three minutes isn't terribly long to wait, but still. I'd prefer never to wait those three minutes again. Sure, if I save my code in a document like this, I will always be able to fit the model again. But I can work smarter. Here I'll save my `b17.1` object outside of **R** with the `save()` function.
```{r}
save(b17.1, file = "fits/b17.01.rda")
```
Hopefully y'all are savvy Bayesian **R** users and find this insultingly remedial. But if it's new to you like it was for me, you can learn more about `.rda` files [here](https://www.r-bloggers.com/load-save-and-rda-files/).
Now that `b17.1` is saved outside of **R**, I can safely remove it and then reload it.
```{r}
rm(b17.1)
load("fits/b17.01.rda")
```
The file took a fraction of a second to reload. Once reloaded, I can perform typical operations, like examining summaries of the model parameters or refreshing my memory on what data I used.
```{r, message = F, warning = F}
print(b17.1)
head(b17.1$data)
```
The other option, which we've been using extensively throughout the earlier chapters, is to use the `file` argument within the `brms::brm()` function. You can read about the origins of the argument in [issue #472 on the **brms** GitHub repo](https://github.com/paul-buerkner/brms/issues/472). To make use of the `file` argument, specify a character string. `brm()` will then save your fitted model object in an external `.rds` file via the `saveRDS()` function. Let's give it a whirl, this time with an interaction.
```{r b17.2, message = F, results = "hide"}
b17.2 <-
brm(data = BTdata,
family = gaussian,
bf(mvbind(tarsus, back) ~ sex * hatchdate + (1 | p | fosternest) + (1 | q | dam)) +
set_rescor(TRUE),
chains = 2, cores = 2,
seed = 17,
file = "fits/b17.02")
```
Now that `b17.2` is saved outside of **R**, I can safely remove it and then reload it.
```{r}
rm(b17.2)
```
We might load `b17.2` with the `readRDS()` function.
```{r}
b17.2 <- readRDS("fits/b17.02.rds")
```
Now we can work with `b17.2` as desired.
```{r}
fixef(b17.2)
```
The `file` method has another handy feature. Let's remove `b17.2` one more time to see.
```{r}
rm(b17.2)
```
If you've fit a `brm()` model once and saved the results with `file`, executing the same `brm()` code will not re-fit the model. Rather, it will just load and return the model from the `.rds` file.
```{r, message = F, results = "hide"}
b17.2 <-
brm(data = BTdata,
family = gaussian,
bf(mvbind(tarsus, back) ~ sex * hatchdate + (1 | p | fosternest) + (1 | q | dam)) +
set_rescor(TRUE),
chains = 2, cores = 2,
seed = 15,
file = "fits/b17.02")
```
It takes just a fraction of a second. Once again, we're ready to work with `b17.2`.
```{r}
b17.2$formula
```
And if you'd like to remind yourself what the name of that external file was or what folder you saved it in, you can extract it from the `brm()` fit object.
```{r}
b17.2$file
```
Also, see [Gavin Simpson](https://fromthebottomoftheheap.net/)'s blog post, [*A better way of saving and loading objects in R*](https://www.fromthebottomoftheheap.net/2012/04/01/saving-and-loading-r-objects/), for a discussion on the distinction between `.rda` and `.rds` files.
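If the distinction is still fuzzy, here's the short version as a quick sketch (not evaluated here; the file names are just placeholders): `save()`/`load()` store one or more objects along with their names, whereas `saveRDS()`/`readRDS()` store a single unnamed object that you assign to whatever name you like.
```{r, eval = F}
# .rda via save()/load(): the object silently comes back under its original name
save(b17.2, file = "fits/some_fit.rda")
load("fits/some_fit.rda")  # an object named b17.2 reappears in the workspace

# .rds via saveRDS()/readRDS(): you pick the name at read time
saveRDS(b17.2, file = "fits/some_fit.rds")
my_fit <- readRDS("fits/some_fit.rds")
```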
## Build your models slowly
The model from Bürkner's vignette, `b17.1`, was no joke. If you wanted to be verbose about it, it was a multilevel, multivariate, multivariable model. It had a cross-classified multilevel structure, two predictors (for each criterion), and two criteria. Not only is that a lot to keep track of, there are a whole lot of places for things to go wrong.
Even if that was the final model I was interested in as a scientist, I still wouldn't start with it. I'd build up incrementally, just to make sure nothing looked fishy. One place to start would be a simple intercepts-only model.
```{r b17.0, message = F, results = "hide"}
b17.0 <-
brm(data = BTdata,
family = gaussian,
bf(mvbind(tarsus, back) ~ 1) + set_rescor(TRUE),
chains = 2, cores = 2,
file = "fits/b17.00")
```
```{r, message = F, fig.width = 6.5, fig.height = 6}
plot(b17.0, widths = c(2, 3))
print(b17.0)
```
If the chains look good and the summary statistics are about what I'd expect, I'm on good footing to keep building up to the model I really care about. The results from this model, for example, suggest that both criteria were standardized (i.e., intercepts at 0 and $\sigma$'s at 1). If that wasn't what I intended, I'd rather catch it here than spend several minutes fitting the more complicated `b17.1` model, the parameters for which are sufficiently complicated that I may have had trouble telling what scale the data were on.
Note, this is not the same as $p$-hacking [@simmonsFalsepositivePsychologyUndisclosed2011] or wandering aimlessly down the garden of forking paths [@gelmanGardenForkingPaths2013]. We are not chasing the flashiest model to put in a paper. Rather, this is just good pragmatic data science. If you start off with a theoretically-justified but complicated model and run into computation problems or produce odd-looking estimates, it won't be clear where things went awry. When you build up, step by step, it's easier to catch data cleaning failures, coding goofs and the like.
So, when I'm working on a project, I fit one or a few simplified models before fitting my complicated model of theoretical interest. This is especially the case when I'm working with model types that are new to me or that I haven't worked with in a while. I document each step in my R Notebook files and I save the fit objects for each in external files. I have caught surprises this way. Hopefully this will help you catch your mistakes, too.
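If you'd like one more rung on the ladder between the intercepts-only `b17.0` and the full `b17.1`, a natural intermediate step (a sketch I'm adding here, not a model from the text, with a hypothetical name and `file` path) is to add the predictors while still leaving out the multilevel structure.
```{r, eval = F}
# a hypothetical intermediate model: predictors in, random effects still out
b17.0b <-
  brm(data = BTdata,
      family = gaussian,
      bf(mvbind(tarsus, back) ~ sex + hatchdate) + set_rescor(TRUE),
      chains = 2, cores = 2,
      seed = 17,
      file = "fits/b17.00b")
```
If the coefficients from a model like that look sensible, adding the `(1 | p | fosternest)` and `(1 | q | dam)` terms is the only remaining step, which makes it much easier to attribute any new weirdness to the multilevel structure itself.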
## Look at your data
Relatedly, and perhaps even as a precursor, you should always plot your data before fitting a model. There were plenty of examples of this in the text, but it's worth making explicit. Simple summary statistics are great, but they're not enough. For an entertaining exposition, check out Matejka and Fitzmaurice's [-@matejkaSameStats2017] [*Same stats, different graphs: Generating datasets with varied appearance and identical statistics through simulated annealing*](https://www.autodeskresearch.com/publications/samestats). Though it might make for a great cocktail party story, I'd hate to pollute the scientific literature with a linear model based on a set of dinosaur-shaped data.
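With the `BTdata` from above, a quick look might be as simple as this sketch (not evaluated here; any **ggplot2** plot that puts eyes on the raw values will do).
```{r, eval = F}
library(ggplot2)

# eyeball one criterion against one predictor, separately by sex, before modeling
ggplot(BTdata, aes(x = hatchdate, y = tarsus)) +
  geom_point(alpha = 1/2) +
  facet_wrap(~ sex)
```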
## Consider using the `0 + Intercept` syntax
We covered this a little in the last couple chapters (e.g., [Section 15.4.7][Fit the Bayesian muiltilevel alternative.], [Section 16.4.3][~~Lynx lessons~~ Bonus: Practice with the autoregressive model.]), but it's easy to miss. If your real-world model has predictors (i.e., isn't an intercept-only model), it's important to keep track of how you have centered those predictors. When you specify a prior for a **brms** `Intercept` (i.e., an intercept resulting from the `y ~ x` or `y ~ 1 + x` style of syntax), <u>that prior is applied under the presumption all the predictors are mean centered</u>. In the *Population-level ('fixed') effects* subsection of the *`set_prior`* section of the [**brms** reference manual](https://cran.r-project.org/package=brms/brms.pdf) [@brms2022RM], we read:
> Note that technically, this prior is set on an intercept that results when internally centering all population-level predictors around zero to improve sampling efficiency. On this centered intercept, specifying a prior is actually much easier and intuitive than on the original intercept, since the former represents the expected response value when all predictors are at their means. To treat the intercept as an ordinary population-level effect and avoid the centering parameterization, use `0 + Intercept` on the right-hand side of the model formula.
We get a little more information from the *Parameterization of the population-level intercept* subsection of the *`brmsformula`* section:
> This behavior can be avoided by using the reserved (and internally generated) variable Intercept. Instead of `y ~ x`, you may write `y ~ 0 + Intercept + x`. This way, priors can be defined on the real intercept, directly. In addition, the intercept is just treated as an ordinary population-level effect and thus priors defined on b will also apply to it. Note that this parameterization may be less efficient than the default parameterization discussed above.
We didn't bother using the `0 + Intercept` syntax for most of our models because McElreath chose to emphasize mean-centered and standardized predictors in the second edition of his text. But this will not always be the case. Sometimes you might have a good reason not to center your predictors. In those cases, the `0 + Intercept` syntax can make a difference. Regardless, do set your `Intercept` priors with care.
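To make the contrast concrete, here's a minimal sketch (hypothetical data `d`, outcome `y`, and predictor `x`; not evaluated here) of the two parameterizations side by side, each with a prior on its intercept.
```{r, eval = F}
# `d`, `y`, and `x` are hypothetical stand-ins for your data and variables

# default parameterization: the Intercept prior is interpreted as if
# the predictors were mean centered
fit_centered <-
  brm(data = d, family = gaussian,
      y ~ 1 + x,
      prior = prior(normal(0, 10), class = Intercept))

# 0 + Intercept parameterization: the prior applies to the intercept on the
# original scale of the (possibly un-centered) predictors
fit_uncentered <-
  brm(data = d, family = gaussian,
      y ~ 0 + Intercept + x,
      prior = prior(normal(0, 10), class = b, coef = Intercept))
```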
## Annotate your workflow
In a typical model-fitting file, I'll load my data, perhaps transform the data a bit, fit several models, and examine the output of each with trace plots, model summaries, information criteria, and the like. In my early days, I just figured each of these steps were self-explanatory.
Nope.
["In every project you have at least one other collaborator; future-you. You don't want future-you to curse past-you."](https://www.r-bloggers.com/your-most-valuable-collaborator-future-you/)
My experience was that even a couple weeks between taking a break from a project and restarting it was enough time to make my earlier files confusing. **And they were my files**. I now start each R Notebook document with an introductory paragraph or two explaining exactly what the purpose of the file is. I separate my major sections by [headers and subheaders](https://rmarkdown.rstudio.com/authoring_basics.html). My working R Notebook files are peppered with bullets, sentences, and full on paragraphs between code blocks.
## Annotate your code
This idea is implicit in McElreath's text, but it's easy to miss the message. I know I did, at first. I find this is especially important for data wrangling. I'm a **tidyverse** guy and, for me, the big-money verbs like `mutate()`, `pivot_longer()`, `select()`, `filter()`, `group_by()`, and `summarise()` take care of the bulk of my data wrangling. But every once in a while I need to do something less common, like with `str_extract()` or `case_when()`. When I end up using a new or less familiar function, I typically annotate right in the code and even sometimes leave a hyperlink to some [R-bloggers](https://www.r-bloggers.com) post or [stackoverflow](https://stackoverflow.com) question that explained how to use it.
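Here's a toy sketch of what that kind of annotation might look like (the data and the regular expression are made up for the example).
```{r, eval = F}
library(tidyverse)

# str_extract() comes up rarely enough for me that I leave myself a note;
# see https://stringr.tidyverse.org/reference/str_extract.html
tibble(id = c("sub-01_run-02", "sub-03_run-01")) %>% 
  # the lookbehind "(?<=sub-)" grabs only the digits following "sub-"
  mutate(subject = str_extract(id, "(?<=sub-)\\d+"))
```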
## Break up your workflow
I've also learned to break up my projects into multiple R Notebook files. If you have a small project for which you just want a quick and dirty plot, fine, do it all in one file. My typical scientific projects have:
* a primary data cleaning file;
* a file with basic descriptive statistics and the like;
* at least one primary analysis file;
* possible secondary and tertiary analysis files;
* a file or two for my major figures; and
* a file explaining and depicting my priors, often accompanied by my posteriors, for comparison.
Putting all that information in one R Notebook file would be overwhelming. Your workflow might well look different, but hopefully you get the idea. You don't want working files with thousands of lines of code.
To get a sense of what this can look like in practice, you might follow this link, [https://osf.io/vekpf/](https://osf.io/vekpf/), to the OSF project site for one of my conference presentations [@kurzEvenTreatmentFunctional2019] from the Before Times. In the [wiki](https://osf.io/vekpf/wiki/home/), I explained the contents of 10 files supporting the analyses we presented at the conference. Each of those 10 `.html` files has its origin in its own `.Rmd` file.
Mainly to keep Jenny Bryan from [setting my computer on fire](https://www.tidyverse.org/articles/2017/12/workflow-vs-script/), I'm also in the habit of organizing interconnected project files with help from [RStudio Projects](https://support.rstudio.com/hc/en-us/articles/200526207-Using-Projects). You can learn more about these from [Chapter 8](https://r4ds.had.co.nz/workflow-projects.html) in *R4DS* [@grolemundDataScience2017]. They might seem trivial, at first, but I've come to value the simple ways in which RStudio Projects help streamline my workflow.
## Code in public
If you would like to improve the code you write for data-wrangling, modeling, and/or visualizing, code in public. Yes, it can be intimidating. Yes, you will probably make mistakes. If you're lucky, [others will point them out](https://github.com/ASKurz/Statistical_Rethinking_with_brms_ggplot2_and_the_tidyverse/pulls?q=is%3Apr+is%3Aclosed) and you will revise and learn and grow. `r emo::ji("seedling")`
You can do this on any number of mediums, such as
* GitHub (e.g., [here](https://github.com/paul-buerkner/brms)),
* personal blogs (e.g., [here](https://dilshersinghdhillon.netlify.com/post/multiple_comparison/)),
* the Open Science Framework (e.g., [here](https://osf.io/dpzcb/)),
* online books (e.g., [here](https://learningstatisticswithr.com/)),
* full-blown YouTube lectures (e.g., [here](https://youtu.be/sgw4cu8hrZM)), or even with
* brief Twitter GIFs (e.g., [here](https://twitter.com/GuyProchilo/status/1207766623840604174)).
I've found that just the possibility that others might look at my code makes it more likely I'll slow down, annotate, and try to use more consistent formatting. Hopefully it will benefit you, too.
## Check out social media
If you're not on it, consider joining academic Twitter[^10] (see the related blog posts by [Sarah Mojarad](https://medium.com/@smojarad/a-beginners-guide-to-academic-twitter-f483dae86597) and [Dan Quintana](https://www.dsquintana.blog/twitter-for-scientists/)). The word on the street is correct. Twitter can be a rage-fueled [dumpster fire](https://media.giphy.com/media/l0IynvPneUpb7SnBe/giphy.gif). But if you're selective about who you follow, it's a great place to learn from and connect with your academic heroes. If you're a fan of this project, here's a list of some of the people you might want to follow:
* [Andrew Althouse](https://twitter.com/ADAlthousePhD)
* [Vincent Arel-Bundock](https://twitter.com/VincentAB)
* [Mara Averick](https://twitter.com/dataandme)
* [Mattan S. Ben-Shachar](https://twitter.com/mattansb)
* [Michael Betancourt](https://twitter.com/betanalpha)
* [Jenny Bryan](https://twitter.com/JennyBryan)
* [Paul Bürkner](https://twitter.com/paulbuerkner)
* [Nathaniel Haines](https://twitter.com/Nate__Haines)
* [Frank Harrell](https://twitter.com/f2harrell)
* [Andrew Heiss](https://twitter.com/andrewheiss)
* [Alison Presmanes Hill](https://twitter.com/apreshill)
* [Matthew Kay](https://twitter.com/mjskay)
* [Tristan Mahr](https://twitter.com/tjmahr)
* [Richard McElreath](https://twitter.com/rlmcelreath)
* [A. Jordan Nafa](https://twitter.com/ajordannafa)
* [Danielle Navarro](https://twitter.com/djnavarro)
* [Chelsea Parlett-Pelleriti](https://twitter.com/ChelseaParlett)
* [Roger Peng](https://twitter.com/rdpeng)
* [David Robinson](https://twitter.com/drob)
* [Julia Silge](https://twitter.com/juliasilge)
* [Dan Simpson](https://twitter.com/dan_p_simpson)
* [Gavin Simpson](https://twitter.com/ucfagls)
* [Aki Vehtari](https://twitter.com/avehtari)
* [Matti Vuorre](https://twitter.com/vuorre)
* [Hadley Wickham](https://twitter.com/hadleywickham)
* [Brenton Wiernik](https://twitter.com/bmwiernik)
* [Claus Wilke](https://twitter.com/ClausWilke)
* [Donald Williams](https://twitter.com/wdonald_1985)
* [Yihui Xie](https://twitter.com/xieyihui)
[I'm on twitter](https://twitter.com/SolomonKurz), too.
There are also some great blogs out there. Of primary interest, McElreath blogs once in a blue moon at [https://elevanth.org/blog/](https://elevanth.org/blog/). Andrew Gelman regularly blogs at [https://statmodeling.stat.columbia.edu/](https://statmodeling.stat.columbia.edu/). Many of Gelman's posts are great, but don't miss the action in the comments sections. Every so often, I post on **brms**-related content at [https://solomonkurz.netlify.app/blog/](https://solomonkurz.netlify.app/blog/).
Also, do check out the [Stan Forums](https://discourse.mc-stan.org). They have a special [**brms** tag](https://discourse.mc-stan.org/c/interfaces/brms) there, under which you can find all kinds of hot **brms** talk.
But if you're new to the world of asking for help with your code online, you might acquaint yourself with the notion of a [minimally reproducible example](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example). In short, a good minimally reproducible example helps others help you. If you fail to do this, prepare for some snark.
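One convenient way to put a minimally reproducible example together is the [**reprex**](https://reprex.tidyverse.org/) package. Here's a sketch (the toy code inside the braces is just a placeholder for your own small, self-contained snippet).
```{r, eval = F}
# render a small, self-contained snippet (code plus output) in a
# format ready to paste into a forum post or GitHub issue
reprex::reprex({
  set.seed(17)
  x <- rnorm(10)
  mean(x)
})
```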
## Extra *Rethinking*-related resources
You may have noticed I don't offer solutions to the homework problems. Happily, [David Salazar](https://david-salazar.github.io/) has a blog series on the homework problems highlighting a **tidyverse**- and **rethinking**-oriented workflow; find his first post [here](https://david-salazar.github.io/2020/04/19/statistical-rethinking-week-1/). [Ania Kawiecki](https://akawiecki.github.io/) made a beautiful website solving the homework problems using the **tidyverse** and [**INLA**](https://www.r-inla.org/home) [@rue2009approximate], which you can find [here](https://akawiecki.github.io/statistical_rethinking_inla/). [Vincent Arel-Bundock](https://arelbundock.com/) has a nice website where he worked through the primary material in the book chapters using **rstan** and the **tidyverse**, which you can find [here](https://vincentarelbundock.github.io/rethinking2). Also, [Andrew Heiss](https://twitter.com/andrewheiss) has been coding through some of the examples in McElreath's [2022 lecture series](https://youtube.com/playlist?list=PLDcUM9US4XdMROZ57-OIRtIK0aOynbgZN), which you can find [here](https://bayesf22-notebook.classes.andrewheiss.com/rethinking/03-video.html).
If you find other resources along these lines, please tell me about them in a [GitHub issue](https://github.com/ASKurz/Statistical_Rethinking_with_brms_ggplot2_and_the_tidyverse_2_ed/issues).
## *But, like, how do I do this for real?*
McElreath's text and my ebook are designed to teach you how to *get started doing* applied Bayesian statistics. Neither does a great job teaching you *how to write them up* for a professional presentation, like a peer-reviewed journal article. What I will do, however, is give you some places to look. Just before releasing the 0.2.0 version of this ebook, I asked the good people on twitter to share their favorite examples of applied Bayesian statistics in the scientific literature.
```{r, echo = F, fig.align = "center"}
tweetrmd::include_tweet("https://twitter.com/SolomonKurz/status/1371547835737444355")
```
At first, folks were a little shy, but eventually the crowd came through in spades! For a full list, feel free to peruse my tweet thread. For your convenience, here's a list of a good bunch of the works people shared[^11]:
* @allen2020outside [, PDF [link](https://scholarworks.boisestate.edu/cgi/viewcontent.cgi?article=1197&context=polsci_facpubs)]
* @amlie2020risk [, PDF [link](https://www.ahajournals.org/doi/epub/10.1161/STROKEAHA.119.027225)]
* @casillas2021interlingual [, PDF [link](https://www.mdpi.com/2226-471X/6/1/9/pdf)]
* @davis2020genetic [, PDF [link](https://elifesciences.org/articles/50901.pdf)]
* @girard2021reconsidering [, PDF [link](https://doi.org/10.1007/s42761-020-00030-w)]
* @haines2018outcome [, PDF [link](http://haines-lab.com/pdf/Haines_Cog_Sci_2018.pdf)]
* @kale2020visual [, PDF [link](https://mucollective.northwestern.edu/files/2020%20-%20Kale,%20Visual%20Reasoning%20Strategies%20for%20Effect%20Size%20Judgements.pdf)]
* @nogueira2018thrombectomy [, PDF [link](https://www.nejm.org/doi/full/10.1056/nejmoa1706442)]
* @ross2020racial [, PDF [link](https://journals.sagepub.com/doi/pdf/10.1177/1948550620916071)]
* @silbiger2019comparative [, PDF [link](https://par.nsf.gov/servlets/purl/10148484)]
At a quick glance, these papers cover a reasonably broad range of topics. I hope they give you a sense of how to write up your work.
## Parting wisdom
Okay, that's enough from me. Let's start wrapping this project up with some McElreath.
> There is an aspect of science that you do personally control: openness. [Pre-plan your research together with the statistical analysis](https://mindhacks.com/2017/11/09/open-science-essentials-pre-registration/). Doing so will improve both the research design and the statistics. Document it in the form of a mock analysis that you would not be ashamed to share with a colleague. Register it publicly, perhaps in a simple repository, like [Github](https://github.com) or any other. But [your webpage](https://bookdown.org/yihui/blogdown/) will do just fine, as well. Then collect the data. Then analyze the data as planned. If you must change the plan, that's fine. But document the changes and justify them. Provide all of the data and scripts necessary to repeat your analysis. Do not provide scripts and data ["on request,"](https://twitter.com/robinnkok/status/1213011238294347776) but rather put them online so reviewers of your paper can access them without your interaction. There are of course cases in which full data cannot be released, due to privacy concerns. But the bulk of science is not of that sort.
>
> The data and its analysis are the scientific product. The paper is just an advertisement. If you do your honest best to design, conduct, and document your research, so that others can build directly upon it, you can make a difference. (p. 555)
Toward that end, also check out the [OSF](https://osf.io) and their YouTube channel, [here](https://www.youtube.com/channel/UCGPlVf8FsQ23BehDLFrQa-g). [Katie Corker](https://twitter.com/katiecorker) gets the last words: ["Open science is stronger because we're doing this together."](https://cos.io/blog/open-science-is-a-behavior/)
## Session info {-}
```{r}
sessionInfo()
```
```{r, echo = F}
rm(BTdata, b17.1, b17.2, b17.0)
```
```{r, echo = F, warning = F, message = F, results = "hide"}
# pacman::p_unload(pacman::p_loaded(), character.only = TRUE)
```
[^10]: Since Elon Musk took over twitter in late 2022, academic twitter has become an odd place. Many have migrated to other platforms, and it's not clear if those initial migrations will be permanent, or to what extent we'll see folks return. But at the time of this writing (January 2023), I'm still on twitter and I hope to see you there, too.
[^11]: This list is not exhaustive of the links people shared on twitter. My basic inclusion criteria were that they were (a) peer-reviewed articles (b) with a substantive focus that I could (c) easily find a PDF link for. When someone shared several of their works, I simply pulled the first one that fulfilled those criteria. Also, inclusion on this list is **not a personal endorsement**. I haven't read most of them. You get what you pay for, friends.