<!--
%\VignetteEngine{knitr}
%\VignetteIndexEntry{Window functions}
-->
```{r, echo = FALSE, message = FALSE}
library(dplyr)
library(Lahman)
knitr::opts_chunk$set(tidy = FALSE, comment = "#>")
```
# Window functions and grouped mutate/filter
A __window function__ is a variation on an aggregation function. Where an aggregation function, like `sum()` or `mean()`, takes n inputs and returns a single value, a window function returns n values. The output of a window function depends on all its input values, so window functions don't include functions that work element-wise, like `+` or `round()`. Window functions include variations on aggregate functions, like `cumsum()` and `cummean()`, functions for ranking and ordering, like `rank()`, and functions for taking offsets, like `lead()` and `lag()`.
Window functions are used in conjunction with `mutate` and `filter` to solve a wide range of problems, some of which are shown below:
```{r, results = "hide"}
batting <- select(tbl_df(Batting), playerID, yearID, teamID, G, AB:H)
batting <- arrange(batting, playerID, yearID, teamID)
players <- group_by(batting, playerID)
# For each player, find the two years with most hits
filter(players, min_rank(desc(H)) <= 2 & H > 0)
# Within each player, rank each year by the number of games played
mutate(players, G_rank = min_rank(G))
# For each player, find every year that was better than the previous year
filter(players, G > lag(G))
# For each player, compute avg change in games played per year
mutate(players, G_change = (G - lag(G)) / (yearID - lag(yearID)))
# For each player, find all where they played more games than average
filter(players, G > mean(G))
# For each player, compute a z score based on number of games played
mutate(players, G_z = (G - mean(G)) / sd(G))
```
This vignette is broken down into two sections. First you'll learn about the five families of window functions in R, and what you can use them for. If you're only working with local data sources, you can stop there. Otherwise, continue on to learn about window functions in SQL. They are relatively new, but are supported by Postgres, Amazon's Redshift and Google's BigQuery. The window functions themselves are basically the same (modulo a few name conflicts), but their specification is a little different. I'll briefly review how they work, and then show how dplyr translates their R equivalents to SQL.
Before reading this vignette, you should be familiar with `mutate()` and `filter()`. If you want to use window functions with SQL databases, you should also be familiar with the basics of dplyr's SQL translation.
## Types of window functions
There are five main families of window functions. Two families are unrelated to aggregation functions:
* Ranking and ordering functions: `row_number()`, `min_rank()` (`RANK` in SQL),
`dense_rank()`, `cume_dist()`, `percent_rank()`, and `ntile()`. These
functions all take a vector to order by, and return various types of ranks.
* Offsets `lead()` and `lag()` allow you to access the previous and next
values in a vector, making it easy to compute differences and trends.
The other three families are variations on familiar aggregate functions:
* Cumulative aggregates: `cumsum()`, `cummin()`, `cummax()` (from base R),
and `cumall()`, `cumany()`, and `cummean()` (from dplyr).
* Rolling aggregates operate in a fixed width window. You won't find them in
base R or in dplyr, but there are many implementations in
other packages, such as
[RcppRoll](http://cran.r-project.org/web/packages/RcppRoll).
* Recycled aggregates, where an aggregate is repeated to match the length
of the input. These are not needed in R because vector recycling
automatically recycles aggregates where needed. They are important in SQL,
because the presence of an aggregation function usually tells the database
to return only one row per group.
Each family is described in more detail below, focussing on the general goals and how to use them with dplyr. For more details, refer to the individual function documentation.
### Ranking functions
The ranking functions are variations on a theme, differing in how they handle ties:
```{r}
x <- c(1, 1, 2, 2, 2)
row_number(x)
min_rank(x)
dense_rank(x)
```
If you're familiar with R, you may recognise that `row_number()` and `min_rank()` can be computed with the base `rank()` function and various values of the `ties.method` argument. These functions are provided to save a little typing, and to make it easier to convert between R and SQL.
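For example, reusing the `x` defined above, the base equivalents are:
```{r}
# Base R equivalents of row_number() and min_rank()
rank(x, ties.method = "first")
rank(x, ties.method = "min")
```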
Two other ranking functions return numbers between 0 and 1. `percent_rank()` gives the percentage of the rank; `cume_dist()` gives the proportion of values less than or equal to the current value.
```{r}
cume_dist(x)
percent_rank(x)
```
These are useful if you want to select, say, the top 10% of records within each group. For example:
```{r, results = 'hide'}
# Selects best two years
filter(players, min_rank(desc(G)) <= 2)
# Selects best 10% of years
filter(players, cume_dist(desc(G)) < 0.1)
```
Finally, `ntile()` divides the data up into `n` evenly sized buckets. It's a coarse ranking, and it can be used with `mutate()` to divide the data into buckets for further summary. For example, we could use `ntile()` to divide the players within a team into four ranked groups, and calculate the average number of games within each group.
```{r}
by_team_player <- group_by(batting, teamID, playerID)
by_team <- summarise(by_team_player, G = sum(G))
by_team_quartile <- group_by(by_team, quartile = ntile(G, 4))
summarise(by_team_quartile, mean(G))
```
All ranking functions rank from lowest to highest so that small input values get small ranks. Use `desc()` to rank from highest to lowest.
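For instance, with the `x` from above:
```{r}
min_rank(x)        # smallest value gets rank 1
min_rank(desc(x))  # largest value gets rank 1
```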
### Lead and lag
`lead()` and `lag()` produce offset versions of an input vector that are either ahead of or behind the original vector.
```{r}
x <- 1:5
lead(x)
lag(x)
```
You can use them to:
* Compute differences or percent changes.
```{r, results = "hide"}
# Compute the relative change in games played
mutate(players, G_delta = G - lag(G))
```
Using `lag()` is more convenient than `diff()` because for `n` inputs
`diff()` returns `n - 1` outputs; see the short comparison after this list.
* Find out when a value changes.
```{r, results = "hide"}
# Find when a player changed teams
filter(players, teamID != lag(teamID))
```
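To see the difference in output length mentioned in the first bullet, here is a minimal comparison on a plain vector:
```{r}
x <- c(1, 4, 9, 16)
x - lag(x)  # same length as x, with an NA in the first position
diff(x)     # one element shorter than x
```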
`lead()` and `lag()` have an optional argument `order_by`. If set, instead of using the row order to determine which value comes before another, they will use another variable. This is important if you have not already sorted the data, or you want to sort one way and lag another.
Here's a simple example of what happens if you don't specify `order_by` when you need it:
```{r}
df <- data.frame(year = 2000:2005, value = (0:5) ^ 2)
scrambled <- df[sample(nrow(df)), ]
wrong <- mutate(scrambled, running = cumsum(value))
arrange(wrong, year)
right <- mutate(scrambled, running = order_by(year, cumsum(value)))
arrange(right, year)
```
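`lag()` itself has the same `order_by` argument; a minimal sketch using the `scrambled` data from above:
```{r}
# The previous value is determined by year, not by row order
mutate(scrambled, prev_value = lag(value, order_by = year))
```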
### Cumulative aggregates
Base R provides cumulative sum (`cumsum()`), cumulative min (`cummin()`) and cumulative max (`cummax()`). (It also provides `cumprod()` but that is rarely useful). Other common accumulating functions are `cumany()` and `cumall()`, cumulative versions of `||` and `&&`, and `cummean()`, a cumulative mean. These are not included in base R, but efficient versions are provided by `dplyr`.
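To get a feel for their outputs, here's a small demonstration on a plain vector:
```{r}
x <- c(1, 3, 2, 5, 4)
cumsum(x)
cummean(x)
cumall(x < 4)   # TRUE until the first value that fails the condition
cumany(x >= 5)  # TRUE from the first value that meets the condition
```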
`cumany()` and `cumall()` are useful for selecting all rows up to, or all rows after, a condition is true for the first (or last) time. For example, we can use `cumany()` to find all records for a player after they played a year with 150 games:
```{r, results = "hide"}
filter(players, cumany(G > 150))
```
As with `lead()` and `lag()`, you may want to control the order in which the accumulation occurs. None of the built-in functions have an `order_by` argument, so `dplyr` provides a helper: `order_by()`. You give it the variable you want to order by, and then the call to the window function:
```{r}
x <- 1:10
y <- 10:1
order_by(y, cumsum(x))
```
This function uses a bit of non-standard evaluation, so I wouldn't recommend using it inside another function; use the simpler but less concise `with_order()` instead.
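For reference, the `with_order()` equivalent of the previous chunk looks like this (reusing `x` and `y`):
```{r}
# Same result as order_by(y, cumsum(x))
with_order(y, cumsum, x)
```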
### Recycled aggregates
R's vector recycling makes it easy to select values that are higher or lower than a summary. I call this a recycled aggregate because the value of the aggregate is recycled to be the same length as the original vector. Recycled aggregates are useful if you want to find all records greater than the mean or less than the median:
```{r, results = "hide"}
filter(players, G > mean(G))
filter(players, G < median(G))
```
While most SQL databases don't have an equivalent of `median()` or `quantile()`, when filtering you can achieve the same effect with `ntile()`. For example, `x > median(x)` is equivalent to `ntile(x, 2) == 2`; `x > quantile(x, 0.75)` is equivalent to `ntile(x, 100) > 75` or `ntile(x, 4) > 3`.
```{r, results = "hide"}
filter(players, ntile(G, 2) == 2)
```
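The upper-quartile case from the equivalence above works the same way:
```{r, results = "hide"}
# Roughly equivalent to G > quantile(G, 0.75)
filter(players, ntile(G, 4) > 3)
```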
You can also use this idea to select the records with the highest (`x == max(x)`) or lowest value (`x == min(x)`) for a field, but the ranking functions give you more control over ties, and allow you to select any number of records.
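For instance, both of the following find each player's best season by games played, but the ranking version always returns exactly one row per player, even when there are ties:
```{r, results = "hide"}
filter(players, G == max(G))
filter(players, row_number(desc(G)) == 1)
```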
Recycled aggregates are also useful in conjunction with `mutate()`. For example, with the batting data, we could compute the "career year", the number of years a player has played since they entered the league:
```{r}
mutate(players, career_year = yearID - min(yearID) + 1)
```
Or, as in the introductory example, we could compute a z-score:
```{r}
mutate(players, G_z = (G - mean(G)) / sd(G))
```
## Window functions in SQL
Window functions have a slightly different flavour in SQL. The syntax is a little different, and the cumulative, rolling and recycled aggregate functions are all based on the simple aggregate function. The goal in this section is not to tell you everything you need to know about window functions in SQL, but to remind you of the basics and show you how dplyr translates your R expressions into SQL.
### Structure of a window function in SQL
In SQL, window functions have the form `[expression] OVER ([partition clause] [order clause] [frame_clause])`:
* The __expression__ is a combination of variable names and window functions.
Support for window functions varies from database to database, but most
support the ranking functions, `lead`, `lag`, `nth`, `first`,
`last`, `count`, `min`, `max`, `sum`, `avg` and `stddev`. dplyr
generates this from the R expression in your `mutate` or `filter` call.
* The __partition clause__ specifies how the window function is broken down
over groups. It plays an analogous role to `GROUP BY` for aggregate functions,
and `group_by()` in dplyr. It is possible for different window functions to
be partitioned into different groups, but not all databases support it, and
neither does dplyr.
* The __order clause__ controls the ordering (when it makes a difference).
This is important for the ranking functions since it specifies which
variables to rank by, but it's also needed for cumulative functions and lead.
Whenever you're thinking about before and after in SQL, you must always tell
it which variable defines the order. In dplyr you do this with `arrange()`.
If the order clause is missing when needed, some databases fail with an
error message while others return non-deterministic results.
* The __frame clause__ defines which rows, or __frame__, are passed
to the window function, describing which rows (relative to the current row)
should be included. The frame clause provides two offsets which determine
the start and end of the frame. There are three special values: -Inf means
to include all preceding rows (in SQL, "unbounded preceding"), 0 means the
current row ("current row"), and Inf means all following rows ("unbounded
following"). The complete set of options is comprehensive, but fairly
confusing, and is summarised visually below.
![A visual summary of frame options](windows.png)
Of the many possible specifications, there are only three that are commonly
used. They select between aggregation variants:
* Recycled: `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`
* Cumulative: `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`
* Rolling: `ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING`
dplyr generates the frame clause based on whether you're using a recycled
aggregate or a cumulative aggregate.
It's easiest to understand these specifications by looking at a few examples. Simple examples just need the partition and order clauses:
* Rank each year within a player by number of hits:
`RANK() OVER (PARTITION BY playerID ORDER BY H DESC)`
* Compute change in number of games from one year to the next:
`G - LAG(G) OVER (PARTITION BY playerID ORDER BY yearID)`
Aggregate variants are more verbose because we also need to supply the frame clause:
* Running sum of G for each player: `SUM(G) OVER (PARTITION BY playerID ORDER BY yearID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)`
* Compute the career year: `yearID - MIN(yearID) OVER (PARTITION BY playerID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) + 1`
* Compute a rolling average of games played: `AVG(G) OVER (PARTITION BY playerID ORDER BY yearID ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING)`
You'll notice that window functions in SQL are more verbose than in R. This is because different window functions can have different partitions, and the frame specification is more general than the two aggregate variants (recycled and cumulative) provided by dplyr. dplyr makes a tradeoff: you can't access rarely used window function capabilities (unless you write raw SQL), but in return, common operations are much more succinct.
### Translating dplyr to SQL
To see how individual window functions are translated to SQL, we can use `translate_sql()` with the argument `window = TRUE`.
```{r, message = FALSE}
if (has_lahman("postgres")) {
players_db <- group_by(tbl(lahman_postgres(), "Batting"), playerID)
print(translate_sql(mean(G), tbl = players_db, window = TRUE))
print(translate_sql(cummean(G), tbl = players_db, window = TRUE))
print(translate_sql(rank(G), tbl = players_db, window = TRUE))
print(translate_sql(ntile(G, 2), tbl = players_db, window = TRUE))
print(translate_sql(lag(G), tbl = players_db, window = TRUE))
}
```
If the tbl has been arranged previously, then that ordering will be used for the order clause:
```{r, message = FALSE}
if (has_lahman("postgres")) {
players_by_year <- arrange(players_db, yearID)
print(translate_sql(cummean(G), tbl = players_by_year, window = TRUE))
print(translate_sql(rank(), tbl = players_by_year, window = TRUE))
print(translate_sql(lag(G), tbl = players_by_year, window = TRUE))
}
```
There are some challenges when translating window functions between R and SQL, because dplyr tries to keep the window functions as similar as possible to both the existing R analogues and to the SQL functions. This means that there are three ways to control the order clause depending on which window function you're using:
* For ranking functions, the ordering variable is the first argument: `rank(x)`,
`ntile(y, 2)`. If omitted or `NULL`, will use the default ordering associated
with the tbl (as set by `arrange()`).
* Accumulating aggregates only take a single argument (the vector to aggregate).
To control ordering, use `order_by()`.
* Aggregates implemented in dplyr (`lead`, `lag`, `nth_value`, `first_value`,
`last_value`) have an `order_by` argument. Supply it to override the
default ordering.
The three options are illustrated in the snippet below:
```{r, eval = FALSE}
mutate(players,
min_rank(yearID),
order_by(yearID, cumsum(G)),
lead(order_by = yearID, G)
)
```
Currently there is no way to order by multiple variables, except by setting the default ordering with `arrange()`. This will be added in a future release.
### Translating filters based on window functions
There are some restrictions on window functions in SQL that make their use with `WHERE` somewhat challenging. Take this simple example, where we want to find the year each player played the most games:
```{r, eval = FALSE}
filter(players, rank(G) == 1)
```
The following straightforward translation does not work because window functions are only allowed in `SELECT` and `ORDER BY`.
```
SELECT *
FROM Batting
WHERE rank() OVER (PARTITION BY "playerID" ORDER BY "G") = 1;
```
Computing the window function in `SELECT` and referring to it in `WHERE` or `HAVING` doesn't work either, because `WHERE` and `HAVING` are computed before windowing functions.
```
SELECT *, rank() OVER (PARTITION BY "playerID" ORDER BY "G") as rank
FROM Batting
WHERE rank = 1;
SELECT *, rank() OVER (PARTITION BY "playerID" ORDER BY "G") as rank
FROM Batting
HAVING rank = 1;
```
Instead, we must use a subquery:
```
SELECT *
FROM (
SELECT *, rank() OVER (PARTITION BY "playerID" ORDER BY "G") as rank
FROM Batting
) tmp
WHERE rank = 1;
```
And even that query is a slight simplification, because it also adds a rank column to the original columns. dplyr takes care of generating the full, verbose query, so you can focus on your data analysis challenges.
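In practice you just write the `filter()` call against a database-backed tbl and let dplyr build the subquery. As a sketch (not run, assuming the postgres-backed `players_db` from the translation section above), `explain()` prints the SQL dplyr will send:
```{r, eval = FALSE}
best_year <- filter(players_db, rank(G) == 1)
explain(best_year)
```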