# (PART) Regression modeling {.unnumbered}
```{r, include = FALSE}
source("_common.R")
```
# Linear regression with a single predictor {#model-slr}
::: {.chapterintro data-latex=""}
Linear regression is a very powerful statistical technique.
Many people have some familiarity with regression models just from reading the news, where straight lines are overlaid on scatterplots.
Linear models can be used for prediction or to evaluate whether there is a linear relationship between a numerical variable on the horizontal axis and the average of the numerical variable on the vertical axis.
:::
## Fitting a line, residuals, and correlation {#fit-line-res-cor}
When considering linear regression, it's helpful to think deeply about the line fitting process.
In this section, we define the form of a linear model, explore criteria for what makes a good fit, and introduce a new statistic called *correlation*.
### Fitting a line to data
Figure \@ref(fig:perfLinearModel) shows two variables whose relationship can be modeled perfectly with a straight line.
The equation for the line is $y = 5 + 64.96 x.$ Consider what a perfect linear relationship means: we know the exact value of $y$ just by knowing the value of $x.$ A perfect linear relationship is unrealistic in almost any natural process.
For example, if we took family income ($x$), this value would provide some useful information about how much financial support a college may offer a prospective student ($y$).
However, the prediction would be far from perfect, since other factors play a role in financial support beyond a family's finances.
```{r perfLinearModel, fig.cap = "Requests from twelve separate buyers were simultaneously placed with a trading company to purchase Target Corporation stock (ticker TGT, December 28th, 2018), and the total cost of the shares was reported. Because the cost is computed using a linear formula, the linear fit is perfect."}
target <- simulated_scatter %>%
filter(group == 4)
ggplot(target, aes(x = x, y = y)) +
geom_smooth(method = "lm") +
geom_point(size = 3) +
scale_y_continuous(labels = label_dollar(scale = 0.001, suffix = "K", accuracy = 1), breaks = c(0, 1000, 2000)) +
labs(
x = "Number of Target Corporation Stocks to Purchase",
y = "Total Cost of the Share Purchase"
)
```
Linear regression is the statistical method for fitting a line to data where the relationship between two variables, $x$ and $y,$ can be modeled by a straight line with some error:
$$ y = b_0 + b_1 \ x + e$$
The values $b_0$ and $b_1$ represent the model's intercept and slope, respectively, and the error is represented by $e$.
These values are calculated based on the data, i.e., they are sample statistics.
If the observed data is a random sample from a target population that we are interested in making inferences about, these values are considered to be point estimates for the population parameters $\beta_0$ and $\beta_1$.
We will discuss how to make inferences about parameters of a linear model based on sample statistics in Chapter \@ref(inf-model-slr).
::: {.pronunciation data-latex=""}
The Greek letter $\beta$ is pronounced *beta*, listen to the pronunciation [here](https://youtu.be/PStgY5AcEIw?t=7).
:::
When we use $x$ to predict $y,$ we usually call $x$ the **predictor** variable and we call $y$ the **outcome**.
We also often drop the $e$ term when writing the model since our main focus is on predicting the average outcome.
```{r include=FALSE}
terms_chp_7 <- c("predictor", "outcome")
```
It is rare for all of the data to fall perfectly on a straight line.
Instead, it's more common for data to appear as a *cloud of points*, such as those examples shown in Figure \@ref(fig:imperfLinearModel).
In each case, the data fall around a straight line, even if none of the observations fall exactly on the line.
The first plot shows a relatively strong downward linear trend, where the remaining variability in the data around the line is minor relative to the strength of the relationship between $x$ and $y.$ The second plot shows an upward trend that, while evident, is not as strong as the first.
The last plot shows a very weak downward trend in the data, so slight we can hardly notice it.
In each of these examples, we will have some uncertainty regarding our estimates of the model parameters, $\beta_0$ and $\beta_1.$ For instance, we might wonder, should we move the line up or down a little, or should we tilt it more or less?
As we move forward in this chapter, we will learn about criteria for line-fitting, and we will also learn about the uncertainty associated with estimates of model parameters.
```{r imperfLinearModel, fig.cap = "Three datasets where a linear model may be useful even though the data do not all fall exactly on the line.", out.width="100%", fig.asp = 0.4}
neg <- simulated_scatter %>% filter(group == 1)
pos <- simulated_scatter %>% filter(group == 2)
ran <- simulated_scatter %>% filter(group == 3)
p_neg <- ggplot(neg, aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
labs(x = NULL, y = NULL)
p_pos <- ggplot(pos, aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
labs(x = NULL, y = NULL)
p_ran <- ggplot(ran, aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
labs(x = NULL, y = NULL)
p_neg + p_pos + p_ran
```
There are also cases where fitting a straight line to the data, even if there is a clear relationship between the variables, is not helpful.
One such case is shown in Figure \@ref(fig:notGoodAtAllForALinearModel) where there is a very clear relationship between the variables even though the trend is not linear.
We discuss nonlinear trends in this chapter and the next, but details of fitting nonlinear models are saved for a later course.
```{r notGoodAtAllForALinearModel, fig.cap = "The best fitting line for these data is flat, which is not a useful way to describe the non-linear relationship. These data are from a physics experiment."}
bad <- simulated_scatter %>% filter(group == 5)
ggplot(bad, aes(x = x, y = y)) +
geom_point(size = 2) +
geom_smooth(method = "lm", se = FALSE) +
labs(
x = "Angle of Incline (Degrees)",
y = "Distance Traveled (m)"
)
```
### Using linear regression to predict possum head lengths
Brushtail possums are marsupials that live in Australia, and a photo of one is shown in Figure \@ref(fig:brushtail-possum).
Researchers captured 104 of these animals and took body measurements before releasing the animals back into the wild.
We consider two of these measurements: the total length of each possum, from head to tail, and the length of each possum's head.
```{r brushtail-possum, fig.cap = "(ref:brushtail-possum-cap)", out.width = "50%", fig.alt = "Photograph of a common brushtail possum of Australia."}
knitr::include_graphics("images/brushtail-possum/brushtail-possum.jpg")
```
(ref:brushtail-possum-cap) The common brushtail possum of Australia. Photo by Greg Schecter, [flic.kr/p/9BAFbR](https://flic.kr/p/9BAFbR), CC BY 2.0 license.
::: {.data data-latex=""}
The [`possum`](http://openintrostat.github.io/openintro/reference/possum.html) data can be found in the [**openintro**](http://openintrostat.github.io/openintro) R package.
:::
Figure \@ref(fig:scattHeadLTotalL) shows a scatterplot for the head length (mm) and total length (cm) of the possums.
Each point represents a single possum from the data.
The head and total length variables are associated: possums with an above average total length also tend to have above average head lengths.
While the relationship is not perfectly linear, it could be helpful to partially explain the connection between these variables with a straight line.
```{r scattHeadLTotalL, fig.cap = "A scatterplot showing head length against total length for 104 brushtail possums. A point representing a possum with head length 86.7 mm and total length 84 cm is highlighted."}
ggplot(possum, aes(x = total_l, y = head_l)) +
geom_point(alpha = 0.7, size = 2) +
labs(
x = "Total Length (cm)",
y = "Head Length (mm)"
) +
geom_point(data = tibble(x = 84, y = 86.7), aes(x = x, y = y), color = IMSCOL["red", "full"], size = 5, shape = "circle open", stroke = 2)
```
We want to describe the relationship between the head length and total length variables in the possum dataset using a line.
In this example, we will use the total length as the predictor variable, $x,$ to predict a possum's head length, $y.$ We could fit the linear relationship by eye, as in Figure \@ref(fig:scattHeadLTotalLLine).
```{r scattHeadLTotalLLine, fig.cap = "A reasonable linear model was fit to represent the relationship between head length and total length."}
ggplot(possum, aes(x = total_l, y = head_l)) +
geom_point(alpha = 0.7, size = 2) +
labs(
x = "Total Length (cm)",
y = "Head Length (mm)"
) +
geom_smooth(method = "lm", se = FALSE)
```
The equation for this line is
$$\hat{y} = 41 + 0.59x$$
A "hat" on $y$ is used to signify that this is an estimate.
We can use this line to discuss properties of possums.
For instance, the equation predicts a possum with a total length of 80 cm will have a head length of
$$\hat{y} = 41 + 0.59 \times 80 = 88.2$$
The estimate may be viewed as an average: the equation predicts that possums with a total length of 80 cm will have an average head length of 88.2 mm.
Absent further information about an 80 cm possum, the prediction for head length that uses the average is a reasonable estimate.
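If you are following along in R, a minimal sketch of this prediction is below; it fits the least squares line with `lm()` (assuming the **openintro** package's `possum` data is available, as noted above), so its coefficients differ slightly from the rounded line fit by eye, and the prediction will be close to, but not exactly, 88.2 mm.
```{r echo = TRUE}
# Sketch: fit the least squares line for head length vs. total length and
# predict the head length of a possum with total length 80 cm.
m_possum <- lm(head_l ~ total_l, data = possum)
predict(m_possum, newdata = data.frame(total_l = 80))
```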
There may be other variables that could help us predict the head length of a possum besides its length.
Perhaps the relationship would be a little different for male possums than female possums, or perhaps it would differ for possums from one region of Australia versus another region.
Plot A in Figure \@ref(fig:scattHeadLTotalL-sex-age) shows the relationship between total length and head length of brushtail possums, taking into consideration their sex.
Male possums (represented by blue triangles) seem to be larger in terms of total length and head length than female possums (represented by red circles).
Plot B in Figure \@ref(fig:scattHeadLTotalL-sex-age) shows the same relationship, taking into consideration their age.
It's harder to tell if age changes the relationship between total length and head length for these possums.
```{r scattHeadLTotalL-sex-age, fig.cap = "Relationship between total length and head length of brushtail possums, taking into consideration their sex (Plot A) or age (Plot B).", fig.asp=1.236}
p_sex <- ggplot(possum, aes(x = total_l, y = head_l, shape = sex, color = sex)) +
geom_point(alpha = 0.8, size = 2) +
scale_color_manual(values = c(IMSCOL["red","full"], IMSCOL["blue","full"])) +
labs(
x = "Total Length (cm)",
y = "Head Length (mm)",
color = "Sex", shape = "Sex"
)
p_age <- ggplot(possum, aes(x = total_l, y = head_l, color = age)) +
geom_point(size = 2) +
labs(
x = "Total Length (cm)",
y = "Head Length (mm)",
color = "Age"
) +
scale_color_gradient(low = IMSCOL["green", "f4"],
high = IMSCOL["green", "full"])
p_sex / p_age + plot_annotation(tag_levels = "A")
```
In Chapter \@ref(model-mlr), we'll learn about how we can include more than one predictor in our model.
Before we get there, we first need to better understand how to best build a linear model with one predictor.
### Residuals {#resids}
**Residuals** are the leftover variation in the data after accounting for the model fit:
$$\text{Data} = \text{Fit} + \text{Residual}$$
Each observation will have a residual, and three of the residuals for the linear model we fit for the possum data are shown in Figure \@ref(fig:scattHeadLTotalLLine-highlighted).
If an observation is above the regression line, then its residual, the vertical distance from the observation to the line, is positive.
Observations below the line have negative residuals.
One goal in picking the right linear model is for these residuals to be as small as possible.
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "residuals")
```
Figure \@ref(fig:scattHeadLTotalLLine-highlighted) is almost a replica of Figure \@ref(fig:scattHeadLTotalLLine), with three points from the data highlighted.
The observation marked by a red circle has a small, negative residual of about -1; the observation marked by a gray diamond has a large positive residual of about +7; and the observation marked by a pink triangle has a moderate negative residual of about -4.
The size of a residual is usually discussed in terms of its absolute value.
For example, the residual for the observation marked by a pink triangle is larger than that of the observation marked by a red circle because $|-4|$ is larger than $|-1|.$
```{r scattHeadLTotalLLine-highlighted, fig.cap = "A reasonable linear model was fit to represent the relationship between head length and total length, with three points highlighted."}
mod <- lm(head_l ~ total_l, data = possum)
preds <- predict(mod, data.frame(total_l = c(76, 85, 95.5)))
obs <- c(85.1, 98.6, 94)
ggplot(possum, aes(x = total_l, y = head_l)) +
geom_point(alpha = 0.8, size = 2) +
labs(
x = "Total Length (cm)",
y = "Head Length (mm)"
) +
geom_smooth(method = "lm", se = FALSE) +
geom_point(data = possum %>% filter(total_l == 76),
shape = "circle open", stroke = 2,
size = 4, color = IMSCOL["red", "full"]) +
geom_segment(aes(x = 76, y = preds[1], xend = 76, yend = obs[1] + 0.4),
color = IMSCOL["red", "full"], inherit.aes = FALSE) +
geom_point(data = possum %>% filter(total_l == 85, head_l == 98.6),
shape = "diamond open", stroke = 2, size = 5,
color = IMSCOL["gray", "full"]) +
geom_segment(aes(x = 85, y = preds[2], xend = 85, yend = obs[2]-0.5),
color = IMSCOL["gray", "full"], inherit.aes = FALSE) +
geom_point(data = possum %>% filter(total_l == 95.5, head_l == 94),
shape = "triangle open", stroke = 2, size = 5,
color = IMSCOL["pink", "full"]) +
geom_segment(aes(x = 95.5, y = preds[3], xend = 95.5, yend = obs[3] + 0.5),
color = IMSCOL["pink", "full"], inherit.aes = FALSE)
```
::: {.important data-latex=""}
**Residual: Difference between observed and expected.**
The residual of the $i^{th}$ observation $(x_i, y_i)$ is the difference of the observed outcome ($y_i$) and the outcome we would predict based on the model fit ($\hat{y}_i$):
$$e_i = y_i - \hat{y}_i$$
We typically identify $\hat{y}_i$ by plugging $x_i$ into the model.
:::
::: {.workedexample data-latex=""}
The linear fit shown in Figure \@ref(fig:scattHeadLTotalLLine-highlighted) is given as $\hat{y} = 41 + 0.59x.$ Based on this line, formally compute the residual of the observation $(76.0, 85.1).$ This observation is marked by a red circle in Figure \@ref(fig:scattHeadLTotalLLine-highlighted).
Check it against the earlier visual estimate, -1.
------------------------------------------------------------------------
We first compute the predicted value of the observation marked by a red circle based on the model:
$$\hat{y} = 41+0.59x = 41+0.59\times 76.0 = 85.84$$
Next we compute the difference of the actual head length and the predicted head length:
$$e = y - \hat{y} = 85.1 - 85.84 = -0.74$$
The model's error is $e = -0.74$ mm, which is very close to the visual estimate of -1 mm.
The negative residual indicates that the linear model overpredicted head length for this particular possum.
:::
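The same arithmetic can be checked in R; the snippet below is only a sketch using the rounded line $\hat{y} = 41 + 0.59x$ from the text.
```{r echo = TRUE}
# Sketch: residual for the possum observed at (76.0, 85.1) using the
# rounded line y-hat = 41 + 0.59 x.
y_hat <- 41 + 0.59 * 76.0
85.1 - y_hat
```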
::: {.guidedpractice data-latex=""}
If a model underestimates an observation, will the residual be positive or negative?
What about if it overestimates the observation?[^model-slr-1]
:::
[^model-slr-1]: If a model underestimates an observation, then the model estimate is below the actual.
The residual, which is the actual observation value minus the model estimate, must then be positive.
The opposite is true when the model overestimates the observation: the residual is negative.
::: {.guidedpractice data-latex=""}
Compute the residuals for the observation marked by a gray diamond, $(85.0, 98.6),$ and the observation marked by a pink triangle, $(95.5, 94.0),$ in the figure using the linear relationship $\hat{y} = 41 + 0.59x.$[^model-slr-2]
:::
[^model-slr-2]: Gray diamond: $\hat{y} = 41+0.59x = 41+0.59\times 85.0 = 91.15 \rightarrow e = y - \hat{y} = 98.6-91.15=7.45.$ This is close to the earlier estimate of 7.
Pink triangle: $\hat{y} = 41+0.59x = 97.3 \rightarrow e = -3.3.$ This is also close to the estimate of -4.
Residuals are helpful in evaluating how well a linear model fits a dataset.
We often display them in a scatterplot such as the one shown in Figure \@ref(fig:scattHeadLTotalLResidualPlot) for the regression line in Figure \@ref(fig:scattHeadLTotalLLine-highlighted).
The residuals are plotted with the predicted value of the outcome variable as the horizontal coordinate and the residual as the vertical coordinate.
For instance, the point $(85.0, 98.6)$ (marked by the gray diamond) had a predicted value of about 91.15 mm and a residual of 7.45 mm, so in the residual plot it is placed at $(91.15, 7.45).$ Creating a residual plot is sort of like tipping the scatterplot over so the regression line is horizontal, as indicated by the dashed line.
```{r scattHeadLTotalLResidualPlot, fig.cap = "Residual plot for the model predicting head length from total length for brushtail possums."}
m_head_total <- lm(head_l ~ total_l, data = possum)
m_head_total_aug <- augment(m_head_total)
ggplot(m_head_total_aug, aes(x = .fitted, y = .resid)) +
geom_point(alpha = 0.8, size = 2) +
labs(
x = "Predicted values of head length (mm)",
y = "Residuals"
) +
geom_hline(yintercept = 0, linetype = "dashed") +
geom_point(data = m_head_total_aug %>%
filter(total_l == 76),
shape = "circle open", stroke = 2, size = 4,
color = IMSCOL["red", "full"]) +
geom_segment(aes(x = preds[1], y = obs[1]-preds[1]+0.2,
xend = preds[1], yend = 0),
color = IMSCOL["red", "full"], inherit.aes = FALSE) +
geom_point(data = m_head_total_aug %>%
filter(total_l == 85, head_l == 98.6),
shape = "diamond open", stroke = 2, size = 5,
color = IMSCOL["gray", "full"]) +
geom_segment(aes(x = preds[2], y = obs[2]-preds[2]-0.3,
xend = preds[2], yend = 0),
color = IMSCOL["gray", "full"], inherit.aes = FALSE) +
geom_point(data = m_head_total_aug %>%
filter(total_l == 95.5, head_l == 94),
shape = "triangle open", stroke = 2, size = 5,
color = IMSCOL["pink", "full"]) +
geom_segment(aes(x = preds[3], y = obs[3]-preds[3]+0.3,
xend = preds[3], yend = 0),
color = IMSCOL["pink", "full"], inherit.aes = FALSE)
```
::: {.workedexample data-latex=""}
One purpose of residual plots is to identify characteristics or patterns still apparent in data after fitting a model.
Figure \@ref(fig:sampleLinesAndResPlots) shows three scatterplots with linear models in the first row and residual plots in the second row.
Can you identify any patterns remaining in the residuals?
------------------------------------------------------------------------
In the first dataset (first column), the residuals show no obvious patterns.
The residuals appear to be scattered randomly around the dashed line that represents 0.
The second dataset shows a pattern in the residuals.
There is some curvature in the scatterplot, which is more obvious in the residual plot.
We should not use a straight line to model these data.
Instead, a more advanced technique should be used to model the curved relationship, such as the variable transformations discussed in Section \@ref(transforming-data).
The last plot shows very little upwards trend, and the residuals also show no obvious patterns.
It is reasonable to try to fit a linear model to the data.
However, it is unclear whether there is evidence that the slope parameter is different from zero.
The point estimate of the slope parameter, labeled $b_1,$ is not zero, but we might wonder if this could just be due to chance.
We will address this sort of scenario in Chapter \@ref(inf-model-slr).
:::
```{r sampleLinesAndResPlots, fig.cap = "Sample data with their best fitting lines (top row) and their corresponding residual plots (bottom row)."}
neg_lin <- simulated_scatter %>% filter(group == 6)
neg_cur <- simulated_scatter %>% filter(group == 7)
random <- simulated_scatter %>% filter(group == 8)
neg_lin_mod <- augment(lm(y ~ x, data = neg_lin))
neg_cur_mod <- augment(lm(y ~ x, data = neg_cur))
random_mod <- augment(lm(y ~ x, data = random))
p_neg_lin <- ggplot(neg_lin, aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
theme_void() +
theme(panel.border = element_rect(colour = "gray", fill = NA, size = 1))
p_neg_cur <- ggplot(neg_cur, aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
theme_void() +
theme(panel.border = element_rect(colour = "gray", fill = NA, size = 1))
p_random <- ggplot(random, aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
theme_void() +
theme(panel.border = element_rect(colour = "gray", fill = NA, size = 1))
p_neg_lin_res <- ggplot(neg_lin_mod, aes(x = .fitted, y = .resid)) +
geom_point(size = 2, alpha = 0.8) +
geom_hline(yintercept = 0, linetype = "dashed") +
theme_void() +
theme(panel.border = element_rect(colour = "gray", fill = NA, size = 1))
p_neg_cur_res <- ggplot(neg_cur_mod, aes(x = .fitted, y = .resid)) +
geom_point(size = 2, alpha = 0.8) +
geom_hline(yintercept = 0, linetype = "dashed") +
theme_void() +
theme(panel.border = element_rect(colour = "gray", fill = NA, size = 1))
p_random_res <- ggplot(random_mod, aes(x = .fitted, y = .resid)) +
geom_point(size = 2, alpha = 0.8) +
geom_hline(yintercept = 0, linetype = "dashed") +
theme_void() +
theme(panel.border = element_rect(colour = "gray", fill = NA, size = 1))
p_neg_lin + theme(plot.margin = unit(c(0, 10, 5, 0), "pt")) +
p_neg_cur + theme(plot.margin = unit(c(0, 10, 5, 0), "pt")) + p_random +
p_neg_lin_res + theme(plot.margin = unit(c(0, 10, 5, 0), "pt")) +
p_neg_cur_res + theme(plot.margin = unit(c(0, 10, 5, 0), "pt")) + p_random_res +
plot_layout(ncol = 3, heights = c(2, 1))
```
\clearpage
### Describing linear relationships with correlation
We've seen plots with strong linear relationships and others with very weak linear relationships.
It would be useful if we could quantify the strength of these linear relationships with a statistic.
::: {.important data-latex=""}
**Correlation: strength of a linear relationship.**
**Correlation**, which always takes values between -1 and 1, describes the strength and direction of the linear relationship between two variables.
We denote the correlation by $r.$
The correlation value has no units and will not be affected by a linear change in the units (e.g., going from inches to centimeters).
:::
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "correlation")
```
We can compute the correlation using a formula, just as we did with the sample mean and standard deviation.
The formula for correlation, however, is rather complex[^model-slr-3], and like with other statistics, we generally perform the calculations on a computer or calculator.
[^model-slr-3]: Formally, we can compute the correlation for observations $(x_1, y_1),$ $(x_2, y_2),$ ..., $(x_n, y_n)$ using the formula $$r = \frac{1}{n-1} \sum_{i=1}^{n} \frac{x_i-\bar{x}}{s_x}\frac{y_i-\bar{y}}{s_y}$$ where $\bar{x},$ $\bar{y},$ $s_x,$ and $s_y$ are the sample means and standard deviations for each variable.
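As a sketch of how this formula lines up with software, the code below applies it to the possum measurements and compares the result against R's built-in `cor()` (assuming the `possum` data from the **openintro** package).
```{r echo = TRUE}
# Sketch: correlation from the formula in the footnote vs. built-in cor().
x <- possum$total_l
y <- possum$head_l
r_manual <- sum(((x - mean(x)) / sd(x)) * ((y - mean(y)) / sd(y))) / (length(x) - 1)
c(manual = r_manual, built_in = cor(x, y))
```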
Figure \@ref(fig:posNegCorPlots) shows eight plots and their corresponding correlations.
Only when the relationship is perfectly linear is the correlation either -1 or 1.
If the relationship is strong and positive, the correlation will be near +1.
If it is strong and negative, it will be near -1.
If there is no apparent linear relationship between the variables, then the correlation will be near zero.
```{r posNegCorPlots, fig.cap = "Sample scatterplots and their correlations. The first row shows variables with a positive relationship, represented by the trend up and to the right. The second row shows variables with a negative trend, where a large value in one variable is associated with a lower value in the other.", fig.asp = 0.5, out.width = "100%"}
library(ggpubr) # Adding here instead of _common.R to avoid collision with ggimage
simulated_scatter %>%
filter(group %in% c(9:12, 14:17)) %>%
ggplot(aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
theme_void() +
facet_wrap(~group, nrow = 2, scales = "free_x") +
theme(
panel.border = element_rect(colour = "gray", fill = NA, size = 1),
strip.background = element_blank(),
strip.text.x = element_blank()
) +
stat_cor(aes(label = paste("r", ..r.., sep = "~`=`~")))
```
The correlation is intended to quantify the strength of a linear trend.
Nonlinear trends, even when strong, sometimes produce correlations that do not reflect the strength of the relationship; see three such examples in Figure \@ref(fig:corForNonLinearPlots).
```{r corForNonLinearPlots, fig.cap = " Sample scatterplots and their correlations. In each case, there is a strong relationship between the variables. However, because the relationship is not linear, the correlation is relatively weak.", fig.asp = 0.3, out.width = "100%"}
simulated_scatter %>%
filter(group %in% 17:19) %>%
ggplot(aes(x = x, y = y)) +
geom_point(size = 2, alpha = 0.8) +
theme_void() +
facet_wrap(~group, nrow = 1, scales = "free") +
theme(
panel.border = element_rect(colour = "gray", fill = NA, size = 1),
strip.background = element_blank(),
strip.text.x = element_blank()
) +
stat_cor(aes(label = paste("r", ..r.., sep = "~`=`~")))
```
::: {.guidedpractice data-latex=""}
No straight line is a good fit for any of the datasets represented in Figure \@ref(fig:corForNonLinearPlots).
Try drawing nonlinear curves on each plot.
Once you create a curve for each, describe what is important in your fit.[^model-slr-4]
:::
[^model-slr-4]: We'll leave it to you to draw the lines.
In general, the lines you draw should be close to most points and reflect overall trends in the data.
::: {.workedexample data-latex=""}
The scatterplots below display the relationships between various crop yields in countries.
In the plots, each point represents a different country.
The x and y variables represent the proportion of each country's total yield over the last 50 years that is due to that crop type.
Order the six scatterplots from strongest negative to strongest positive linear relationship.
```{r crop-yields-af, out.width = "100%", fig.asp = 1}
# from tidytuesday: https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-09-01/key_crop_yields.csv
crops_country <- read_csv("data/key_crop_yields.csv")
crops_country <- crops_country %>%
rename_with(tolower) %>%
rename_with(str_remove, contains("tonnes"), " \\(tonnes per hectare\\)") %>%
rename(cocoa = `cocoa beans`) %>%
filter(!is.na(code)) %>%
pivot_longer(
cols = wheat:bananas,
names_to = "crop",
values_to = "yield"
) %>%
group_by(code, crop) %>%
summarise(total = sum(yield, na.rm = TRUE), .groups = "drop_last") %>%
mutate(prop = (total / sum(total)) * 100) %>%
ungroup() %>%
pivot_wider(
names_from = crop,
values_from = c(total, prop)
) %>%
na_if(0) %>%
mutate(
n_missing = rowSums(is.na(.)) / 2
) %>%
filter(n_missing <= 5) # filter out countries that don't have data for most of the crops
sb <- ggplot(crops_country) +
geom_point(aes(x = prop_soybeans, y = prop_bananas)) +
scale_x_continuous(limits = c(0, 6),
labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Soybeans",
y = "% Bananas"
)
sc <- ggplot(crops_country) +
geom_point(aes(x = prop_soybeans, y = prop_cassava)) +
scale_x_continuous(limits = c(0, 6),
labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Soybeans",
y = "% Cassava"
)
mc <- ggplot(crops_country) +
geom_point(aes(x = prop_maize, y = prop_cassava)) +
scale_x_continuous(limits = c(0, 15),
labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Maize",
y = "% Cassava"
)
peb <- ggplot(crops_country) +
geom_point(aes(x = prop_potatoes, y = prop_bananas)) +
scale_x_continuous(limits = c(0, 60),
labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Potatoes",
y = "% Bananas"
)
cb <- ggplot(crops_country) +
geom_point(aes(x = prop_cocoa, y = prop_bananas)) +
scale_x_continuous(labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Cocoa",
y = "% Bananas"
)
wb <- ggplot(crops_country) +
geom_point(aes(x = prop_wheat, y = prop_barley)) +
scale_x_continuous(labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Wheat",
y = "% Barley"
)
pob <- ggplot(crops_country) +
geom_point(aes(x = prop_peas, y = prop_barley)) +
scale_x_continuous(labels = label_percent(scale = 1)) +
scale_y_continuous(labels = label_percent(scale = 1)) +
labs(
x = "% Peas",
y = "% Barley"
)
peb + sc + mc + cb + pob + wb +
plot_layout(ncol = 2) +
plot_annotation(tag_levels = "A")
```
------------------------------------------------------------------------
The order of most negative correlation to most positive correlation is:
$$A \rightarrow D \rightarrow B \rightarrow C \rightarrow E \rightarrow F$$
- Plot A - bananas vs. potatoes: `r round(cor(crops_country$prop_potatoes, crops_country$prop_bananas, use = "pairwise.complete.obs"), digits = 2)`
- Plot B - cassava vs. soybeans: `r round(cor(crops_country$prop_soybeans, crops_country$prop_cassava, use = "pairwise.complete.obs"), digits = 2)`
- Plot C - cassava vs. maize: `r round(cor(crops_country$prop_maize, crops_country$prop_cassava, use = "pairwise.complete.obs"), digits = 2)`
- Plot D - cocoa vs. bananas: `r round(cor(crops_country$prop_cocoa, crops_country$prop_bananas, use = "pairwise.complete.obs"), digits = 2)`
- Plot E - peas vs. barley: `r round(cor(crops_country$prop_peas, crops_country$prop_barley, use = "pairwise.complete.obs"), digits = 2)`
- Plot F - wheat vs. barley: `r round(cor(crops_country$prop_wheat, crops_country$prop_barley, use = "pairwise.complete.obs"), digits = 2)`
:::
One important aspect of the correlation is that it's *unitless*.
That is, unlike the slope of a line (see the next section), which gives the change in the y-coordinate for a one unit increase in the x-coordinate (in the units of the x and y variables), the correlation of x and y has no units associated with it.
Figure \@ref(fig:bdims-units) shows the relationship between weights and heights of 507 physically active individuals.
In Plot A, weight is measured in kilograms (kg) and height in centimeters (cm).
In Plot B, weight has been converted to pounds (lbs) and height to inches (in).
The correlation coefficient ($r = 0.72$) is also noted on both plots.
We can see that the shape of the relationship has not changed, and neither has the correlation coefficient.
The only visual change to the plots is the *labeling* of the axes.
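A quick sketch in R confirms this invariance for the `bdims` data: converting the units with the same multipliers used in the plot leaves the correlation unchanged.
```{r echo = TRUE}
# Sketch: correlation is unchanged by linear unit conversions.
c(
  metric   = cor(bdims$hgt, bdims$wgt),
  imperial = cor(bdims$hgt * 0.393701, bdims$wgt * 2.20462)
)
```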
```{r bdims-units, fig.cap = "Two scatterplots, both displaying the relationship between weights and heights of 507 physically healthy adults. In Plot A, the units are kilograms and centimeters. In Plot B, the units are pounds and inches. Also noted on both plots is the correlation coefficient, $r = 0.72.$", fig.asp = 0.5}
p_1 <- ggplot(bdims, aes(x = hgt, y = wgt)) +
geom_point(alpha = 0.8) +
labs(x = "Height (cm)", y = "Weight (kg)") +
stat_cor(aes(label = paste("r", ..r.., sep = "~`=`~")))
p_2 <- bdims %>%
mutate(
hgt = hgt * 0.393701,
wgt = wgt * 2.20462
) %>%
ggplot(aes(x = hgt, y = wgt)) +
geom_point(alpha = 0.8) +
labs(x = "Height (in)", y = "Weight (lbs)") +
stat_cor(aes(label = paste("r", ..r.., sep = "~`=`~")))
p_1 + p_2 +
plot_annotation(tag_levels = "A")
```
## Least squares regression {#least-squares-regression}
Fitting linear models by eye is open to criticism since it is based on an individual's preference.
In this section, we use *least squares regression* as a more rigorous approach to fitting a line to a scatterplot.
### Gift aid for freshmen at Elmhurst College
This section considers a dataset of family income and gift aid for a random sample of fifty students in the freshman class of Elmhurst College in Illinois.
Gift aid is financial aid that does not need to be paid back, as opposed to a loan.
A scatterplot of these data is shown in Figure \@ref(fig:elmhurstScatterWLine) along with a linear fit.
The line follows a negative trend in the data; students from families with higher incomes tended to receive less gift aid from the university.
```{r elmhurstScatterWLine, fig.cap = "Gift aid and family income for a random sample of 50 freshman students from Elmhurst College."}
ggplot(elmhurst, aes(x = family_income, y = gift_aid)) +
geom_point(size = 2) +
geom_smooth(method = "lm", se = FALSE) +
scale_y_continuous(labels = label_dollar(scale = 1, suffix = "K", accuracy = 1)) +
scale_x_continuous(labels = label_dollar(scale = 1, suffix = "K", accuracy = 1)) +
labs(
x = "Family income",
y = "Gift aid from university"
)
```
::: {.guidedpractice data-latex=""}
Is the correlation positive or negative in Figure \@ref(fig:elmhurstScatterWLine)?[^model-slr-5]
:::
[^model-slr-5]: Larger family incomes are associated with lower amounts of aid, so the correlation will be negative.
Using a computer, the correlation can be computed: -0.499.
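A one-line sketch of that computation, assuming the `elmhurst` data from the **openintro** package:
```{r echo = TRUE}
# Sketch: correlation between family income and gift aid.
cor(elmhurst$family_income, elmhurst$gift_aid)
```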
### An objective measure for finding the best line
We begin by thinking about what we mean by the "best" line.
Mathematically, we want a line that has small residuals.
But beyond the mathematical reasons, hopefully it also makes sense intuitively that whatever line we fit, the residuals should be small (i.e., the points should be close to the line).
The first option that may come to mind is to minimize the sum of the residual magnitudes:
$$|e_1| + |e_2| + \dots + |e_n|$$
which we could accomplish with a computer program.
The resulting dashed line shown in Figure \@ref(fig:elmhurstScatterW2Lines) demonstrates this fit can be quite reasonable.
```{r elmhurstScatterW2Lines, fig.cap = "Gift aid and family income for a random sample of 50 freshman students from Elmhurst College. The dashed line represents the line that minimizes the sum of the absolute value of residuals, the solid line represents the line that minimizes the sum of squared residuals, i.e., the least squares line."}
ggplot(elmhurst, aes(x = family_income, y = gift_aid)) +
geom_point(alpha = 0.8) +
geom_smooth(method = "lm", se = FALSE) +
geom_smooth(method = quantreg::rq, formula = y ~ x, se = FALSE, linetype = "dashed") +
scale_y_continuous(labels = label_dollar(scale = 1, suffix = "K", accuracy = 1)) +
scale_x_continuous(labels = label_dollar(scale = 1, suffix = "K", accuracy = 1)) +
labs(
x = "Family income",
y = "Gift aid from university"
)
```
However, a more common practice is to choose the line that minimizes the sum of the squared residuals:
$$e_{1}^2 + e_{2}^2 + \dots + e_{n}^2$$
The line that minimizes this least squares criterion is represented as the solid line in Figure \@ref(fig:elmhurstScatterW2Lines) and is commonly called the **least squares line**.
The following are four possible reasons to choose the least squares option instead of trying to minimize the sum of residual magnitudes without any squaring:
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "least squares line")
```
1. It is the most commonly used method.
2. Computing the least squares line is widely supported in statistical software.
3. In many applications, a residual twice as large as another residual is more than twice as bad. For example, being off by 4 is usually more than twice as bad as being off by 2. Squaring the residuals accounts for this discrepancy.
4. The analyses which link the model to inference about a population are most straightforward when the line is fit through least squares.
The first two reasons are largely for tradition and convenience; the third and fourth reasons explain why the least squares criterion is typically most helpful when working with real data.[^model-slr-6]
[^model-slr-6]: There are applications where the sum of residual magnitudes may be more useful, and there are plenty of other criteria we might consider.
However, this book only applies the least squares criterion.
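To make the least squares criterion concrete, the sketch below minimizes the sum of squared residuals numerically with `optim()` and compares the result to the closed-form fit from `lm()`; the two should agree up to numerical tolerance.
```{r echo = TRUE}
# Sketch: minimize the sum of squared residuals numerically and compare to lm().
sse <- function(par, x, y) sum((y - (par[1] + par[2] * x))^2)
fit_num <- optim(c(0, 0), sse, x = elmhurst$family_income, y = elmhurst$gift_aid)
fit_num$par                                          # numerically minimized (b0, b1)
coef(lm(gift_aid ~ family_income, data = elmhurst))  # closed-form least squares fit
```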
### Finding and interpreting the least squares line
For the Elmhurst data, we could write the equation of the least squares regression line as $$\widehat{\texttt{aid}} = \beta_0 + \beta_{1}\times \texttt{family_income}$$
Here the equation is set up to predict gift aid based on a student's family income, which would be useful to students considering Elmhurst.
These two values, $\beta_0$ and $\beta_1,$ are the parameters of the regression line.
The parameters are estimated using the observed data.
In practice, this estimation is done using a computer in the same way that other estimates, like a sample mean, can be estimated using a computer or calculator.
The dataset where these data are stored is called `elmhurst`.
The first 5 rows of this dataset are given in Table \@ref(tab:elmhurst-data).
```{r elmhurst-data}
elmhurst %>%
slice_head(n = 5) %>%
kbl(linesep = "", booktabs = TRUE, caption = caption_helper("First five rows of the `elmhurst` dataset.")) %>%
kable_styling(bootstrap_options = c("striped", "condensed"),
latex_options = c("striped", "hold_position"), full_width = FALSE) %>%
column_spec(1:3, width = "8em")
```
We can see that family income is recorded in a variable called `family_income` and gift aid from university is recorded in a variable called `gift_aid`.
For now, we won't worry about the `price_paid` variable.
We should also note that these data are from the 2011-2012 academic year, and all monetary amounts are given in \$1,000s, i.e., the family income of the first student in the data shown in Table \@ref(tab:elmhurst-data) is \$92,920 and they received gift aid of \$21,700.
(The data source states that all numbers have been rounded to the nearest whole dollar.)
Statistical software is usually used to compute the least squares line and the typical output generated as a result of fitting regression models looks like the one shown in Table \@ref(tab:rOutputForIncomeAidLSRLine).
For now we will focus on the first column of the output, which lists ${b}_0$ and ${b}_1.$ In Chapter \@ref(inf-model-slr) we will dive deeper into the remaining columns, which tell us how accurately and precisely the intercept and slope calculated from this sample of 50 students estimate the intercept and slope for *all* students.
```{r}
m_ga_fi <- lm(gift_aid ~ family_income, data = elmhurst) %>% tidy()
m_ga_fi_int <- m_ga_fi %>% filter(term == "(Intercept)") %>% pull(estimate)
m_ga_fi_slope <- m_ga_fi %>% filter(term == "family_income") %>% pull(estimate)
```
```{r rOutputForIncomeAidLSRLine}
m_ga_fi %>%
mutate(p.value = ifelse(p.value < .0001, "<0.0001", round(p.value, 4))) %>%
kbl(linesep = "", booktabs = TRUE,
caption = "Summary of least squares fit for the Elmhurst data.",
digits = 2, align = "lrrrr") %>%
kable_styling(bootstrap_options = c("striped", "condensed"),
latex_options = c("striped", "hold_position")) %>%
column_spec(1, width = "15em", monospace = TRUE) %>%
column_spec(2:5, width = "5em")
```
The model output tells us that the intercept is approximately `r round(m_ga_fi_int, 2)` and the slope on `family_income` is approximately `r round(m_ga_fi_slope, 4)`.
But what do these values mean?
Interpreting parameters in a regression model is often one of the most important steps in the analysis.
::: {.workedexample data-latex=""}
The intercept and slope estimates for the Elmhurst data are $b_0$ = `r round(m_ga_fi_int, 2)` and $b_1$ = `r round(m_ga_fi_slope, 4)`.
What do these numbers really mean?
------------------------------------------------------------------------
Interpreting the slope parameter is helpful in almost any application.
For each additional \$1,000 of family income, we would expect a student to receive a net difference of 1,000 $\times$ (-0.0431) = -\$43.10 in aid on average, i.e., \$43.10 *less*.
Note that a higher family income corresponds to less aid because the coefficient of family income is negative in the model.
We must be cautious in this interpretation: while there is a real association, we cannot interpret a causal connection between the variables because these data are observational.
That is, increasing a particular student's family income may not cause the student's aid to drop.
(Although it would be reasonable to contact the college and ask if the relationship is causal, i.e., if Elmhurst College's aid decisions are partially based on students' family income.)
The estimated intercept $b_0$ = `r m_ga_fi_int` describes the average aid if a student's family had no income, \$`r format(round(m_ga_fi_int*1000,0), scientific = FALSE, big.mark = ",")`.
The meaning of the intercept is relevant to this application since the family income for some students at Elmhurst is \$0.
In other applications, the intercept may have little or no practical value if there are no observations where $x$ is near zero.
:::
::: {.important data-latex=""}
**Interpreting parameters estimated by least squares.**
The slope describes the estimated difference in the predicted average outcome of $y$ if the predictor variable $x$ happened to be one unit larger.
The intercept describes the average outcome of $y$ if $x = 0$ *and* the linear model is valid all the way to $x = 0$ (values of $x = 0$ are not observed or relevant in many applications).
:::
If you would like to learn more about using R to fit linear models, see Section \@ref(model-tutorials) for the interactive R tutorials.
An alternative way of calculating the intercept and slope of a least squares line is by hand, using formulas.
While manual calculations are not commonly used by practicing statisticians and data scientists, they are useful to work through the first time you're learning about the least squares line and modeling in general.
Calculating the values by hand leverages two properties of the least squares line:
1. The slope of the least squares line can be estimated by
$$b_1 = \frac{s_y}{s_x} r $$
where $r$ is the correlation between the two variables, and $s_x$ and $s_y$ are the sample standard deviations of the predictor and outcome, respectively.
2. If $\bar{x}$ is the sample mean of the predictor variable and $\bar{y}$ is the sample mean of the outcome variable, then the point $(\bar{x}, \bar{y})$ falls on the least squares line.
Table \@ref(tab:summaryStatsElmhurstRegr) shows the sample means for the family income and gift aid as \$101,780 and \$19,940, respectively.
We could plot the point $(102, 19.9)$ on Figure \@ref(fig:elmhurstScatterWLine) to verify it falls on the least squares line (the solid line).
```{r summaryStatsElmhurstRegr}
elmhurst %>%
select(family_income, gift_aid) %>%
summarise(
fi_m = mean(family_income),
fi_s = sd(family_income),
ga_m = mean(gift_aid),
ga_s = sd(gift_aid),
r = cor(family_income, gift_aid)
) %>%
kbl(linesep = "", booktabs = TRUE,
col.names = c("mean", "sd", "mean", "sd", "r"),
align = "ccccc",
caption = "Summary statistics for family income and gift aid."
) %>%
kable_styling(bootstrap_options = c("striped", "condensed"),
latex_options = c("striped", "hold_position"), full_width = FALSE) %>%
add_header_above(c("Family income, x" = 2, "Gift aid, y" = 2, " " = 1)) %>%
column_spec(1:5, width = "6em")
```
Next, we formally find the point estimates $b_0$ and $b_1$ of the parameters $\beta_0$ and $\beta_1.$
::: {.workedexample data-latex=""}
Using the summary statistics in Table \@ref(tab:summaryStatsElmhurstRegr), compute the slope for the regression line of gift aid against family income.
------------------------------------------------------------------------
Compute the slope using the summary statistics from Table \@ref(tab:summaryStatsElmhurstRegr):
$$b_1 = \frac{s_y}{s_x} r = \frac{5.46}{63.2}(-0.499) = -0.0431$$
:::
You might recall the form of a line from math class, which we can use to find the model fit, including the estimate of $b_0.$ Given the slope of a line and a point on the line, $(x_0, y_0),$ the equation for the line can be written as
$$y - y_0 = slope\times (x - x_0)$$
::: {.important data-latex=""}
**Identifying the least squares line from summary statistics.**
To identify the least squares line from summary statistics:
- Estimate the slope parameter, $b_1 = (s_y / s_x) r.$
- Noting that the point $(\bar{x}, \bar{y})$ is on the least squares line, use $x_0 = \bar{x}$ and $y_0 = \bar{y}$ with the point-slope equation: $y - \bar{y} = b_1 (x - \bar{x}).$
- Simplifying the equation, we get $y = \bar{y} - b_1 \bar{x} + b_1 x,$ which reveals that $b_0 = \bar{y} - b_1 \bar{x}.$
:::
::: {.workedexample data-latex=""}
Using the point (102, 19.9) from the sample means and the slope estimate $b_1 = -0.0431,$ find the least-squares line for predicting aid based on family income.
------------------------------------------------------------------------
Apply the point-slope equation using $(102, 19.9)$ and the slope $b_1 = -0.0431$:
$$
\begin{aligned}
y - y_0 &= b_1 (x - x_0) \\
y - 19.9 &= -0.0431 (x - 102)
\end{aligned}
$$
Expanding the right side and then adding 19.9 to each side, the equation simplifies:
$$
\widehat{\texttt{aid}} = 24.3 - 0.0431 \times \texttt{family_income}
$$
Here we have replaced $y$ with $\widehat{\texttt{aid}}$ and $x$ with $\texttt{family_income}$ to put the equation in context.
The final least squares equation should always include a "hat" on the variable being predicted, whether it is a generic $``y"$ or a named variable like $``aid"$.
:::
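The same point estimates can be reproduced in R directly from the summary statistics; the sketch below computes $b_1$ and $b_0$ from `cor()`, `sd()`, and `mean()` and checks them against `lm()`.
```{r echo = TRUE}
# Sketch: slope and intercept from summary statistics vs. lm().
b1 <- cor(elmhurst$family_income, elmhurst$gift_aid) *
  sd(elmhurst$gift_aid) / sd(elmhurst$family_income)
b0 <- mean(elmhurst$gift_aid) - b1 * mean(elmhurst$family_income)
c(b0 = b0, b1 = b1)
coef(lm(gift_aid ~ family_income, data = elmhurst))
```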
::: {.workedexample data-latex=""}
Suppose a high school senior is considering Elmhurst College.
Can they simply use the linear equation that we have estimated to calculate their financial aid from the university?
------------------------------------------------------------------------
They may use it as an estimate, though some qualifiers on this approach are important.
First, the data all come from one freshman class, and the way aid is determined by the university may change from year to year.
Second, the equation will provide an imperfect estimate.
While the linear equation is good at capturing the trend in the data, no individual student's aid will be perfectly predicted (as can be seen from the individual data points in the cloud around the line).
:::
### Extrapolation is treacherous
> *When those blizzards hit the East Coast this winter, it proved to my satisfaction that global warming was a fraud. That snow was freezing cold. But in an alarming trend, temperatures this spring have risen. Consider this: On February 6 it was 10 degrees.* *Today it hit almost 80.* *At this rate, by August it will be 220 degrees.* *So clearly folks the climate debate rages on.*[^model-slr-7]
>
> Stephen Colbert April 6th, 2010
[^model-slr-7]: <http://www.cc.com/video-clips/l4nkoq>
Linear models can be used to approximate the relationship between two variables.
However, like any model, they have real limitations.
Linear regression is simply a modeling framework.
The truth is almost always much more complex than a simple line.
For example, we do not know how the data outside of our limited window will behave.
::: {.workedexample data-latex=""}
Use the model $\widehat{\texttt{aid}} = 24.3 - 0.0431 \times \texttt{family_income}$ to estimate the aid of another freshman student whose family had income of \$1 million.
------------------------------------------------------------------------
We want to calculate the aid for a family with \$1 million income.
Note that in our model this will be represented as 1,000 since the data are in \$1,000s.
$$24.3 - 0.0431 \times 1000 = -18.8 $$
The model predicts this student will have -\$18,800 in aid (!).
However, Elmhurst College does not offer *negative aid* where they select some students to pay extra on top of tuition to attend.
:::
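The sketch below carries out the same extrapolation with `predict()`; it is only an illustration of how the line behaves outside the range of the data.
```{r echo = TRUE}
# Sketch: extrapolating to a family income of $1 million (1000 in $1,000s).
m_elmhurst <- lm(gift_aid ~ family_income, data = elmhurst)
predict(m_elmhurst, newdata = data.frame(family_income = 1000))
```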
Applying a model estimate to values outside of the realm of the original data is called **extrapolation**.
Generally, a linear model is only an approximation of the real relationship between two variables.
If we extrapolate, we are making an unreliable bet that the approximate linear relationship will be valid in places where it has not been analyzed.
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "extrapolation")
```
### Describing the strength of a fit {#r-squared}
We evaluated the strength of the linear relationship between two variables earlier using the correlation, $r.$ However, it is more common to explain the strength of a linear fit using $R^2,$ called **R-squared**.
If provided with a linear model, we might like to describe how closely the data cluster around the linear fit.
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "R-squared")
```
The $R^2$ of a linear model describes the amount of variation in the outcome variable that is explained by the least squares line.
For example, consider the Elmhurst data, shown in Figure \@ref(fig:elmhurstScatterWLine).
The variance of the outcome variable, aid received, is about $s_{aid}^2 \approx 29.8$ million (calculated from the data, some of which is shown in Table \@ref(tab:elmhurst-data)).
However, if we apply our least squares line, then this model reduces our uncertainty in predicting aid using a student's family income.
The variability in the residuals describes how much variation remains after using the model: $s_{_{RES}}^2 \approx 22.4$ million.
In short, there was a reduction of $$\frac{s_{aid}^2 - s_{_{RES}}^2}{s_{aid}^2}
= \frac{29800 - 22400}{29800}
= \frac{7500}{29800}
\approx 0.25,$$ or about 25%, of the outcome variable's variation by using information about family income for predicting aid using a linear model.
It turns out that $R^2$ corresponds exactly to the squared value of the correlation:
$$r = -0.499 \rightarrow R^2 = 0.25$$
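A brief sketch of this equivalence in R, assuming the **broom** package is loaded as elsewhere in this chapter: the squared correlation, the variance-reduction ratio from above, and the $R^2$ reported by `glance()` all agree.
```{r echo = TRUE}
# Sketch: three routes to R^2 for the Elmhurst fit.
m_elmhurst <- lm(gift_aid ~ family_income, data = elmhurst)
c(
  cor_squared   = cor(elmhurst$family_income, elmhurst$gift_aid)^2,
  var_reduction = 1 - var(resid(m_elmhurst)) / var(elmhurst$gift_aid),
  from_glance   = glance(m_elmhurst)$r.squared
)
```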
::: {.guidedpractice data-latex=""}
If a linear model has a very strong negative relationship with a correlation of -0.97, how much of the variation in the outcome is explained by the predictor?[^model-slr-8]
:::
[^model-slr-8]: About $R^2 = (-0.97)^2 = 0.94$ or 94% of the variation in the outcome variable is explained by the linear model.
$R^2$ is also called the **coefficient of determination**.
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "coefficient of determination")
```
::: {.important data-latex=""}
**Coefficient of determination: proportion of variability in the outcome variable explained by the model.**
Since $r$ is always between -1 and 1, $R^2$ will always be between 0 and 1.
This statistic is called the **coefficient of determination**, and it measures the proportion of variation in the outcome variable, $y,$ that can be explained by the linear model with predictor $x.$
:::
More generally, $R^2$ can be calculated as a ratio of a measure of variability around the line divided by a measure of total variability.
::: {.important data-latex=""}
**Sums of squares to measure variability in** $y.$
We can measure the variability in the $y$ values by how far they tend to fall from their mean, $\bar{y}.$ We define this value as the **total sum of squares**, calculated using the formula below, where $y_i$ represents each $y$ value in the sample, and $\bar{y}$ represents the mean of the $y$ values in the sample.
$$
SST = (y_1 - \bar{y})^2 + (y_2 - \bar{y})^2 + \cdots + (y_n - \bar{y})^2.
$$
Left-over variability in the $y$ values if we know $x$ can be measured by the **sum of squared errors**, or sum of squared residuals, calculated using the formula below, where $\hat{y}_i$ represents the predicted value of $y_i$ based on the least squares regression.[^model-slr-9]
$$
\begin{aligned}
SSE &= (y_1 - \hat{y}_1)^2 + (y_2 - \hat{y}_2)^2 + \cdots + (y_n - \hat{y}_n)^2\\
&= e_{1}^2 + e_{2}^2 + \dots + e_{n}^2
\end{aligned}
$$
The coefficient of determination can then be calculated as
$$
R^2 = \frac{SST - SSE}{SST} = 1 - \frac{SSE}{SST}
$$
:::
[^model-slr-9]: The difference $SST - SSE$ is called the **regression sum of squares**, $SSR,$ and can also be calculated as $SSR = (\hat{y}_1 - \bar{y})^2 + (\hat{y}_2 - \bar{y})^2 + \cdots + (\hat{y}_n - \bar{y})^2.$ $SSR$ represents the variation in $y$ that was accounted for in our model.
```{r include=FALSE}
terms_chp_7 <- c(terms_chp_7, "total sum of squares", "sum of squared error")
```
::: {.workedexample data-latex=""}
Among 50 students in the `elmhurst` dataset, the total variability in gift aid is $SST = 1461.$[^model-slr-10] The sum of squared residuals is $SSE = 1098.$ Find $R^2.$
------------------------------------------------------------------------
Since we know $SSE$ and $SST,$ we can calculate $R^2$ as $$R^2 = 1 - \frac{SSE}{SST} = 1 - \frac{1098}{1461} = 0.25,$$ the same value we found when we squared the correlation: $R^2 = (-0.499)^2 = 0.25.$
:::
[^model-slr-10]: $SST$ can be calculated by finding the sample variance of the outcome variable, $s^2$ and multiplying by $n-1.$
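As a closing sketch, the sums of squares in this example can be computed directly from their definitions.
```{r echo = TRUE}
# Sketch: SST, SSE, and R^2 for the Elmhurst data computed from definitions.
m_elmhurst <- lm(gift_aid ~ family_income, data = elmhurst)
SST <- sum((elmhurst$gift_aid - mean(elmhurst$gift_aid))^2)
SSE <- sum(resid(m_elmhurst)^2)
c(SST = SST, SSE = SSE, R2 = 1 - SSE / SST)
```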