---
layout: post
title: "A gentle introduction to dynamical systems theory"
date: 2020-12-17 13:30:00 +0100
categories: R
status: process
published: true
# status: development
# published: false
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
echo = TRUE, warning = FALSE, message = FALSE, fig.align = 'center',
fig.width = 10, fig.height = 5, dpi = 400
)
```
Dynamical systems theory provides a unifying framework for studying how systems as disparate as the climate and the behaviour of humans change over time. In this blog post, I provide an introduction to some of its core concepts. Since the study of dynamical systems is vast, I will barely scratch the surface, focusing on low-dimensional systems that, while rather simple, nonetheless show interesting properties such as multiple stable states, critical transitions, hysteresis, and critical slowing down.
While I have previously written about linear differential equations (in the context of [love affairs](https://fabiandablander.com/r/Linear-Love.html)) and nonlinear differential equations (in the context of [infectious diseases](https://fabiandablander.com/r/Nonlinear-Infection.html)), this post provides a gentler introduction. If you have not been exposed to dynamical systems theory before, you may find this blog post more accessible than the other two.
The bulk of this blog post may be read as a preamble to Dablander, Pichler, Cika, & Bacilieri ([2020](https://psyarxiv.com/5wc28)), who provide an in-depth discussion of early warning signals and critical transitions. I recently gave a talk on this work and had the chutzpah to have it be [recorded](https://www.youtube.com/watch?v=055Ou_aqKUQ) (with slides available from [here](https://fabiandablander.com/assets/talks/Early-Warning.html)). The first thirty minutes or so cover part of what is explained here, in case you prefer frantic hand movements to the calming written word. But without any further ado, let's dive in!
# Differential equations
Dynamical systems are systems that change over time. The dominant way of modeling how such systems change is by means of differential equations. Differential equations relate the rate of change of a quantity $x$ --- which is given by the time derivative $\frac{\mathrm{d}x}{\mathrm{d}t}$ --- to the quantity itself:
$$
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x) \enspace .
$$
If we knew the function $f$, then this differential equation would give us the rate of change for any value of $x$. We are not particularly interested in this rate of change per se, however, but in the value of $x$ as a function of time $t$. We call the function $x(t)$ the *solution* of the differential equation. Most differential equations cannot be solved analytically, that is, we cannot get a closed-form expression for $x(t)$. Instead, differential equations are frequently solved numerically.
How the system changes as a function of time, given by $x(t)$, is implicitly encoded in the differential equation. This is because, given any particular value of $x$, $f(x)$ tells us in which direction $x$ will change, and how quickly. It is this fact that we exploit when numerically solving differential equations. Specifically, given an initial condition $x_0 \equiv x(t = 0)$, $f(x_0)$ tells us in which direction and how quickly the system is changing. This suggests the following approximation method:
$$
x_{n + 1} = x_n + \Delta_t \cdot f(x_n) \enspace ,
$$
where $n$ indexes the sequence $\{x_0, x_1, \ldots\}$ and $\Delta_t$ is the time that passes between two iterations. This is the most primitive way of numerically solving differential equations, known as [Euler's method](https://en.wikipedia.org/wiki/Euler_method), but it will do for this blog post. The derivative $\frac{\mathrm{d}x}{\mathrm{d}t}$ tells us how $x$ changes in an *infinitesimally* small time interval, and so for sufficiently small $\Delta_t$ we can get a good approximation of $x(t)$. In computer code, Euler's method looks something like this:
```{r}
# Numerically solve dx/dt = f(x) with Euler's method
# (note that this masks base::solve)
solve <- function(f, x0, delta_t = 0.01, nr_iter = 1000) {
  xt <- numeric(nr_iter + 1)  # pre-allocate; xt[1] holds the initial condition
  xt[1] <- x0
  for (n in seq(2, nr_iter + 1)) {
    xt[n] <- xt[n - 1] + delta_t * f(xt[n - 1])
  }
  time <- seq(0, nr_iter * delta_t, delta_t)
  cbind(time, xt)
}
```
The equation above is a deterministic differential equation --- randomness does not enter the picture. If one knows the initial condition, then one can perfectly predict the state of the system at any time point $t$.[^1] In the next few sections, we use simple differential equations to model how a population grows over time.
# Modeling population growth
Relatively simple differential equations can lead to surprisingly intricate behaviour. Over the next few sections, we will discover this by slowly extending a simple model for population growth.
## Exponential growth
In his 1798 *Essay on the Principle of Population*, Thomas Malthus noted the problems that may come about when the growth of a population is proportional to its size.[^2] Letting $N$ be the number of individuals in a population, we may formalize such a growth process as:
$$
\frac{\mathrm{d}N}{\mathrm{d}t} = r N \enspace ,
$$
which states that the change in population size is proportional to itself, with $r > 0$ being a parameter indicating the growth rate. Using $r = 1$, the left panel in the figure below visualizes this *linear* differential equation. The right panel visualizes its solutions, that is, the number of individuals as a function of time, $N(t)$, for three different initial conditions.
```{r, echo = FALSE}
library('rootSolve')
library('latex2exp')
library('RColorBrewer')
par(mfrow = c(1, 2))
N <- seq(0, 100, 1)
x <- seq(0, 10, .01)
t <- seq(-10, 10, .01)
plot(
N, N, type = 'l', axes = FALSE,
# main = TeX('$\\frac{dx}{dt} = rx \\, \\left(1 - \\frac{x}{K} \\, \\right)$'), ylab = '', lwd = 2
main = 'Linear equation', ylab = '', lwd = 2, font.main = 1, xlab = 'N'
)
points(0, 0, cex = 1.5)
points(0, 0, pch = 20, col = 'white', cex = 2)
axis(1)
axis(2, las = 2)
mtext(side = 2, text = TeX('$$\\frac{dN}{dt}$$'), line = 2)
cols <- rev(RColorBrewer::brewer.pal(3, 'Set1'))
ts <- seq(0, 5, .01)
plot(
ts, 2*exp(ts), type = 'l', axes = FALSE,
col = cols[1], xlab = 'Time t',
# main = TeX('$x(t) = \\frac{K}{1 + \\frac{K - x_0}{x_0} \\, e^{-rt}}$'), ylab = 'x(t)', lwd = 2, ylim = c(0, 10)
main = 'Solutions N(t)', ylab = 'N(t)', lwd = 2, ylim = c(0, 100), font.main = 1
)
lines(ts, 4*exp(ts), col = cols[2], lwd = 2)
lines(ts, 8*exp(ts), col = cols[3], lwd = 2)
axis(1)
axis(2, las = 2)
legend(
'bottomright', lty = c(1, 1, 1),
legend = c(expression(N[0] ~ ' = 2'), expression(N[0] ~ ' = 4'), expression(N[0] ~ ' = 8')),
cex = 1, lwd = 1.5, col = cols, box.lty = 0, bty = 'n'
)
```
As can be seen, the population grows exponentially, without limits. While differential equations cannot be solved analytically in general, linear differential equations can. In our case (with $r = 1$), the solution is given by $N(t) = N_0 e^{t}$, as derived in a [previous](https://fabiandablander.com/r/Linear-Love.html) blog post.
You can reproduce the trajectories shown in the right panel by using our solver from above:
```{r, eval = FALSE}
malthus <- function(x) x
solution_malthus <- solve(f = malthus, x0 = 2, nr_iter = 500, delta_t = 0.01)
```
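To check the accuracy of the numerical solution, we can compare the Euler trajectory against the analytic solution $N(t) = N_0 e^{t}$. A self-contained sketch, repeating the Euler update inline so the snippet runs on its own:

```{r, eval = FALSE}
# Euler's method for dN/dt = N (the Malthus model with r = 1), starting at N0 = 2
delta_t <- 0.01
nr_iter <- 500
nt <- numeric(nr_iter + 1)
nt[1] <- 2
for (i in seq_len(nr_iter)) {
  nt[i + 1] <- nt[i] + delta_t * nt[i]
}
time <- seq(0, nr_iter * delta_t, delta_t)
analytic <- 2 * exp(time)
max(abs(nt - analytic) / analytic)  # relative error of roughly 0.025 at t = 5
```

The error shrinks as $\Delta_t$ decreases; halving $\Delta_t$ roughly halves the error, as expected for a first-order method.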
We can inquire about qualitative features of dynamical systems models. One key feature is the set of *equilibrium points*, that is, points at which the system does not change. Denoting such a point by $N^{\star}$, we have formally:
$$
\frac{\mathrm{d}N^{\star}}{\mathrm{d}t} = f(N^{\star}) = 0 \enspace .
$$
In our model, the only equilibrium point is $N = 0$. Equilibrium points --- also called fixed points --- can be *stable* or *unstable*. A system at a stable equilibrium point returns to it after a small, exogenous perturbation; a system at an unstable equilibrium point moves away from it after such a perturbation. $N = 0$ is an unstable equilibrium point, indicated by the white circle in the left panel above. In other words, if the population size is zero and we were to add some individuals, then the population would grow exponentially rather than die out.
## Units and time scales
In a differential equation, the units of the left-hand side must match the units of the right-hand side. In our example above, the left-hand side is given in population per unit of time, and so the right-hand side must also be in population per unit of time. Since $N$ is given in population, $r$ must be a rate, that is, have units $1 / \text{time}$. This brings us to a key question when dealing with dynamical systems models: what is the time scale of the system?
The model cannot by itself provide an appropriate time scale. In our case, it clearly depends on whether we are looking at, say, a population of bacteria or rabbits. We can provide the system with a time scale by appropriately changing $r$. Take the bacterium *Escherichia coli*, which can double every 20 minutes. We know from above that this means exponential growth:
$$
N(t) = N_0 e^{r t} \enspace.
$$
Supposing that we start with two bacteria, $N_0 = 2$, and measure time in units of twenty minutes, the value of $r$ that doubles the population after one time unit ($t = 1$) is given by:
$$
\begin{aligned}
4 &= 2 e^{r_{\text{coli}}} \\[0.5em]
r_{\text{coli}} &= \text{log }2 \enspace ,
\end{aligned}
$$
where this growth rate is with respect to twenty minutes. To get this per minute, we write $r_{\text{coli}} = \text{log }2 / 20$, resulting in the following differential equation for the population growth of *Escherichia coli*:
$$
\frac{\mathrm{d}N_{\text{coli}}}{\mathrm{d}t} = \frac{\text{log }2}{20} N_{\text{coli}} \enspace ,
$$
where the unit of time is now minutes. What about a population of rabbits? They grow much more slowly, of course. Suppose they take three months to double in population size (but see [here](https://fabiandablander.com/r/Fibonacci.html)). This also yields a rate $r_{\text{rabbits}} = \text{log }2$, but with respect to three months. To get this in minutes, we assume that one month has 30 days and write
$$
r_{\text{rabbits}} = \text{log }2 / (3 \times 30 \times 24 \times 60) = \text{log }2 / 129600 \enspace.
$$
This yields the following differential equation for the growth of a population of rabbits:
$$
\frac{\mathrm{d}N_{\text{rabbits}}}{\mathrm{d}t} = \frac{\text{log }2}{129600} N_{\text{rabbits}} \enspace .
$$
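We can sanity-check these growth rates: plugging one doubling time into the solution $N(t) = N_0 e^{rt}$ should exactly double the population. A minimal sketch using the rates derived above:

```{r, eval = FALSE}
r_coli <- log(2) / 20          # growth rate per minute for E. coli
r_rabbits <- log(2) / 129600   # growth rate per minute for rabbits (three 30-day months)
N <- function(t, r, N0 = 2) N0 * exp(r * t)
N(t = 20, r = r_coli)          # 4: doubled after twenty minutes
N(t = 129600, r = r_rabbits)   # 4: doubled after three months
```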
The figure below contrasts the growth of *Escherichia coli* (left panel) with the growth of rabbits (right panel). Unsurprisingly, we see that rabbits increase in population much more slowly than *Escherichia coli* --- compare the $x$-axes![^3]
```{r, echo = FALSE}
par(mfrow = c(1, 2))
ts <- seq(0, 120, .01)
plot(
ts, 2*exp(log(2) * ts / 20), type = 'l', axes = FALSE,
col = cols[1], xlab = 'Minutes',
# main = TeX('$x(t) = \\frac{K}{1 + \\frac{K - x_0}{x_0} \\, e^{-rt}}$'), ylab = 'x(t)', lwd = 2, ylim = c(0, 10)
main = TeX('Solutions $N_{coli}(t)$'), ylab = TeX('$N_{coli}(t)$'), lwd = 2, ylim = c(0, 100), font.main = 1
)
lines(ts, 4*exp(log(2) * ts/20), col = cols[2], lwd = 2)
lines(ts, 8*exp(log(2) * ts/20), col = cols[3], lwd = 2)
axis(1)
axis(2, las = 2)
legend(
'bottomright', lty = c(1, 1, 1),
legend = c(expression(N[0] ~ ' = 2'), expression(N[0] ~ ' = 4'), expression(N[0] ~ ' = 8')),
cex = 1, lwd = 1.5, col = cols, box.lty = 0, bty = 'n'
)
ts <- seq(0, 129600 * 6, 10)
plot(
ts, 2*exp(log(2) * ts / 129600), type = 'l', axes = FALSE,
col = cols[1], xlab = 'Minutes',
# main = TeX('$x(t) = \\frac{K}{1 + \\frac{K - x_0}{x_0} \\, e^{-rt}}$'), ylab = 'x(t)', lwd = 2, ylim = c(0, 10)
main = TeX('Solutions $N_{rabbits}(t)$'), ylab = TeX('$N_{rabbits}(t)$'), lwd = 2, ylim = c(0, 100), font.main = 1
)
lines(ts, 4*exp(log(2) * ts / 129600), col = cols[2], lwd = 2)
lines(ts, 8*exp(log(2) * ts / 129600), col = cols[3], lwd = 2)
axis(1, at = seq(0, 800000, 200000), labels = c('0', '2e6', '4e6', '6e6', '8e6'))
axis(2, las = 2)
legend(
'bottomright', lty = c(1, 1, 1),
legend = c(expression(N[0] ~ ' = 2'), expression(N[0] ~ ' = 4'), expression(N[0] ~ ' = 8')),
cex = 1, lwd = 1.5, col = cols, box.lty = 0, bty = 'n'
)
```
This exponential growth model assumes that populations grow indefinitely, without limits. This is arguably incorrect, as Pierre-François Verhulst, a Belgian number theorist, was quick to point out.
## Sigmoidal population growth
A population cannot grow indefinitely because its growth depends on resources, which are finite. To account for this, Pierre-François Verhulst introduced a model with a *carrying capacity* $K$ in 1838, which gives the maximum size of the population that can be sustained given resource constraints (e.g., Bacaër, [2011](https://www.springer.com/gp/book/9780857291141), pp. 35-39). He wrote:
$$
\frac{\mathrm{d}N}{\mathrm{d}t} = r N \left(1 - \frac{N}{K}\right) \enspace .
$$
This equation is *nonlinear* in $N$ and is known as the *logistic equation*. If $K > N$, then $0 < (1 - N / K) < 1$, which slows down the growth of $N$. If on the other hand $N > K$, then the population needs more resources than are available, the growth rate becomes negative, and the population shrinks.
The equation above has particular units. For example, $N$ gives the number of individuals in a population, be it bacteria or rabbits, and $K$ counts the maximum number of individuals that can be sustained given the available resources. Similarly, $r$ is a rate with respect to minutes or months, for example. For the purposes of this blog post, we are interested in general properties of this system and extensions thereof, rather than in modeling any particular real-world system. Therefore, we want to get rid of the parameters $K$ and $r$, which are specific to a particular population (say bacteria or rabbits).
We can eliminate $K$ by reformulating the differential equation in terms of $x = \frac{N}{K}$, which equals $1$ when the population is at the carrying capacity. Differentiating $N = K x$ yields $\mathrm{d}N = K \cdot \mathrm{d}x$, which when plugged into the system gives:
$$
\begin{aligned}
K \cdot \frac{\mathrm{d}x}{\mathrm{d}t} &= r N \left(1 - \frac{N}{K}\right) \\[0.5em]
\frac{\mathrm{d}x}{\mathrm{d}t} &= r \frac{N}{K} \left(1 - \frac{N}{K}\right) \\[0.5em]
\frac{\mathrm{d}x}{\mathrm{d}t} &= rx \left(1 - x\right) \enspace .
\end{aligned}
$$
Both $N$ and $K$ count the number of individuals (e.g., bacteria or rabbits) in the population, and their ratio $x$ is unit- or dimensionless. For example, $x = 0.50$ means that the population is at half the size that can be sustained at carrying capacity, and we do not need to know the exact number of individuals $N$ and $K$ for this statement to make sense.
In other words, we have *non-dimensionalized* the differential equation, at least in terms of $x$. We can also remove the dimension of time (whether it is minutes or months, for example) by making the change of variables $\tau = t r$. Since $t$ is given in units of time, and $r$ is given in inverse units of time since it is a rate, $\tau$ is dimensionless. Differentiating $\tau = t r$ yields $\mathrm{d}t = \frac{1}{r}\mathrm{d}\tau$, which plugged in gives:
$$
\begin{aligned}
\frac{\mathrm{d}x}{\left(\frac{1}{r}\mathrm{d}\tau\right)} &= r x (1 - x) \\[0.5em]
r \frac{\mathrm{d}x}{\mathrm{d}\tau} & = r x (1 - x) \\[0.5em]
\frac{\mathrm{d}x}{\mathrm{d}\tau} & = x (1 - x) \enspace .
\end{aligned}
$$
This got rid of another parameter, $r$, and hence simplifies subsequent analysis. The differential equation now tells us how the population relative to carrying capacity ($x$) changes per unit of dimensionless time ($\tau$). The left panel below shows the dimensionless logistic equation, while the right panel shows its solution for three different initial conditions.[^4]
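We can also verify the change of variables numerically: simulating the dimensional equation for some particular $r$ and $K$, and the dimensionless equation with rescaled initial condition and step size, should give the same trajectory. A sketch in which $r = 2$ and $K = 50$ are arbitrary illustrative values:

```{r, eval = FALSE}
# minimal Euler stepper, as before
euler <- function(f, x0, dt, nr_iter) {
  x <- numeric(nr_iter + 1)
  x[1] <- x0
  for (i in seq_len(nr_iter)) x[i + 1] <- x[i] + dt * f(x[i])
  x
}
r <- 2; K <- 50  # arbitrary dimensional parameters
N <- euler(function(N) r * N * (1 - N / K), x0 = 5, dt = 0.005, nr_iter = 1000)
# dimensionless version: x0 = N0 / K and step dtau = r * dt
x <- euler(function(x) x * (1 - x), x0 = 5 / K, dt = r * 0.005, nr_iter = 1000)
max(abs(N / K - x))  # essentially zero: the trajectories coincide
```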
```{r, echo = FALSE}
logistic <- function(x, r = 1, K = 1) { r*x * (1 - x/K) }
sigmoid <- function(x, r = 1, K = 1) { K / (1 + exp(-r*x))}
sigmoid_solution <- function(x, x0, r = 1, K = 1) { K / (1 + (K - x0)/x0 * exp(-r*x))}
x <- seq(0, 1, 0.01)
par(mfrow = c(1, 2))
plot(
x, logistic(x), type = 'l', axes = FALSE,
# main = TeX('$\\frac{dx}{dt} = rx \\, \\left(1 - \\frac{x}{K} \\, \\right)$'), ylab = '', lwd = 2
main = 'Logistic equation', ylab = '', lwd = 2, font.main = 1
)
points(0, 0, cex = 1.5)
points(0, 0, pch = 20, col = 'white', cex = 2)
points(1, 0, pch = 20, col = 'gray', cex = 2)
points(1, 0, cex = 1.5)
axis(1)
axis(2, las = 2)
mtext(side = 2, text = TeX('$\\frac{dx}{d\\tau}$'), line = 2)
ts <- seq(0, 10, .01)
plot(
ts, sigmoid_solution(ts, 0.01), type = 'l', axes = FALSE,
col = cols[1], xlab = expression('Time ' ~ tau),
# main = TeX('$x(t) = \\frac{K}{1 + \\frac{K - x_0}{x_0} \\, e^{-rt}}$'), ylab = 'x(t)', lwd = 2, ylim = c(0, 10)
main = TeX('Solutions x($\\tau$)'), ylab = TeX('x($\\tau$)'), lwd = 2, ylim = c(0, 1), font.main = 1
)
lines(ts, sigmoid_solution(ts, 0.6), col = cols[2], lwd = 2)
lines(ts, sigmoid_solution(ts, 0.2), col = cols[3], lwd = 2)
axis(1)
axis(2, las = 2)
legend(
'bottomright', lty = c(1, 1, 1),
legend = c(expression(x[0] ~ ' = 0.01'), expression(x[0] ~ ' = 0.20'), expression(x[0] ~ ' = 0.60')),
cex = 1, lwd = 1.5, col = cols, box.lty = 0, bty = 'n'
)
```
You can again reproduce the solutions by running:
```{r, eval = FALSE}
verhulst <- function(x) x * (1 - x)
solution_verhulst <- solve(f = verhulst, x0 = 0.01, nr_iter = 1000, delta_t = 0.01)
```
In contrast to the exponential population growth in the previous model, this model shows a sigmoidal growth that hits its ceiling at carrying capacity, $x = N / K = 1$.
We can again analyze the equilibrium points of this system. In addition to the unstable fixed point at $x^{\star} = 0$, the model also has a stable fixed point at $x^{\star} = N / K = 1$, which is indicated by the gray circle in the left panel. Why is this point stable? Looking at the left panel above, we see that if we were to decrease the population size below the carrying capacity, then $\frac{\mathrm{d}x}{\mathrm{d}\tau} > 0$, and hence the population increases towards $x^{\star} = 1$. If, on the other hand, we were to increase the population size above the carrying capacity, $x > 1$, then $\frac{\mathrm{d}x}{\mathrm{d}\tau} < 0$ (not shown in the graph), and so the population size decreases towards $x^{\star} = 1$.
Given any initial condition $x_0 > 0$, the system moves towards the stable equilibrium point $x^{\star} = 1$. This initial movement is a *transient phase*. Once this phase is over, the system stays at the stable fixed point forever (unless perturbations move it away). I will come back to transients in a later section.
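The graphical stability argument can also be made quantitative via *linearization*: a fixed point $x^{\star}$ is stable if the slope of the right-hand side is negative there, $f'(x^{\star}) < 0$, and unstable if it is positive. A quick numerical sketch for the dimensionless logistic equation:

```{r, eval = FALSE}
verhulst <- function(x) x * (1 - x)
# central-difference approximation of the derivative f'(x)
fprime <- function(f, x, h = 1e-6) (f(x + h) - f(x - h)) / (2 * h)
fprime(verhulst, 0)  # 1 > 0: x* = 0 is unstable
fprime(verhulst, 1)  # -1 < 0: x* = 1 is stable
```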
While we have improved on the exponential growth model by encoding [limits to growth](https://www.youtube.com/watch?v=kz9wjJjmkmc), many populations are subject to another force that constrains their growth: predation. In the next section, we will extend the model to allow for predation.
## Population growth under predation
<!-- Most animals get eaten by other animals, and so the population size of a particular *prey* is influenced by *predators*. -->
In a classic article, Robert May ([1977](https://www.nature.com/articles/269471a0)) studied the following model:
$$
\frac{\mathrm{d}x}{\mathrm{d}\tau} = \underbrace{x \left(1 - x\right)}_{\text{Logistic term}} - \underbrace{\gamma \frac{x^2}{\alpha^2 + x^2}}_{\text{Predation term}} \enspace ,
$$
which includes a predation term that depends nonlinearly on the population size $x$.[^5] This term tells us, for any population size $x$, how strong the pressure on the population due to predation is. The parameter $\alpha$ gives the half-saturation point, that is, the population size at which the predation term reaches half its maximum. If this value is low, the extent of predation rises rapidly as the population grows. To see this, the left panel in the figure below visualizes the predation term for different values of $\alpha$, fixing $\gamma = 0.50$. The parameter $\gamma$, on the other hand, gives the maximal extent of predation. Fixing $\alpha = 0.10$, the right panel shows how the extent of predation increases with $\gamma$.
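Two properties of the predation term are worth confirming numerically: it reaches exactly half its maximum at $x = \alpha$, and it saturates towards $\gamma$ as the population grows. A small sketch:

```{r, eval = FALSE}
pred <- function(x, gamma = 0.50, alpha = 0.10) gamma * x^2 / (alpha^2 + x^2)
pred(0.10)  # 0.25, i.e. gamma / 2: alpha marks the half-saturation point
pred(100)   # close to gamma = 0.50: predation saturates for large x
```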
```{r, echo = FALSE}
par(mfrow = c(1, 2))
x <- seq(0, 1, .01)
pred <- function(x, gamma = .5, alpha = 1) {
gamma * x^2 / (alpha^2 + x^2)
}
plot(
x, pred(x, alpha = 0.10), type = 'l', lwd = 1.5,
axes = FALSE, col = cols[1], ylab = 'Predation term', main = expression(gamma ~ ' = 0.50')
)
lines(x, pred(x, alpha = 0.25), lwd = 1.5, col = cols[2])
lines(x, pred(x, alpha = 0.50), lwd = 1.5, col = cols[3])
axis(1)
axis(2, las = 2)
legend(
'bottomright', lty = c(1, 1, 1),
legend = c(expression(alpha ~ ' = 0.10'), expression(alpha ~ ' = 0.25'), expression(alpha ~ ' = 0.50')),
cex = 0.90, lwd = 1.5, col = cols, box.lty = 0, bty = 'n'
)
plot(
x, pred(x, gamma = 0.10, alpha = 0.10), type = 'l', lwd = 1.5, ylim = c(0, .5),
axes = FALSE, col = cols[1], ylab = 'Predation term', main = expression(alpha ~ ' = 0.10')
)
lines(x, pred(x, gamma = 0.25, alpha = 0.10), lwd = 1.5, col = cols[2])
lines(x, pred(x, gamma = 0.50, alpha = 0.10), lwd = 1.5, col = cols[3])
axis(1)
axis(2, las = 2)
legend(
'bottomright', lty = c(1, 1, 1),
legend = c(expression(gamma ~ ' = 0.10'), expression(gamma ~ ' = 0.25'), expression(gamma ~ ' = 0.50')),
cex = 0.90, lwd = 1.5, col = cols, box.lty = 0, bty = 'n'
)
```
As we have discussed before, the units of the left-hand side need to match the units of the right-hand side. As our earlier excursion into nondimensionalization established, the logistic equation in terms of $\frac{\mathrm{d}x}{\mathrm{d}\tau}$ is dimensionless: it tells us how the population relative to carrying capacity ($x$) changes per unit of dimensionless time ($\tau$). The predation term we added must therefore also be dimensionless, because summing quantities of different units is meaningless. In order for $\alpha^2 + x^2$ to make sense, $\alpha$ must likewise be given in population relative to carrying capacity, that is, it must be dimensionless. The parameter $\gamma$ must be given in population relative to carrying capacity per unit of dimensionless time; we can interpret it as the maximum proportion of individuals (relative to carrying capacity) that predation can theoretically remove per unit of dimensionless time, attained when $\alpha = 0$ and $x = 1$. What a mouthful! Maybe we should have kept the original dimensions? But that would have left us with more parameters! Fearless and undeterred, we move on. For simplicity of analysis, however, we fix $\alpha = 0.10$ for the remainder of this blog post.
Now that we have the units straight, let's continue with the analysis of the system. We are interested in finding the equilibrium points $x^{\star}$, and we could do this algebraically by solving the following for $x$:
$$
\begin{aligned}
0 &= x \left(1 - x\right) - \gamma \frac{x^2}{0.10^2 + x^2} \\
x \left(1 - x\right) &= \gamma \frac{x^2}{0.10^2 + x^2} \enspace ,
\end{aligned}
$$
However, we can also find the equilibrium points graphically, by visualizing both the left-hand and the right-hand side and seeing where the two lines intersect. Importantly, the intersections will depend on the parameter $\gamma$. The left panel in the figure below illustrates this for three different values of $\gamma$. For all values of $\gamma$, there exists an unstable equilibrium point at $x^{\star} = 0$. If $\gamma = 0$, then the predation term vanishes and we get back the logistic equation, which has a stable equilibrium point at $x^{\star} = 1$. For a low predation rate $\gamma = 0.10$, this stable equilibrium point gets shifted below the carrying capacity and settles at $x^{\star} = 0.89$. Astonishingly, for the intermediate value $\gamma = 0.22$, two stable equilibrium points emerge, one at $x^{\star} = 0.06$ and one at $x^{\star} = 0.69$, separated by an unstable equilibrium point at $x^{\star} = 0.26$. For $\gamma = 0.30$, the stable and unstable equilibrium points have vanished, and a single stable equilibrium point at a very low population size $x^{\star} = 0.04$ remains.
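These equilibria can also be found numerically. The figures in this post use `rootSolve::uniroot.all`; the sketch below does the same with base R only, scanning a grid for sign changes of $f$ and refining each with `uniroot`:

```{r, eval = FALSE}
may <- function(x, gamma, alpha = 0.10) x * (1 - x) - gamma * x^2 / (alpha^2 + x^2)
# locate equilibria: exact zeros on the grid plus refined sign changes
equilibria <- function(gamma, grid = seq(0, 1, by = 0.001)) {
  fx <- may(grid, gamma)
  roots <- grid[fx == 0]  # catches the equilibrium at x = 0
  for (i in which(fx[-1] * fx[-length(fx)] < 0)) {
    roots <- c(roots, uniroot(may, c(grid[i], grid[i + 1]), gamma = gamma)$root)
  }
  sort(roots)
}
equilibria(0.10)  # two equilibria
equilibria(0.22)  # four equilibria
```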
```{r, echo = FALSE}
bifurcation <- function(equation, cs, x = seq(0, 1, .001), r = 1, K = 1) {
n <- length(cs)
roots <- matrix(NA, nrow = n, ncol = 4)
for (i in seq(n)) {
root <- uniroot.all(function(x) equation(x, cs[i], r = r, K = K), c(0, 1))
roots[i, ] <- c(rep(NA, 4 - length(root)), root)
}
roots
}
grazing_sn <- function(c, x, a = 0.1) { c * x^2 / (a^2 + x^2) }
logistic_sn <- function(x, cparam, r = 1, K = 1, a = 0.1) r*x * (1 - x/K) - cparam * x^2 / (a^2 + x^2)
par(mfrow = c(1, 2))
plot(
x, logistic(x), type = 'l', axes = FALSE, ylim = c(0, 0.40),
main = 'Logistic equation with predation', ylab = '', lwd = 2, font.main = 1
)
axis(1)
axis(2, las = 2, at = seq(0, 0.40, 0.1))
mtext(side = 2, text = TeX('$\\frac{dx}{d\\tau}$'), line = 2)
c1 <- 0.10
c2 <- 0.22
c3 <- 0.30
lines(x, grazing_sn(c1, x), lwd = 1.5, lty = 4, col = cols[1])
lines(x, grazing_sn(c2, x), lwd = 1.5, lty = 3, col = cols[2])
lines(x, grazing_sn(c3, x), lwd = 1.5, lty = 2, col = cols[3])
lines(x, logistic(x), lwd = 2, col = 'black')
# add the fixed points
p1 <- uniroot.all(function(x) logistic_sn(x, c = c1), interval = c(0, 1))[-1]
p2 <- uniroot.all(function(x) logistic_sn(x, c = c2), interval = c(0, 1))[-1]
p3 <- uniroot.all(function(x) logistic_sn(x, c = c3), interval = c(0, 1))[-1]
points(0, 0, pch = 20, col = 'white', cex = 2)
points(0, 0, cex = 1.5)
points(p1, grazing_sn(c1, p1), pch = 20, col = 'gray', cex = 2)
points(p1, grazing_sn(c1, p1), cex = 1.6, col = cols[1])
points(p2[3], grazing_sn(c2, p2[3]), pch = 20, col = 'gray', cex = 2)
points(p2[3], grazing_sn(c2, p2[3]), cex = 1.5, col = cols[2])
points(p2[2], grazing_sn(c2, p2[2]), pch = 20, col = 'white', cex = 2)
points(p2[2], grazing_sn(c2, p2[2]), cex = 1.5, col = cols[2])
points(p2[1], grazing_sn(c2, p2[1]), pch = 20, col = 'gray', cex = 2)
points(p2[1], grazing_sn(c2, p2[1]), cex = 1.5, col = cols[2])
# ps <- mean(p3[c(1, 3)])
# points(ps, grazing_sn(c3, ps), pch = 20, col = 'gray', cex = 2)
# points(ps, grazing_sn(c3, ps), cex = 1.5)
points(p3, grazing_sn(c3, p3), pch = 20, col = 'gray', cex = 2)
points(p3, grazing_sn(c3, p3), cex = 1.5, col = cols[3])
points(p3[2], grazing_sn(c3, p3[2]), pch = 20, col = 'gray', cex = 2)
points(p3[2], grazing_sn(c3, p3[2]), cex = 1.5, col = cols[3])
legend(
'topleft', lty = c(4, 3, 2),
legend = c(expression(gamma ~ ' = 0.10'), expression(gamma ~ ' = 0.22'),
expression(gamma ~ ' = 0.30')), lwd = 1.5,
col = cols, box.lty = 0, bty = 'n'
)
gammas <- seq(0, 0.40, length.out = 1000)
roots <- bifurcation(logistic_sn, gammas)
stable_ix1 <- which(roots[, 4] > max(roots[, 2], na.rm = TRUE))
stable_ix2 <- which(roots[, 4] < min(roots[, 2], na.rm = TRUE))[-1]
unstable_ix <- which(roots[, 2] > min(roots[, 2], na.rm = TRUE) & roots[, 2] < max(roots[, 2], na.rm = TRUE))
plot(
gammas, roots[, 2], type = 'l', ylim = c(0, 1), lwd = 1.5, xlim = c(0, 0.40),
axes = FALSE, xlab = expression(gamma), ylab = '', main = 'Bifurcation diagram', font.main = 1
)
lines(gammas[unstable_ix], roots[unstable_ix, 3], lwd = 1.5, lty = 2)
lines(gammas[stable_ix1], roots[stable_ix1, 4], lwd = 1.5)
lines(gammas[stable_ix2], roots[stable_ix2, 4], lwd = 1.5)
axis(1)
axis(2, las = 2)
mtext(expression('Equilibria' ~ x^'*'), side = 2, line = 2.5)
lines(c(0, 0.40), c(0, 0), lty = 2, lwd = 1.5) # Unstable fixed point
# gammas[unstable_ix][which.max(roots[unstable_ix, 3])] # sudden collapse
# gammas[which.max(roots[, 2])] # sudden recovery
lines(c(c1, c1), c(0, 1), lty = 4, col = cols[1], lwd = 2)
lines(c(c2, c2), c(0, 1), lty = 3, col = cols[2], lwd = 2)
lines(c(c3, c3), c(0, 1), lty = 2, col = cols[3], lwd = 2)
points(gammas[unstable_ix][1], roots[unstable_ix, 3][1] - .015, pch = 20, cex = 1.5)
points(gammas[unstable_ix][length(unstable_ix)], roots[unstable_ix, 3][length(unstable_ix)] + .035, pch = 20, cex = 1.5)
```
While the left panel above shows three specific values of $\gamma$, the panel on the right visualizes the stable (solid lines) and unstable (dashed lines) equilibrium points as a function of $\gamma \in (0, 0.40)$. This is known as a *bifurcation diagram* --- it tells us the equilibrium points of the system and their stability as a function of $\gamma$. The black dots indicate the *bifurcation points*, that is, values of $\gamma$ at which the (stability of) equilibrium points change. For this system, we have that a stable and unstable equilibrium point collide and vanish at $\gamma = 0.18$ and $\gamma = 0.26$; while there are many [types of bifurcations](https://en.wikipedia.org/wiki/Bifurcation_theory#Bifurcation_types) a system can exhibit, this type of bifurcation is known as a *saddle-node bifurcation*. The coloured lines indicate the three specific values from the left panel.
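We can also locate the two bifurcation points directly. At a saddle-node bifurcation, a stable and an unstable equilibrium collide, so $f$ has a double root: both $f(x^{\star}) = 0$ and $f'(x^{\star}) = 0$ hold. For the nonzero equilibria, setting $f = 0$ reduces to the cubic $x^3 - x^2 + (\gamma + \alpha^2)x - \alpha^2 = 0$; eliminating $\gamma$ from the two conditions leaves $2x^3 - x^2 + \alpha^2 = 0$, with $\gamma = 2x - 3x^2 - \alpha^2$ at the double root. A sketch in base R:

```{r, eval = FALSE}
alpha <- 0.10
g <- function(x) 2 * x^3 - x^2 + alpha^2  # double-root condition
# g changes sign twice in (0, 1); refine each crossing with uniroot
x_folds <- c(uniroot(g, c(0.01, 0.30))$root, uniroot(g, c(0.30, 1))$root)
gamma_folds <- 2 * x_folds - 3 * x_folds^2 - alpha^2
round(gamma_folds, 2)  # 0.18 and 0.26, the two saddle-node bifurcations
```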
This simple model has a number of interesting properties that we will explore in the next sections. Before we do this, however, we look at another way to visualize the behaviour of the system.
## Potentials
For unidimensional systems one can visualize the dynamics of the system in an intuitive way by using so-called "ball-in-a-cup" diagrams. Such diagrams visualize the *potential function* $V(x)$, which is defined in the following manner:
$$
\frac{\mathrm{d}x}{\mathrm{d}\tau} = -\frac{\mathrm{d}V}{\mathrm{d}x} \enspace .
$$
To obtain $V$, we integrate both sides with respect to $x$, which yields
$$
V(x) = - \int \frac{\mathrm{d}x}{\mathrm{d}\tau} \mathrm{d}x + C \enspace ,
$$
where $C$ is the constant of integration, and the potential is defined only up to an additive constant. Notice that $V$ is a function of $x$, rather than a function of time $\tau$. As we will see shortly, $x$ will be the "ball" in the "cup" or landscape that is carved out by the potential $V$. Setting $C = 0$, the potential for the logistic equation with predation is given by:
$$
V(x) = -\gamma\, \alpha \, \text{tan}^{-1} \left(\frac{x}{\alpha}\right) + \gamma x - \frac{1}{2} x^2 + \frac{1}{3} x^3 \enspace .
$$
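We can convince ourselves that this expression is correct by checking numerically that $-\mathrm{d}V/\mathrm{d}x$ recovers the flow $\mathrm{d}x/\mathrm{d}\tau$. The chunk below does this with a central finite difference at an arbitrary point $x = 0.50$; the functions are redefined so that the chunk runs standalone.

```{r}
may <- function(x, gamma) x * (1 - x) - gamma * x^2 / (0.10^2 + x^2)
potential <- function(x, gamma, alpha = 0.10) {
  -gamma * alpha * atan(x / alpha) + gamma * x - x^2 / 2 + x^3 / 3
}

# Central finite difference of V at x = 0.50 for gamma = 0.22
x <- 0.50
h <- 1e-6
dV <- (potential(x + h, 0.22) - potential(x - h, 0.22)) / (2 * h)

all.equal(-dV, may(x, 0.22)) # TRUE: the flow is minus the slope of V
```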
The figure below visualizes the potentials for three different values of $\gamma$; since the scaling of $V(x)$ is arbitrary, I removed the $y$-axis. The left panel shows the potential for $\gamma = 0.10$, which corresponds to the case where one unstable fixed point at $x^{\star} = 0$ and one stable fixed point at $x^{\star} = 0.89$ exist. We can imagine the population $x$ as a ball in this landscape; if $x = 0$ and we add individuals to the population, then the ball rolls down into the valley whose lowest point is the stable state $x^{\star} = 0.89$.
The rightmost panel shows that under a high predation rate $\gamma = 0.30$, there are again two fixed points, one unstable fixed point at $x^{\star} = 0$ and one stable fixed point at a very low population size $x^{\star} = 0.04$. Wherever we start on this landscape, unless $x_0 = 0$, the population will move towards $x^{\star} = 0.04$.
```{r, echo = FALSE}
potential <- function(x, gamma, alpha = 0.10) {
-gamma * alpha * atan(x / alpha) + gamma * x - x^2 / 2 + x^3 / 3
# gamma * alpha * atan(x / alpha) - x^2 / 2 - x^3 / 3
# 1/6 * (2 * x - 3) * x^2 - gamma * alpha * atan(x / alpha) + gamma * x
}
par(mfrow = c(1, 3))
y1 <- potential(x, gamma = 0.10)
y2 <- potential(x, gamma = 0.22)
y3 <- potential(x, gamma = 0.30)
len <- 0.1 + 0.025
cx <- 1.8
plot(
x, y1, type = 'l', lwd = 1.5, ylim = c(-0.1, 0.025),
axes = FALSE, col = cols[1], ylab = '', main = expression('Potential for ' ~ gamma ~ ' = 0.10'), font.main = 1,
cex.main = cx, cex.lab = cx, cex.axis = cx
)
axis(1)
polygon(c(min(x), x, max(x)), c(-4, y1, -5), col = grDevices::adjustcolor(cols[1], alpha.f = 0.60))
plot(
x, y2, type = 'l', lwd = 1.5, ylim = c(-len + .05, 0.05),
axes = FALSE, col = cols[2], ylab = '', main = expression('Potential for ' ~ gamma ~ ' = 0.22'), font.main = 1,
cex.main = cx, cex.lab = cx, cex.axis = cx
)
axis(1)
polygon(c(min(x), x, max(x)), c(-4, y2, -5), col = grDevices::adjustcolor(cols[2], alpha.f = 0.60))
plot(
x, y3, type = 'l', lwd = 1.5, ylim = c(-0.025, 0.1),
  axes = FALSE, col = cols[3], ylab = '', main = expression('Potential for ' ~ gamma ~ ' = 0.30'), font.main = 1,
cex.main = cx, cex.lab = cx, cex.axis = cx
)
axis(1)
polygon(c(min(x), x, max(x)), c(-4, y3, -5), col = grDevices::adjustcolor(cols[3], alpha.f = 0.60))
```
The middle panel above is the most interesting one. It shows that the potential for $\gamma = 0.22$ exhibits two valleys, corresponding to the stable fixed points $x^{\star} = 0.03$ and $x^{\star} = 0.68$. These two points are separated by a hill, corresponding to an unstable fixed point at $x^{\star} = 0.29$. Depending on the initial condition, the population would either converge to a very low or moderately high stable size. For example, if $x_0 = 0.25$, then we would "roll" towards the left, into the valley whose lowest point corresponds to $x^{\star} = 0.03$. On the other hand, if $x_0 = 0.40$, then individuals can procreate unabated by predation and reach a stable point at $x^{\star} = 0.68$.
Visualizing potentials as "ball-in-a-cup" diagrams is a widespread way to communicate the ideas of stable and unstable equilibrium points. But they suffer from a number of limitations and are a little bit of a gimmick. Potentials generally do not exist for higher-dimensional systems (see Rodríguez-Sánchez, van Nes, & Scheffer, [2020](https://journals.plos.org/ploscompbiol/article?rev=2&id=10.1371/journal.pcbi.1007788), for an approximation).
A necessary requirement for a system to exhibit multiple stable states is the presence of positive feedback mechanisms (e.g., Kefi, Holmgren, & Scheffer [2016](https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2435.12601)). In our example, it may be that, at a sufficient size, individuals in the population can coordinate so as to fight predators better. This would allow them to grow towards the higher stable population size. Below that "coordination" point, however, they cannot help each other efficiently, and predators may have an easy time feasting on them --- the population converges to the smaller stable population size. This constitutes an [Allee effect](https://en.wikipedia.org/wiki/Allee_effect).
Systems that exhibit multiple stable states can show *critical transitions* between them. These transitions are not only notoriously hard to predict, but they can also be hard to reverse, as we will see in the next section.
<!-- Two important features of a dynamical system are its *stability* and *resilience*. There is substantial heterogeneity in how these two terms are used (see e.g., Pimm, 1984; Scheffer et al., 2015). For our purposes here, we define stability as the time it takes the system to return to equilibrium after a small perturbation. Resilience, on the other hand, is defined as the size of the perturbation the system can withstand before going into another equilibrium. Stability is a multidimensional concept, however. -->
<!-- ## Multiple stable states -->
<!-- A necessary condition for the existence of multiple stable states are positive feedback mechanisms. -->
```{r, echo = FALSE, eval = FALSE, dpi = 400}
may <- function(x, gamma) x * (1 - x) - gamma * x^2 / (0.10^2 + x^2)
get_fixedpoints <- function(gamma) {
uniroot.all(f = function(x) may(x, gamma), interval = c(0.01, 1))
}
x <- seq(0, 1, .01)
ps <- get_fixedpoints(0.22)
par(mfrow = c(1, 2))
plot(
x, logistic(x), type = 'l', axes = FALSE, ylim = c(0, 0.40),
main = 'Logistic equation with predation', ylab = '', lwd = 1.5, font.main = 1
)
axis(1)
axis(2, las = 2, at = seq(0, 0.40, 0.1))
mtext(side = 2, text = TeX('$\\frac{dx}{d\\tau}$'), line = 2)
c1 <- 0.10
c2 <- 0.22
c3 <- 0.30
lines(x, grazing_sn(c2, x), lwd = 1.5, lty = 1, col = cols[2])
lines(x, logistic(x), lwd = 2, col = 'black')
# add the fixed points
p1 <- uniroot.all(function(x) logistic_sn(x, c = c1), interval = c(0, 1))[-1]
p2 <- uniroot.all(function(x) logistic_sn(x, c = c2), interval = c(0, 1))[-1]
p3 <- uniroot.all(function(x) logistic_sn(x, c = c3), interval = c(0, 1))[-1]
points(0, 0, pch = 20, col = 'white', cex = 2)
points(0, 0, cex = 1.5)
points(p2[3], grazing_sn(c2, p2[3]), pch = 20, col = 'gray', cex = 2)
points(p2[3], grazing_sn(c2, p2[3]), cex = 1.5)
points(p2[2], grazing_sn(c2, p2[2]), pch = 20, col = 'white', cex = 2)
points(p2[2], grazing_sn(c2, p2[2]), cex = 1.5)
points(p2[1], grazing_sn(c2, p2[1]), pch = 20, col = 'gray', cex = 2)
points(p2[1], grazing_sn(c2, p2[1]), cex = 1.5)
legend(
'topleft', lty = 1,
legend = c(expression(gamma ~ ' = 0.22')), lwd = 1.5,
col = cols[2], box.lty = 0, bty = 'n'
)
plot(
x, may(x, gamma = 0.22), type = 'l', ylim = c(-0.20, 0.10), lwd = 1.5, xlim = c(0, 1), col = cols[2],
axes = FALSE, xlab = 'x', ylab = '', main = c('Logistic equation with predation'), font.main = 1
)
lines(c(0, 1), c(0, 0), lty = 3, lwd = 1) # Unstable fixed point
axis(1)
axis(2, las = 2, at = seq(-0.20, 0.10, 0.10))
mtext(side = 2, text = TeX('$\\frac{dx}{d\\tau}$'), line = 2)
points(0, 0, cex = 1.5)
points(0, 0, pch = 20, col = 'white', cex = 2)
points(p2[1], 0, pch = 20, col = 'gray', cex = 2)
points(p2[1], 0, cex = 1.5)
points(p2[2], 0, cex = 1.5)
points(p2[2], 0, pch = 20, col = 'white', cex = 2)
points(p2[3], 0, pch = 20, col = 'gray', cex = 2)
points(p2[3], 0, cex = 1.5)
legend(
'topleft', lty = 1,
legend = c(expression(gamma ~ ' = 0.22')), lwd = 1.5,
col = cols[2], box.lty = 0, bty = 'n'
)
```
## Critical transitions and hysteresis
What happens if we slowly increase the extent of predation in our toy model? To answer this, we allow for a slowly changing $\gamma$. Formally, we add a differential equation for $\gamma$ to our system:
$$
\begin{aligned}
\frac{\mathrm{d}x}{\mathrm{d}\tau} &= x \left(1 - x\right) - \gamma \frac{x^2}{\alpha^2 + x^2} \\[0.50em]
\frac{\mathrm{d}\gamma}{\mathrm{d}\tau} &= \beta \enspace ,
\end{aligned}
$$
where $\beta > 0$ is a constant. In contrast to the first model we studied, $\frac{\mathrm{d}\gamma}{\mathrm{d}\tau}$ does not itself depend on $\gamma$, and hence will not show exponential growth. Instead, its rate of change is constant at $\beta$, and so $\gamma(\tau)$ is a linear function with slope given by $\beta$.
Note further that the differential equation for $\gamma$ does not feature a term that depends on $x$, which means that it is not influenced by changes in the population size. The differential equation for $x$ obviously includes a term that depends on $\gamma$, and so the population size will be influenced by changes in $\gamma$, as we will see shortly. We can again numerically approximate the system reasonably well when choosing $\Delta_t$ to be small. We add small additive perturbations to $x$ at each time step, writing:
$$
\begin{aligned}
x_{n + 1} &= x_n + \Delta_t \cdot f(x_n, \gamma_n) + \varepsilon_n \\[0.50em]
\gamma_{n + 1} &= \gamma_n + \Delta_t \cdot \beta \enspace ,
\end{aligned}
$$
where $f$ is the logistic equation with predation and $\varepsilon_n \sim \mathcal{N}(0, \sigma)$.[^6] We implement this in R as follows:
```{r}
solve_err <- function(
f, beta, x0, gamma0, delta_t = 0.01,
nr_iter = 1000, sigma = 0.0001
) {
  # Preallocate the full trajectories (initial condition plus nr_iter steps)
  xt <- rep(x0, nr_iter + 1)
  gammat <- rep(gamma0, nr_iter + 1)
  for (n in seq(2, nr_iter + 1)) {
    gammat[n] <- gammat[n-1] + delta_t * beta
    xt[n] <- xt[n-1] + delta_t * f(xt[n-1], gammat[n-1]) + rnorm(1, 0, sigma)
  }
  time <- seq(0, nr_iter * delta_t, delta_t)
  cbind(time, xt, gammat)
}
```
We treat $\gamma$ as a parameter that *slowly* increases from $\gamma = 0$ to $\gamma = 0.40$. We encode the fact that $\gamma$ changes slowly by setting $\beta$ to a small value, in this case $\beta = 0.004$. The average absolute rate of change of $x$ across population sizes and $\gamma \in [0, 0.40]$ is about $0.10$:
```{r, eval = TRUE}
may <- function(x, gamma) x * (1 - x) - gamma * x^2 / (0.10^2 + x^2)
x <- seq(0, 1, 0.01)
gammas <- seq(0, 0.40, 0.01)
mean(sapply(gammas, function(gamma) mean(abs(may(x, gamma)))))
```
and so $x$ changes about $0.10 / 0.004 = 25$ times faster than $\gamma$ on average. Systems in which one component changes quickly and another slowly are called *fast-slow* systems (e.g., Kuehn, [2013](https://link.springer.com/article/10.1007/s00332-012-9158-x)). The code below simulates one trajectory, which we visualize afterwards.
```{r}
delta_t <- 0.01
nr_iter <- 10000
set.seed(1)
res <- solve_err(
f = may, beta = 0.004, x0 = 1, gamma0 = 0,
nr_iter = nr_iter, delta_t = delta_t, sigma = 0.001
)
```
As a reminder, the left panel in the figure below shows the bifurcation diagram for the logistic equation with predation. The right panel shows a critical transition. In particular, the solid black line shows the time-evolution of the population starting at carrying capacity $x_0 = 1$. We slowly increase the predation rate from $\gamma = 0$ up to $\gamma = 0.40$, as the solid blue line indicates. The population size decreases gradually as we increase $\gamma$, and this closely follows the bifurcation diagram on the left. At the bifurcation point $\gamma = 0.26$, however, the population size crashes down to very low levels.
```{r, echo = FALSE}
par(mfrow = c(1, 2))
gammas <- seq(0, 0.40, length.out = 1000)
roots <- bifurcation(logistic_sn, gammas)
stable_ix1 <- which(roots[, 4] > max(roots[, 2], na.rm = TRUE))
stable_ix2 <- which(roots[, 4] < min(roots[, 2], na.rm = TRUE))[-1]
unstable_ix <- which(roots[, 2] > min(roots[, 2], na.rm = TRUE) & roots[, 2] < max(roots[, 2], na.rm = TRUE))
plot(
gammas, roots[, 2], type = 'l', ylim = c(0, 1), lwd = 1.5, xlim = c(0, 0.40),
axes = FALSE, xlab = expression(gamma), ylab = '', main = 'Bifurcation diagram', font.main = 1
)
lines(gammas[unstable_ix], roots[unstable_ix, 3], lwd = 1.5, lty = 2)
lines(gammas[stable_ix1], roots[stable_ix1, 4], lwd = 1.5)
lines(gammas[stable_ix2], roots[stable_ix2, 4], lwd = 1.5)
axis(1)
axis(2, las = 2)
mtext(expression('Equilibria' ~ x^'*'), side = 2, line = 2.5)
lines(c(0, 0.40), c(0, 0), lty = 2, lwd = 1.5) # Unstable fixed point
points(gammas[unstable_ix][1], roots[unstable_ix, 3][1] - .015, pch = 20, cex = 1.5)
points(gammas[unstable_ix][length(unstable_ix)], roots[unstable_ix, 3][length(unstable_ix)] + .035, pch = 20, cex = 1.5)
gammas <- seq(0, 0.40, length.out = nr_iter + 1)
set.seed(1)
res2 <- solve_err(
f = may, beta = -0.004, x0 = 0, gamma0 = 0.40,
nr_iter = nr_iter, delta_t = delta_t, sigma = 0.001
)
par(mar = c(5.1, 3, 4.1, 4))
plot(
res[, 1], res[, 2], ylim = c(0, 1), type = 'l', axes = FALSE, lwd = 2,
xlab = expression('Time ' ~ tau), ylab = '', main = 'Critical transitions and hysteresis', font.main = 1
)
lines(res2[, 1], rev(res2[, 2]), col = grDevices::adjustcolor('grey76', alpha = 0.75), lwd = 2)
axis(1)
axis(2, las = 2)
par(new = TRUE)
gammas <- res[, 3]
plot(
res[, 1], gammas, type = 'l', lwd = 2,
axes = FALSE, xlab = '', ylab = '', col = cols[2]
)
axis(4)
mtext(TeX('$\\gamma(\\tau)$'), side = 4, line = 3)
mtext(TeX('x($\\tau$)'), side = 2, line = 1.8)
arrows(65, 0.075, 50, 0.075, col = 'gray76', length = 0.15, lwd = 1.5)
arrows(50, 0.35, 65, 0.35, col = 'black', length = 0.15, lwd = 1.5)
legend(
x = 20, y = 0.80, legend = expression(gamma),
col = cols[2],
lwd = 2, bty = 'n', cex = 1
)
```
Can we recover the population size by decreasing $\gamma$ again? We can, but it requires substantially more effort! The solid gray line indicates the trajectory starting from a low population size and a high predation rate $\gamma = 0.40$. We reduce $\gamma$, but we have to reduce it all the way to $\gamma = 0.18$ for the population to then suddenly recover again. The phenomenon that a transition may be hard to reverse in this specific sense is known as *hysteresis*.
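We can also read the two tipping values off simulated sweeps directly. The standalone chunk below re-creates the forward and backward sweeps and records the value of $\gamma$ at which the population first crosses $x = 0.30$, an arbitrary threshold lying between the two stable branches.

```{r}
may <- function(x, gamma) x * (1 - x) - gamma * x^2 / (0.10^2 + x^2)

solve_err <- function(f, beta, x0, gamma0, delta_t = 0.01,
                      nr_iter = 1000, sigma = 0.0001) {
  xt <- rep(x0, nr_iter + 1)
  gammat <- rep(gamma0, nr_iter + 1)
  for (n in seq(2, nr_iter + 1)) {
    gammat[n] <- gammat[n-1] + delta_t * beta
    xt[n] <- xt[n-1] + delta_t * f(xt[n-1], gammat[n-1]) + rnorm(1, 0, sigma)
  }
  cbind(time = seq(0, nr_iter * delta_t, delta_t), xt, gammat)
}

set.seed(1)
up <- solve_err(f = may, beta = 0.004, x0 = 1, gamma0 = 0,
                nr_iter = 10000, delta_t = 0.01, sigma = 0.001)
set.seed(1)
down <- solve_err(f = may, beta = -0.004, x0 = 0, gamma0 = 0.40,
                  nr_iter = 10000, delta_t = 0.01, sigma = 0.001)

# gamma at which each sweep first crosses the threshold x = 0.30
gamma_crash   <- up[min(which(up[, 'xt'] < 0.30)), 'gammat']
gamma_recover <- down[min(which(down[, 'xt'] > 0.30)), 'gammat']
c(crash = gamma_crash, recover = gamma_recover)
```

The crash occurs at a higher $\gamma$ than the recovery --- the signature of hysteresis. Note that both readings lag the exact fold values slightly, since the transient takes some time to unfold.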
In the time-series above, we see that the system moves erratically around the equilibrium --- after perturbations which push the system out of equilibrium, it is quick to return to equilibrium. Importantly, changes in $\gamma$ affect the equilibrium of the system itself. At the saddle-node bifurcation $\gamma = 0.26$, the stable equilibrium point vanishes, and the system moves towards the other stable equilibrium point, which entails a much lower population size. This "crashing down" is a *transient phase* during which the system is out of equilibrium. How long the system takes to reach another stable equilibrium point after the one it tracked has vanished depends on the nature of the system. Given the eagerness with which *Escherichia coli* reproduces, for example, it is a matter of mere hours until its population has recovered after predation has been sufficiently reduced. Transitions in the earth's climate, however, may take hundreds of years.
Here, we assume that we know the equation that describes population growth under predation. For almost all real-world systems, however, we do not have an adequate model and thus may not know whether a particular system is in a transient phase, or whether the changes we are seeing are due to changes in underlying parameters that influence the equilibrium. If the system is in a transient phase, it can change without any change in parameters or perturbations, which --- from a conservation perspective --- is slightly unsettling (Hastings et al., [2018](https://science.sciencemag.org/content/361/6406/eaat6412.abstract)). Yet transients can also hold opportunities. For example, if a system that is pushed over a tipping point has a slow transient, we may still be able to intervene and nurture the system back before it crashes into the unfavourable stable state (Hughes et al., [2013](https://www.sciencedirect.com/science/article/abs/pii/S0169534712002170)).
While the simple models we look at in this blog post quickly settle into equilibrium, many real-world systems are periodically forced (e.g., Rodríguez-Sánchez, [2020](https://research.wur.nl/en/publications/cycles-and-interactions-a-mathematician-among-biologists); Strogatz, [2003](http://www.stevenstrogatz.com/books/sync-the-emerging-science-of-spontaneous-order)) and may never settle into equilibrium. This can lead to interesting dynamics and has implications for critical transitions (e.g., Bathiany et al. [2018](https://www.nature.com/articles/s41598-018-23377-4)), but that is for another blog post.
Critical transitions --- such as the one illustrated in the figure above --- are hard to foresee. Looking at how the mean of the population size changes, one would not expect a dramatic crash as predation increases. In the next section, we will see how the phenomenon of *critical slowing down* may help us anticipate such critical transitions.
## Critical slowing down
The logistic equation with predation exhibits a phenomenon called *critical slowing down*: as the population approaches the saddle-node bifurcation, it returns more slowly to the stable equilibrium after small perturbations. We can study this analytically in our simple model. In particular, we are interested in the dynamics of the system after a (small) perturbation $\eta(\tau)$ that pushes the system away from the fixed point. We write:
$$
\begin{aligned}
x(\tau) &= x^{\star} + \eta(\tau) \enspace .
\end{aligned}
$$
This is essentially what we had when we simulated from the system and added a little bit of noise at each time step. The dynamics of the system close to the fixed point turn out to be the same as the dynamics of the perturbation. To see this, we derive a differential equation for $\eta = x - x^{\star}$:
$$
\frac{\mathrm{d}\eta}{\mathrm{d}\tau} = \frac{\mathrm{d}}{\mathrm{d}\tau} (x - x^{\star}) = \frac{\mathrm{d}x}{\mathrm{d}\tau} - \frac{\mathrm{d}x^{\star}}{\mathrm{d}\tau} = \frac{\mathrm{d}x}{\mathrm{d}\tau} = f(x) = f(x^{\star} + \eta) \enspace ,
$$
since the rate of change at the fixed point, $\frac{\mathrm{d}x^{\star}}{\mathrm{d}\tau}$, is zero, and where $f$ is the logistic equation with predation. This tells us that the dynamics of the perturbation $\eta$ are simply given by the dynamics of the system evaluated at $x^{\star} + \eta$. For simplicity, we [linearize this equation](https://www.youtube.com/watch?v=3d6DsjIBzJ4), writing:
$$
\begin{aligned}
\frac{\mathrm{d}\eta}{\mathrm{d}\tau} = f(x^{\star} + \eta) &= f(x^{\star}) + \eta f'(x^{\star}) + \mathcal{O}(\eta^2) \\
&\approx \eta f'(x^{\star}) \enspace ,
\end{aligned}
$$
since $f(x^{\star}) = 0$ and where we ignore higher-order terms $\mathcal{O}(\eta^2)$. While the symbols are different, the structure of the equation might look familiar. In fact, it is a linear equation in $\eta$, and so its solution is given by the exponential function:
$$
\eta(\tau) = \eta_0 e^{\tau f'(x^{\star})} \enspace ,
$$
where $\eta_0$ is the initial condition. Therefore, the dynamics of the system close to the fixed point $x^{\star}$ is given by:
$$
x(\tau) = x^{\star} + \eta_0 e^{\tau f'(x^{\star})} \enspace .
$$
In sum, we have derived an (approximate) equation that describes the dynamics of the system close to equilibrium after a small perturbation.
As an aside, this approximation can be used to analyze the stability of fixed points: at a stable fixed point, $f'(x^{\star}) < 0$ and the system hence returns to the fixed point; at unstable fixed points, $f'(x^{\star}) > 0$ and the system moves away from the fixed point. Such an analysis is known as *linear stability analysis*, because we have linearized the system dynamics close to the fixed point.
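A quick numerical check makes this concrete: we perturb the stable fixed point for $\gamma = 0.10$ by $\eta_0 = 0.05$, integrate the full nonlinear system with Euler's method, and compare the result with the linear prediction $\eta_0 e^{\tau f'(x^{\star})}$, approximating $f'$ by a finite difference. This is a standalone sketch; the step size and time horizon are arbitrary choices.

```{r}
may <- function(x, gamma) x * (1 - x) - gamma * x^2 / (0.10^2 + x^2)

gamma <- 0.10
# The stable fixed point (about 0.89) and the slope of f there
xstar <- uniroot(function(x) may(x, gamma), c(0.5, 1))$root
h <- 1e-6
fprime <- (may(xstar + h, gamma) - may(xstar - h, gamma)) / (2 * h)

# Euler-integrate the full nonlinear system from a perturbed start
eta0 <- 0.05
x <- xstar + eta0
for (i in seq_len(500)) x <- x + 0.01 * may(x, gamma)

# tau = 500 * 0.01 = 5; the two values agree to within a few percent
c(simulated = x - xstar, linearized = eta0 * exp(5 * fprime))
```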
We are now in a position to illustrate the phenomenon of critical slowing down. In particular, note that $\eta(\tau)$ depends on the *derivative* of the differential equation $f$ with respect to $x$ --- denoted by $f'$ --- evaluated at the fixed point $x^{\star}$. For the logistic equation with predation, $f'$ is given by:
$$
\begin{aligned}
f' = \frac{\mathrm{d}f}{\mathrm{d}x} &= \frac{\mathrm{d}}{\mathrm{d}{x}}\left(x(1 - x) - \gamma \frac{x^2}{0.01 + x^2}\right) \\[0.50em]
&= 1 - 2x - \gamma \frac{0.02x}{(0.01 + x^2)^2} \enspace .
\end{aligned}
$$
In the following, we will evaluate this function at various equilibrium points $x^{\star}$, which depend on $\gamma$, as we have seen before in the bifurcation diagram. To make this apparent, we define a new function:
$$
\begin{equation}
\lambda(x^{\star}, \gamma) = 1 - 2x^{\star} - \gamma \frac{0.02x^{\star}}{(0.01 + (x^{\star})^2)^2} \enspace ,
\end{equation}
$$
where the value of $x^{\star}$ is constrained by $\gamma$.[^7] This function gives the *recovery rate* of the system from small perturbations close to the equilibrium. The code for this function is:
```{r}
lambda <- function(xstar, gamma) 1 - 2*xstar - gamma * 0.02 * xstar / (0.01 + xstar^2)^2
```
In order to get the equilibrium points $x^{\star}$ for a particular value of $\gamma$, we need to find the values of $x$ for which the logistic equation with predation is zero. We can do this using the following code:
```{r}
library('rootSolve')
get_fixedpoints <- function(gamma) {
uniroot.all(f = function(x) may(x, gamma), interval = c(0, 1))
}
```
Let's apply this to an example. The recovery rate from perturbations away from a particular fixed point $x^{\star}$ is given by $\lambda(x^{\star}, \gamma)$, and so a smaller absolute value of $\lambda$ results in a slower recovery. Take $\gamma = 0.18$ and $\gamma = 0.24$ as examples. For these values, there are two stable fixed points; suppose that the system is at the larger one. These fixed points are given by $x^{\star} = 0.77$ and $x^{\star} = 0.63$, respectively, as the following computation shows:
```{r}
rbind(
# unstable, stable, unstable, stable
round(get_fixedpoints(gamma = 0.18), 2),
round(get_fixedpoints(gamma = 0.24), 2)
)
```
We can plug these fixed points into the equation for $\lambda$, which gives us the respective rates with which these systems return to equilibrium. These are:
$$
\begin{aligned}
\lambda(x^{\star} = 0.77, \gamma = 0.18) &= -0.55 \\
\lambda(x^{\star} = 0.63, \gamma = 0.24) &= -0.28 \enspace ,
\end{aligned}
$$
which can easily be verified:
```{r}
rbind(
  round(lambda(xstar = 0.77, gamma = 0.18), 2),
  round(lambda(xstar = 0.63, gamma = 0.24), 2)
)
```
Indeed, the system for which $\gamma = 0.24$ has $\lambda$ smaller in absolute value than the system for which $\gamma = 0.18$, and thus returns more slowly to equilibrium after an external perturbation.
The left panel in the figure below shows how $\lambda$ changes as a continuous function of $\gamma \in [0, 0.40]$. We see that $\lambda$ increases towards $\lambda = 0$ at the saddle-node bifurcation $\gamma = 0.26$ (when coming from the left) or $\gamma = 0.18$ (when coming from the right). The dashed gray line indicates $\lambda > 0$, which is the case for unstable equilibrium points; in other words, perturbations do not decay but grow close to the unstable equilibrium point, and hence the system does not return to it.
The panel on the right illustrates this slower recovery. In particular, I simulate from these two systems and, at $\tau = 10$, halve their population size. The system with $\gamma = 0.18$ returns more swiftly to its stable equilibrium than the system with $\gamma = 0.24$.
```{r, echo = FALSE}
get_fixedpoints <- function(gamma) {
uniroot.all(f = function(x) may(x, gamma), interval = c(0.01, 1))
}
lambda <- function(x, gamma) 1 - 2*x - gamma * 0.02 * x / (0.01 + x^2)^2
gammas <- seq(0, 0.40, length.out = 5000)
ss <- list()
for (i in 1:length(gammas)) {
ss[[i]] <- get_fixedpoints(gammas[i])
}
# results as data.frames
df1 <- cbind(gammas, sapply(ss, function(x) x[1]))
df2 <- cbind(gammas, sapply(ss, function(x) x[2]))
df3 <- cbind(gammas, sapply(ss, function(x) x[3]))
# compute eigenvalues
e1 <- lambda(df1[, 2], gammas)
e2 <- lambda(df2[, 2], gammas)
e3 <- lambda(df3[, 2], gammas)
## align data properly -> have to distinguish between approaching point from below or above
cutmin <- min(which(!is.na(e3)))
ind <- which(!is.na(e3))
ss.below <- below <- rep(NA, length(e1))
# doing it for eigenvalues
below[1:cutmin] <- e1[1:cutmin]
below[cutmin:(cutmin + length(ind) - 1) ] <- e3[ind]
above <- rep(NA, length(e1))
above[cutmin:length(e1)] <- e1[cutmin:length(e1)]
## shaded area
x.left <- gammas[cutmin]
x.right <- gammas[min(which(is.na(below)))]
par(mfrow = c(1, 2))
# plot eigenvalues
plot(
NA, ylim = c(-1, 0.4), xlim = range(gammas),
xlab = expression(gamma), ylab = expression(lambda),
axes = FALSE, main = expression(lambda ~ ' approaching 0'), font.main = 1
)
rect(xleft = x.left, -100, xright = x.right, 0.40, col = rgb(0.5,0.5,0.5,1/4), border = NA)
lines(gammas, above, col = cols[3], lwd = 1.5)
lines(gammas, below, col = cols[2], lwd = 1.5)
lines(gammas, e2, col = "gray", lty = 2, lwd = 1.5)
abline(h = 0, lty = 3)
axis(1)
axis(2, las = 2)
time <- seq(0, 4001 * 0.01, 0.01)
x01 <- get_fixedpoints(0.18)[3]
res1a <- solve(function(x) may(x, 0.18), x01, nr_iter = 1000, delta_t = 0.01)
res1b <- solve(function(x) may(x, 0.18), res1a[1001, 2] / 2, nr_iter = 3000, delta_t = 0.01)
res1 <- rbind(res1a, res1b)
res1[, 1] <- time
x02 <- get_fixedpoints(0.24)[3]
res2a <- solve(function(x) may(x, 0.24), x02, nr_iter = 1000, delta_t = 0.01)
res2b <- solve(function(x) may(x, 0.24), res2a[1001, 2] / 2, nr_iter = 3000, delta_t = 0.01)
res2 <- rbind(res2a, res2b)
res2[, 1] <- time
plot(
as.numeric(res1[, 1]), as.numeric(res1[, 2] / x01), type = 'l', axes = FALSE,
xlab = expression('Time ' ~ tau), ylab = TeX('x($\\tau$)'), ylim = c(0, 1), col = 'black', lty = 1, lwd = 1.5,
main = 'Different recovery rates', font.main = 1
)
lines(as.numeric(res2[, 1]), as.numeric(res2[, 2] / x02), col = 'black', lty = 3, lwd = 1.5)
axis(1)
axis(2, las = 1)
legend(
'bottomright', legend = c(expression(gamma ~ ' = 0.18'), expression(gamma ~ ' = 0.24')),
col = 'black', lty = c(1, 3), bty = 'n', lwd = 1.5
)
```
The phenomenon of critical slowing down is the basis of widely used *early warning signals* such as autocorrelation and variance. Indeed, one can show that the autocorrelation and variance are given by $e^{\lambda}$ and $\frac{\sigma_{\varepsilon}^2}{1 - e^{2\lambda}}$, respectively, where $\sigma_{\varepsilon}^2$ is the noise variance (see e.g. the appendix in Dablander et al., [2020](https://psyarxiv.com/5wc28)). Hence, these quantities will increase as the system approaches the bifurcation point, as the figure below illustrates.
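Plugging the two example systems from above into these formulas makes the effect concrete. This is a standalone sketch: `lambda` is redefined for self-containment, and the noise standard deviation $\sigma_{\varepsilon} = 0.001$ matches the simulations above.

```{r}
lambda <- function(xstar, gamma) 1 - 2*xstar - gamma * 0.02 * xstar / (0.01 + xstar^2)^2

# Autocorrelation exp(lambda) and standard deviation sqrt(sigma^2 / (1 - exp(2 * lambda)))
ews <- function(xstar, gamma, sigma = 0.001) {
  l <- lambda(xstar, gamma)
  c(autocorrelation = exp(l), st_dev = sqrt(sigma^2 / (1 - exp(2 * l))))
}

round(ews(xstar = 0.77, gamma = 0.18), 4)
round(ews(xstar = 0.63, gamma = 0.24), 4) # closer to the fold: both increase
```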
```{r, echo = FALSE}
par(mfrow = c(1, 2))
# plot AR(1)
ar1 <- exp(above)
ar2 <- exp(below)
ar3 <- exp(e2)
# plot AR
plot(
NA, ylim = c(.4, 1.4), xlim = range(gammas),
xlab = expression(gamma), ylab = 'Autocorrelation',
axes = FALSE, main = 'Autocorrelation approaching 1', font.main = 1
)
rect(xleft = x.left, -100, xright = x.right, 1.40, col = rgb(0.5,0.5,0.5,1/4), border = NA)
lines(gammas, ar1, col = cols[3], lwd = 1.5)
lines(gammas, ar2, col = cols[2], lwd = 1.5)
lines(gammas, ar3, col = "gray", lty = 2, lwd = 1.5)
abline(h = 1, lty = 3)
axis(1)
axis(2, las = 2)
# plot standard deviation
stdev <- function(sig, ss) sqrt(sig / (1 - exp(2 * ss)))
sd1 <- stdev(0.1, above)
sd2 <- stdev(0.1, below)
plot(
NA, ylim = range(sd1, sd2, na.rm=T) * c(1, 1.2), xlim = range(gammas),
xlab = expression(gamma), ylab = "Standard deviation", axes = FALSE,
main = 'Increasing standard deviation', font.main = 1
)
axis(1)
axis(2, las = 2)
rect(xleft = x.left, -100, xright = x.right, 2, col = rgb(0.5,0.5,0.5,1/4), border = NA)
lines(gammas, sd1, col = cols[3], lwd = 1.5)
lines(gammas, sd2, col = cols[2], lwd = 1.5)
```
Early warning signals based on critical slowing down have seen a surge in attention in the last two decades, with prominent review articles in ecology and climate science (e.g., Scheffer et al., [2009](https://www.nature.com/articles/nature08227), [2012](https://science.sciencemag.org/content/338/6105/344.abstract); Lenton, [2011](https://www.nature.com/articles/nclimate1143)). The idea of critical slowing down goes back much further, and was well-known to proponents of *catastrophe theory* (e.g., Zeeman, [1976](https://www.jstor.org/stable/24950329)); indeed, critical slowing down is one of the so-called *catastrophe flags* (see van der Maas et al. [2003](https://journals.sagepub.com/doi/abs/10.1177/0049124103253773), for an overview and an application to attitudes). Wissel ([1984](https://pubmed.ncbi.nlm.nih.gov/28312117/)) (re)discovered critical slowing down in simple systems and used it to predict the extinction of a population of rotifers. The wonderful experimental demonstrations by Drake & Griffin ([2010](https://www.nature.com/articles/nature09389/)) and Dai et al. ([2012](https://science.sciencemag.org/content/336/6085/1175.abstract)) are modern variations on that theme.
Critical transitions are notoriously hard to predict, and the potential of generic signals that warn us of such transitions is vast. Early warning signals based on critical slowing down are subject to a number of practical and theoretical limitations, however --- for example, they can occur prior to transitions that are not critical, and they can fail to occur prior to critical transitions. For an overview and a discussion, see Dablander, Pichler, Cika, & Bacilieri ([2020](https://psyarxiv.com/5wc28)).
# Conclusion
Dynamical systems theory is a powerful framework for modelling how systems change over time. In this blog post, we have looked at simple toy models to elucidate some core concepts. Intriguingly, we have seen that even a very simple model can exhibit intricate behaviour, such as multiple stable states and critical transitions. Yet most interesting real-world systems are much more complex, and care must be taken when translating intuitions from low-dimensional toy models into high-dimensional reality.
---
*I would like to thank [Andrea Bacilieri](https://twitter.com/abacilieri), [Jill de Ron](https://twitter.com/jillderon), [Jonas Haslbeck](https://twitter.com/jonashaslbeck), and [Oisín Ryan](https://twitter.com/Oisin_Ryan_) for helpful comments on this blog post.*