In this lecture we give a quick introduction to data and probability distributions using Python.
!pip install --upgrade yfinance
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import yfinance as yf
import scipy.stats
import seaborn as sns
In this section we recall the definitions of some well-known distributions and explore how to manipulate them with SciPy.
Let's start with discrete distributions.
A discrete distribution is defined by a set of numbers $S = \{x_1, \ldots, x_n\}$ and a probability mass function (PMF) $p$ on $S$, which is a function from $S$ to $[0, 1]$ satisfying $\sum_{i=1}^n p(x_i) = 1$.
We say that a random variable $X$ has distribution $p$ if $X$ takes value $x_i$ with probability $p(x_i)$.
That is, $\mathbb P\{X = x_i\} = p(x_i)$ for $i = 1, \ldots, n$.
The mean or expected value of a random variable $X$ with distribution $p$ is $\mathbb E[X] = \sum_{i=1}^n x_i p(x_i)$.
Expectation is also called the first moment of the distribution.
We also refer to this number as the mean of the distribution represented by $p$.
The variance of $X$ is defined as $\mathbb V[X] = \sum_{i=1}^n (x_i - \mathbb E[X])^2 p(x_i)$.
Variance is also called the second central moment of the distribution.
The cumulative distribution function (CDF) of $X$ is defined by $F(x) = \mathbb P\{X \leq x\} = \sum_{i=1}^n \mathbb 1\{x_i \leq x\} p(x_i)$.
Here $\mathbb 1\{x_i \leq x\}$ is an indicator, equal to one if $x_i \leq x$ and zero otherwise.
Hence the sum picks out all $x_i \leq x$ and adds up their probabilities.
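To make these formulas concrete, here is a minimal sketch (using a made-up three-point distribution; the values and probabilities are arbitrary choices) that computes the mean, variance, and CDF directly from a PMF:

S = np.array([1.0, 2.0, 4.0])          # the values x_1, x_2, x_3
p = np.array([0.25, 0.5, 0.25])        # their probabilities (sum to one)
mean = np.sum(S * p)                   # E[X] = Σ x_i p(x_i)
var = np.sum((S - mean)**2 * p)        # V[X] = Σ (x_i - E[X])² p(x_i)
F = lambda x: np.sum((S <= x) * p)     # CDF via the indicator sum
mean, var, F(2.0)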
One simple example is the uniform distribution, where $p(x_i) = 1/n$ for all $i$.
We can import the uniform distribution on $S = \{1, \ldots, n\}$ from SciPy like so:
n = 10
u = scipy.stats.randint(1, n+1)
Here's the mean and variance:
u.mean(), u.var()
The formula for the mean is $(n + 1)/2$, and the formula for the variance is $(n^2 - 1)/12$.
Now let's evaluate the PMF:
u.pmf(1)
u.pmf(2)
Here's a plot of the probability mass function:
fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()
Here's a plot of the CDF:
fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.step(S, u.cdf(S))
ax.vlines(S, 0, u.cdf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('CDF')
plt.show()
The CDF jumps up by $p(x_i) = 1/n$ at each $x_i$.
Exercise: Calculate the mean and variance for this parameterization (i.e., $n = 10$) directly from the PMF, using the expressions given above.
Check that your answers agree with `u.mean()` and `u.var()`.
Another useful distribution is the Bernoulli distribution on $S = \{0, 1\}$, which has PMF $p(i) = \theta^i (1 - \theta)^{1 - i}$ for $i = 0, 1$.
Here $\theta \in [0, 1]$ is a parameter.
We can think of this distribution as modeling probabilities for a random trial with success probability $\theta$, where
- $p(1) = \theta$ means that the trial succeeds (takes value 1) with probability $\theta$
- $p(0) = 1 - \theta$ means that the trial fails (takes value 0) with probability $1 - \theta$
The formula for the mean is $\theta$, and the formula for the variance is $\theta(1 - \theta)$.
We can import the Bernoulli distribution on $S = \{0, 1\}$ from SciPy like so:
θ = 0.4
u = scipy.stats.bernoulli(θ)
Here's the mean and variance at $\theta = 0.4$:
u.mean(), u.var()
We can evaluate the PMF as follows:
u.pmf(0), u.pmf(1)
Another useful (and more interesting) distribution is the binomial distribution on $S = \{0, \ldots, n\}$, which has PMF $p(i) = \binom{n}{i} \theta^i (1 - \theta)^{n - i}$.
Again, $\theta \in [0, 1]$ is a parameter.
The interpretation of $p(i)$ is: the probability of $i$ successes in $n$ independent trials, each with success probability $\theta$.
For example, if $\theta = 0.5$, then $p(i)$ is the probability of $i$ heads in $n$ flips of a fair coin.
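As a quick sanity check on this interpretation, here is a simulation sketch (the seed and the sample size of 10,000 experiments are arbitrary choices) comparing the empirical frequency of $i = 4$ heads in $n = 10$ flips with the corresponding PMF value:

rng = np.random.default_rng(seed=1234)          # arbitrary seed
n, θ, i = 10, 0.5, 4
flips = rng.random((10_000, n)) < θ             # each row is one experiment of n flips
n_heads = flips.sum(axis=1)                     # number of heads per experiment
(n_heads == i).mean(), scipy.stats.binom(n, θ).pmf(i)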
The formula for the mean is $n \theta$, and the formula for the variance is $n \theta (1 - \theta)$.
Let's investigate an example:
n = 10
θ = 0.5
u = scipy.stats.binom(n, θ)
According to our formulas, the mean and variance are
n * θ, n * θ * (1 - θ)
Let's see if SciPy gives us the same results:
u.mean(), u.var()
Here's the PMF:
u.pmf(1)
fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()
Here's the CDF:
fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.step(S, u.cdf(S))
ax.vlines(S, 0, u.cdf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('CDF')
plt.show()
Exercise: Using `u.pmf`, check that our definition of the CDF given above calculates the same function as `u.cdf`.
Here is one solution:
fig, ax = plt.subplots()
S = np.arange(1, n+1)
u_sum = np.cumsum(u.pmf(S))
ax.step(S, u_sum)
ax.vlines(S, 0, u_sum, lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('CDF')
plt.show()
We can see that the output graph is the same as the one above.
The geometric distribution has infinite support $S = \{1, 2, \ldots\}$ and its PMF is given by $p(i) = (1 - \theta)^{i - 1} \theta$,
where $\theta \in [0, 1]$ is a parameter.
(A discrete distribution has infinite support if the set of points to which it assigns positive probability is infinite.)
To understand the distribution, think of repeated independent random trials, each with success probability $\theta$.
The interpretation of $p(i)$ is: the probability that the first success occurs on trial $i$, after $i - 1$ failures.
It can be shown that the mean of the distribution is $1/\theta$ and the variance is $(1 - \theta)/\theta^2$.
Here's an example.
θ = 0.1
u = scipy.stats.geom(θ)
u.mean(), u.var()
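To connect the formula with the trial-based interpretation, here is a quick simulation sketch (the seed and sample size are arbitrary choices) that draws first-success times and compares the empirical mean with $1/\theta$:

rng = np.random.default_rng(seed=1234)          # arbitrary seed
# each draw is the trial number of the first success, given success probability θ
first_success = rng.geometric(θ, size=100_000)
first_success.mean(), 1 / θ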
Here's part of the PMF:
fig, ax = plt.subplots()
n = 20
S = np.arange(1, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()
The Poisson distribution on $S = \{0, 1, \ldots\}$ with parameter $\lambda > 0$ has PMF $p(i) = \frac{\lambda^i}{i!} e^{-\lambda}$.
The interpretation of $p(i)$ is: the probability of $i$ events in a fixed time interval, where the events occur independently at a constant rate $\lambda$.
It can be shown that the mean is $\lambda$ and the variance is also $\lambda$.
Here's an example.
λ = 2
u = scipy.stats.poisson(λ)
u.mean(), u.var()
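To connect this interpretation with the formula, here is a simulation sketch (the seed, the sample size, and the cap of 50 events per simulation are arbitrary choices). It generates events whose gaps are independent exponential draws with rate $\lambda$ (the exponential distribution is introduced later in this lecture) and compares the empirical frequency of exactly 2 events in a unit time interval with $p(2)$:

rng = np.random.default_rng(seed=1234)          # arbitrary seed
gaps = rng.exponential(1/λ, size=(100_000, 50)) # inter-arrival times
# count how many events land in the unit interval [0, 1] in each simulation
counts = (np.cumsum(gaps, axis=1) <= 1.0).sum(axis=1)
(counts == 2).mean(), u.pmf(2)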
Here's the PMF:
u.pmf(1)
fig, ax = plt.subplots()
S = np.arange(1, n+1)
ax.plot(S, u.pmf(S), linestyle='', marker='o', alpha=0.8, ms=4)
ax.vlines(S, 0, u.pmf(S), lw=0.2)
ax.set_xticks(S)
ax.set_xlabel('S')
ax.set_ylabel('PMF')
plt.show()
A continuous distribution is represented by a probability density function, which is a function $p$ over $\mathbb R$ (the set of all real numbers) such that $p(x) \geq 0$ for all $x$ and $\int_{-\infty}^\infty p(x) \, dx = 1$.
We say that random variable $X$ has distribution $p$ if $\mathbb P\{a < X < b\} = \int_a^b p(x) \, dx$
for all $a \leq b$.
The definitions of the mean and variance of a random variable $X$ with distribution $p$ are the same as in the discrete case, after replacing the sum with an integral.
For example, the mean of $X$ is $\mathbb E[X] = \int_{-\infty}^\infty x p(x) \, dx$.
The cumulative distribution function (CDF) of $X$ is defined by $F(x) = \mathbb P\{X \leq x\} = \int_{-\infty}^x p(t) \, dt$.
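To make these integral definitions concrete, here is a small sketch that evaluates the mean and the CDF numerically with `scipy.integrate.quad`, using the standard normal density (introduced next) purely as an example:

from scipy.integrate import quad

p = scipy.stats.norm(0, 1).pdf                        # an example density
mean, _ = quad(lambda x: x * p(x), -np.inf, np.inf)   # E[X] = ∫ x p(x) dx
F_2, _ = quad(p, -np.inf, 2.0)                        # F(2) = ∫ p(t) dt over (-∞, 2]
mean, F_2, scipy.stats.norm(0, 1).cdf(2.0)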
Perhaps the most famous distribution is the normal distribution, which has density $p(x) = \frac{1}{\sqrt{2\pi} \sigma} \exp\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right)$.
This distribution has two parameters, $\mu \in \mathbb R$ and $\sigma \in (0, \infty)$.
Using calculus, it can be shown that, for this distribution, the mean is $\mu$ and the variance is $\sigma^2$.
We can obtain the moments, PDF, and CDF of the normal density via SciPy as follows:
μ, σ = 0.0, 1.0
u = scipy.stats.norm(μ, σ)
u.mean(), u.var()
Here's a plot of the density --- the famous "bell-shaped curve":
μ_vals = [-1, 0, 1]
σ_vals = [0.4, 1, 1.6]
fig, ax = plt.subplots()
x_grid = np.linspace(-4, 4, 200)
for μ, σ in zip(μ_vals, σ_vals):
u = scipy.stats.norm(μ, σ)
ax.plot(x_grid, u.pdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\mu={μ}, \sigma={σ}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
Here's a plot of the CDF:
fig, ax = plt.subplots()
for μ, σ in zip(μ_vals, σ_vals):
u = scipy.stats.norm(μ, σ)
ax.plot(x_grid, u.cdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\mu={μ}, \sigma={σ}$')
ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()
The lognormal distribution is a distribution on $(0, \infty)$ with density $p(x) = \frac{1}{\sigma x \sqrt{2\pi}} \exp\left( -\frac{(\log x - \mu)^2}{2 \sigma^2} \right)$.
This distribution has two parameters, $\mu$ and $\sigma$.
It can be shown that, for this distribution, the mean is $\exp(\mu + \sigma^2/2)$ and the variance is $[\exp(\sigma^2) - 1] \exp(2\mu + \sigma^2)$.
It can be proved that
- if $X$ is lognormally distributed, then $\log X$ is normally distributed, and
- if $X$ is normally distributed, then $\exp X$ is lognormally distributed.
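Here is a quick numerical sketch of the second claim (the seed and sample size are arbitrary choices): exponentiating standard normal draws produces lognormal draws, whose sample mean should be close to $\exp(\mu + \sigma^2/2)$:

rng = np.random.default_rng(seed=1234)    # arbitrary seed
μ, σ = 0.0, 1.0
z = rng.normal(μ, σ, size=500_000)        # normal draws
x = np.exp(z)                             # hence lognormal draws
x.mean(), np.exp(μ + σ**2 / 2)            # sample mean vs exact lognormal mean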
We can obtain the moments, PDF, and CDF of the lognormal density as follows:
μ, σ = 0.0, 1.0
u = scipy.stats.lognorm(s=σ, scale=np.exp(μ))
u.mean(), u.var()
μ_vals = [-1, 0, 1]
σ_vals = [0.25, 0.5, 1]
x_grid = np.linspace(0, 3, 200)
fig, ax = plt.subplots()
for μ, σ in zip(μ_vals, σ_vals):
u = scipy.stats.lognorm(σ, scale=np.exp(μ))
ax.plot(x_grid, u.pdf(x_grid),
alpha=0.5, lw=2,
label=fr'$\mu={μ}, \sigma={σ}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
μ = 1
for σ in σ_vals:
u = scipy.stats.lognorm(σ, scale=np.exp(μ))
ax.plot(x_grid, u.cdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\mu={μ}, \sigma={σ}$')
ax.set_ylim(0, 1)
ax.set_xlim(0, 3)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()
The exponential distribution is a distribution supported on $(0, \infty)$ with density $p(x) = \lambda \exp(-\lambda x)$ for $x > 0$.
This distribution has one parameter, $\lambda$, which is called the rate parameter.
The exponential distribution can be thought of as the continuous analog of the geometric distribution.
It can be shown that, for this distribution, the mean is $1/\lambda$ and the variance is $1/\lambda^2$.
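One way to see the analogy with the geometric distribution is the memoryless property $\mathbb P\{X > s + t \mid X > s\} = \mathbb P\{X > t\}$, which both distributions share. Here is a quick check using SciPy's survival function `sf` (the values of $\lambda$, $s$, and $t$ are arbitrary):

λ, s, t = 0.5, 2.0, 3.0                  # arbitrary example values
u = scipy.stats.expon(scale=1/λ)
u.sf(s + t) / u.sf(s), u.sf(t)           # both equal exp(-λ t)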
We can obtain the moments, PDF, and CDF of the exponential density as follows:
λ = 1.0
u = scipy.stats.expon(scale=1/λ)
u.mean(), u.var()
fig, ax = plt.subplots()
λ_vals = [0.5, 1, 2]
x_grid = np.linspace(0, 6, 200)
for λ in λ_vals:
u = scipy.stats.expon(scale=1/λ)
ax.plot(x_grid, u.pdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\lambda={λ}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for λ in λ_vals:
u = scipy.stats.expon(scale=1/λ)
ax.plot(x_grid, u.cdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\lambda={λ}$')
ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()
The beta distribution is a distribution on $(0, 1)$ with density $p(x) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} x^{\alpha - 1} (1 - x)^{\beta - 1}$,
where $\Gamma$ is the gamma function.
(The role of the gamma function is just to normalize the density, so that it integrates to one.)
This distribution has two parameters, $\alpha > 0$ and $\beta > 0$.
It can be shown that, for this distribution, the mean is $\alpha / (\alpha + \beta)$ and the variance is $\alpha \beta / [(\alpha + \beta)^2 (\alpha + \beta + 1)]$.
We can obtain the moments, PDF, and CDF of the Beta density as follows:
α, β = 3.0, 1.0
u = scipy.stats.beta(α, β)
u.mean(), u.var()
α_vals = [0.5, 1, 5, 25, 3]
β_vals = [3, 1, 10, 20, 0.5]
x_grid = np.linspace(0, 1, 200)
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
u = scipy.stats.beta(α, β)
ax.plot(x_grid, u.pdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\alpha={α}, \beta={β}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
u = scipy.stats.beta(α, β)
ax.plot(x_grid, u.cdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\alpha={α}, \beta={β}$')
ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()
The gamma distribution is a distribution on $(0, \infty)$ with density $p(x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} \exp(-\beta x)$.
This distribution has two parameters, $\alpha > 0$ and $\beta > 0$.
It can be shown that, for this distribution, the mean is $\alpha / \beta$ and the variance is $\alpha / \beta^2$.
One interpretation is that if $X$ is gamma distributed with integer $\alpha$, then $X$ is the sum of $\alpha$ independent exponentially distributed random variables, each with mean $1/\beta$.
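Here is a simulation sketch of this interpretation (the seed and sample size are arbitrary choices): summing $\alpha = 3$ independent exponentials with rate $\beta = 2$ gives draws whose sample mean and variance should be close to $\alpha/\beta$ and $\alpha/\beta^2$:

rng = np.random.default_rng(seed=1234)    # arbitrary seed
α, β = 3, 2.0
# each row sums α independent exponential draws, each with mean 1/β
draws = rng.exponential(1/β, size=(100_000, α)).sum(axis=1)
(draws.mean(), draws.var()), (α/β, α/β**2)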
We can obtain the moments, PDF, and CDF of the Gamma density as follows:
α, β = 3.0, 2.0
u = scipy.stats.gamma(α, scale=1/β)
u.mean(), u.var()
α_vals = [1, 3, 5, 10]
β_vals = [3, 5, 3, 3]
x_grid = np.linspace(0, 7, 200)
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
u = scipy.stats.gamma(α, scale=1/β)
ax.plot(x_grid, u.pdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\alpha={α}, \beta={β}$')
ax.set_xlabel('x')
ax.set_ylabel('PDF')
plt.legend()
plt.show()
fig, ax = plt.subplots()
for α, β in zip(α_vals, β_vals):
u = scipy.stats.gamma(α, scale=1/β)
ax.plot(x_grid, u.cdf(x_grid),
alpha=0.5, lw=2,
label=rf'$\alpha={α}, \beta={β}$')
ax.set_ylim(0, 1)
ax.set_xlabel('x')
ax.set_ylabel('CDF')
plt.legend()
plt.show()
Sometimes we refer to observed data or measurements as "distributions".
For example, let's say we observe the income of 10 people over a year:
data = [['Hiroshi', 1200],
['Ako', 1210],
['Emi', 1400],
['Daiki', 990],
['Chiyo', 1530],
['Taka', 1210],
['Katsuhiko', 1240],
['Daisuke', 1124],
['Yoshi', 1330],
['Rie', 1340]]
df = pd.DataFrame(data, columns=['name', 'income'])
df
In this situation, we might refer to the set of their incomes as the "income distribution."
The terminology is confusing because this set is not a probability distribution --- it's just a collection of numbers.
However, as we will see, there are connections between observed distributions (i.e., sets of numbers like the income distribution above) and probability distributions.
Below we explore some observed distributions.
Suppose we have an observed distribution with values $\{x_1, \ldots, x_n\}$.
The sample mean of this distribution is defined as $\bar x = \frac{1}{n} \sum_{i=1}^n x_i$.
The sample variance is defined as $\frac{1}{n} \sum_{i=1}^n (x_i - \bar x)^2$.
For the income distribution given above, we can calculate these numbers via
x = df['income']
x.mean(), x.var()
Exercise: If you try to check that the formulas given above for the sample mean and sample variance produce the same numbers, you will see that the variance isn't quite right.
This is because pandas uses $1/(n-1)$ instead of $1/n$ as the term at the front of the variance. (Some books define the sample variance this way.)
Confirm.
Let's look at different ways that we can visualize one or more observed distributions.
We will cover
- histograms
- kernel density estimates and
- violin plots
We can histogram the income distribution we just constructed as follows:
fig, ax = plt.subplots()
ax.hist(x, bins=5, density=True, histtype='bar')
ax.set_xlabel('income')
ax.set_ylabel('density')
plt.show()
Let's look at a distribution from real data.
In particular, we will look at the monthly return on Amazon shares between 2000/1/1 and 2024/1/1.
The monthly return is calculated as the percent change in the share price over each month.
So we will have one observation for each month.
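As a sketch of the calculation, here is `pct_change` applied to a made-up three-month price series (the prices are hypothetical):

prices_toy = pd.Series([100.0, 110.0, 99.0])   # hypothetical month-end prices
prices_toy.pct_change() * 100                  # returns: NaN, 10.0, -10.0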
df = yf.download('AMZN', '2000-1-1', '2024-1-1', interval='1mo')
prices = df['Close']
x_amazon = prices.pct_change()[1:] * 100
x_amazon.head()
The first observation is the monthly return (percent change) over January 2000, which was
x_amazon.iloc[0]
Let's histogram the return observations.
fig, ax = plt.subplots()
ax.hist(x_amazon, bins=20, density=True)
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('density')
plt.show()
Kernel density estimates (KDE) provide a simple way to estimate and visualize the density of a distribution.
If you are not familiar with KDEs, you can think of them as a smoothed histogram.
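To make the smoothed-histogram intuition concrete, here is a sketch that builds a KDE by hand on a made-up sample: it centers one normal density (a "bump") on each observation and averages the bumps (the observations and the bandwidth are arbitrary choices):

x_obs = np.array([-2.0, -1.5, 0.0, 0.3, 2.1])    # hypothetical observations
bw = 0.5                                         # bandwidth: the width of each bump
grid = np.linspace(-4, 4, 200)
# the KDE is the average of one normal bump per observation
kde_vals = np.mean([scipy.stats.norm(xi, bw).pdf(grid) for xi in x_obs], axis=0)
fig, ax = plt.subplots()
ax.plot(grid, kde_vals)
ax.set_xlabel('x')
ax.set_ylabel('density')
plt.show()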
Let's have a look at a KDE formed from the Amazon return data.
fig, ax = plt.subplots()
sns.kdeplot(x_amazon, ax=ax)
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('KDE')
plt.show()
The smoothness of the KDE depends on how we choose the bandwidth.
fig, ax = plt.subplots()
sns.kdeplot(x_amazon, ax=ax, bw_adjust=0.1, alpha=0.5, label="bw=0.1")
sns.kdeplot(x_amazon, ax=ax, bw_adjust=0.5, alpha=0.5, label="bw=0.5")
sns.kdeplot(x_amazon, ax=ax, bw_adjust=1, alpha=0.5, label="bw=1")
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('KDE')
plt.legend()
plt.show()
When we use a larger bandwidth, the KDE is smoother.
A suitable bandwidth produces an estimate that is neither too smooth (underfitting) nor too wiggly (overfitting).
Another way to display an observed distribution is via a violin plot.
fig, ax = plt.subplots()
ax.violinplot(x_amazon)
ax.set_ylabel('monthly return (percent change)')
ax.set_xlabel('KDE')
plt.show()
Violin plots are particularly useful when we want to compare different distributions.
For example, let's compare the monthly returns on Amazon shares with the monthly return on Costco shares.
df = yf.download('COST', '2000-1-1', '2024-1-1', interval='1mo')
prices = df['Close']
x_costco = prices.pct_change()[1:] * 100
fig, ax = plt.subplots()
ax.violinplot([x_amazon['AMZN'], x_costco['COST']])
ax.set_ylabel('monthly return (percent change)')
ax.set_xlabel('retailers')
ax.set_xticks([1, 2])
ax.set_xticklabels(['Amazon', 'Costco'])
plt.show()
Let's discuss the connection between observed distributions and probability distributions.
Sometimes it's helpful to imagine that an observed distribution is generated by a particular probability distribution.
For example, we might look at the returns from Amazon above and imagine that they were generated by a normal distribution.
(Even though this is not true, it might be a helpful way to think about the data.)
Here we match a normal distribution to the Amazon monthly returns by setting the mean of the normal distribution equal to the sample mean and the variance equal to the sample variance.
Then we plot the density and the histogram.
μ = x_amazon.mean()
σ_squared = x_amazon.var()
σ = np.sqrt(σ_squared)
u = scipy.stats.norm(μ, σ)
x_grid = np.linspace(-50, 65, 200)
fig, ax = plt.subplots()
ax.plot(x_grid, u.pdf(x_grid))
ax.hist(x_amazon, density=True, bins=40)
ax.set_xlabel('monthly return (percent change)')
ax.set_ylabel('density')
plt.show()
The match between the histogram and the density is not bad but also not very good.
One reason is that the normal distribution is not really a good fit for this observed data --- we will discuss this point again when we talk about {ref}`heavy tailed distributions <heavy_tail>`.
Of course, if the data really is generated by the normal distribution, then the fit will be better.
Let's see this in action
- first we generate random draws from the normal distribution
- then we histogram them and compare with the density.
μ, σ = 0, 1
u = scipy.stats.norm(μ, σ)
N = 2000 # Number of observations
x_draws = u.rvs(N)
x_grid = np.linspace(-4, 4, 200)
fig, ax = plt.subplots()
ax.plot(x_grid, u.pdf(x_grid))
ax.hist(x_draws, density=True, bins=40)
ax.set_xlabel('x')
ax.set_ylabel('density')
plt.show()
Note that if you keep increasing $N$, the number of observations, the fit will get better and better.
This convergence is a version of the "law of large numbers", which we will discuss {ref}`later <lln_mr>`.
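As a quick numeric sketch of this convergence (the sample sizes are arbitrary choices), the sample mean and standard deviation of the draws approach $\mu = 0$ and $\sigma = 1$ as $N$ grows:

for N in 100, 10_000, 1_000_000:
    draws = u.rvs(N)                      # u is the standard normal from above
    print(N, draws.mean(), draws.std())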