Hi developer,

The original prior in Ch.4, `prior = Pmf(1, hypos)`, is initialised with 1. It would be better to initialise it as `prior2 = Pmf(Fraction(1, len(hypos)), hypos)`, which is a proper uniform distribution whose probabilities sum to 1.
The results and the final plot are the same: `np.allclose(posterior, posterior2)` returns `True`.
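For reference, here is a minimal sketch of the comparison, assuming the Euro-problem setup from Ch.4 (`hypos = np.linspace(0, 1, 101)` and `empiricaldist.Pmf`) and a single-`'H'` update for illustration:

```python
import numpy as np
from fractions import Fraction
from empiricaldist import Pmf

hypos = np.linspace(0, 1, 101)  # candidate values for the probability of heads

prior = Pmf(1, hypos)                         # improper prior: every weight is 1
prior2 = Pmf(Fraction(1, len(hypos)), hypos)  # proper uniform prior: weights sum to 1

# Update both priors with the same data, e.g. a single 'H',
# whose likelihood under hypothesis theta is theta itself.
posterior = prior * hypos
posterior.normalize()

posterior2 = prior2 * hypos
posterior2.normalize()

# Cast to float in case the Fraction-valued pmf is stored with object dtype.
print(np.allclose(posterior, posterior2.astype(float)))  # True
```

After normalization the constant factor in the prior cancels, which is why the two posteriors agree.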
Also, for better understanding, the loop inside the update function deserves a more detailed explanation. For example:
The reason we can use a loop to multiply the likelihoods is that each coin flip is independent of the others. In Bayes's theorem, $\theta$ is the probability that the coin lands heads, $P(\theta)$ is the prior over $\theta$, and $P(D_x \mid \theta)$ is the likelihood of the $x$-th flip, where $D_x$ can be 'H' or 'T'. Because the flips are independent, the loop computes $P(D_1 \mid \theta) \times P(D_2 \mid \theta) \times \cdots \times P(D_n \mid \theta) = P(D \mid \theta)$.
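Concretely, the loop in question looks roughly like this (a sketch reconstructed from the chapter, using the Euro-problem names; details in the book may differ slightly):

```python
import numpy as np
from empiricaldist import Pmf

hypos = np.linspace(0, 1, 101)  # candidate values of theta = P(heads)
prior = Pmf(1, hypos)

# Likelihood of each outcome under every hypothesis theta:
# P('H' | theta) = theta, P('T' | theta) = 1 - theta
likelihood = {'H': hypos, 'T': 1 - hypos}

def update_euro(pmf, dataset):
    """Update pmf with a sequence of 'H'/'T' outcomes.

    Each pass through the loop multiplies in one independent factor
    P(D_x | theta); after n flips the pmf is proportional to the joint
    likelihood P(D_1|theta) * ... * P(D_n|theta) = P(D|theta), and
    normalize() then turns it into the posterior.
    """
    for data in dataset:
        pmf *= likelihood[data]
    pmf.normalize()

# Example: the Euro problem's 140 heads and 110 tails
dataset = 'H' * 140 + 'T' * 110
posterior = prior.copy()
update_euro(posterior, dataset)
```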
Cheers,
Yifan
Thanks for these suggestions, and sorry for taking so long to get to this! I use improper priors in a few places in the book because they get normalized after the first update anyway, so it doesn't affect the results -- as you showed.