forked from PRML/PRMLT

Commit: 8 changed files with 72 additions and 25 deletions.
```diff
@@ -1,7 +1,11 @@
 function [model, llh] = hmmEm(x, init)
 % EM algorithm to fit the parameters of an HMM (a.k.a. Baum-Welch algorithm)
-% x: 1 x n sequence of observations
+% Input:
+%   x: 1 x n integer vector which is the sequence of observations
 %   init: model or k
+% Output:
+%   model: trained model structure
+%   llh: loglikelihood
 % Written by Mo Chen ([email protected]).
 n = size(x,2);
 d = max(x);
```
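For orientation, the Baum-Welch loop that hmmEm implements can be sketched as follows. This is an illustrative NumPy reimplementation under stated assumptions, not the repository's code: all names are made up, indexing is 0-based, and the update formulas are the standard discrete-HMM EM updates (scaled forward-backward for the E-step, normalized expected counts for the M-step).

```python
import numpy as np

def hmm_em(x, k, d, n_iter=50, seed=0, tol=1e-6):
    """Baum-Welch sketch for a discrete HMM.
    x: (n,) observation symbols in {0..d-1}; k: number of hidden states."""
    rng = np.random.default_rng(seed)
    n = len(x)
    A = rng.random((k, k)); A /= A.sum(axis=1, keepdims=True)   # transitions
    E = rng.random((k, d)); E /= E.sum(axis=1, keepdims=True)   # emissions
    s = np.full(k, 1.0 / k)                                     # initial prior
    llh = []
    for _ in range(n_iter):
        M = E[:, x]                               # (k, n) emission likelihoods
        # E-step: normalized (scaled) forward-backward
        alpha = np.zeros((k, n)); c = np.zeros(n)
        a = s * M[:, 0]; c[0] = a.sum(); alpha[:, 0] = a / c[0]
        for t in range(1, n):
            a = (A.T @ alpha[:, t-1]) * M[:, t]
            c[t] = a.sum(); alpha[:, t] = a / c[t]
        beta = np.ones((k, n))
        for t in range(n-2, -1, -1):
            beta[:, t] = A @ (beta[:, t+1] * M[:, t+1]) / c[t+1]
        gamma = alpha * beta                      # p(z_t | x_{1:n})
        xi = np.zeros((k, k))                     # summed two-slice marginals
        for t in range(n-1):
            xi += np.outer(alpha[:, t], beta[:, t+1] * M[:, t+1]) * A / c[t+1]
        # M-step: normalized expected counts
        s = gamma[:, 0]
        A = xi / xi.sum(axis=1, keepdims=True)
        E = np.zeros((k, d))
        for t in range(n):
            E[:, x[t]] += gamma[:, t]
        E /= E.sum(axis=1, keepdims=True)
        llh.append(np.log(c).sum())               # loglikelihood from scalers
        if len(llh) > 1 and llh[-1] - llh[-2] < tol:
            break
    return A, E, s, llh
```

Because the E-step uses the scaled quantities, the loglikelihood comes for free as the sum of log normalizers, and EM guarantees it is non-decreasing across iterations.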
```diff
@@ -1,7 +1,13 @@
 function [alpha, energy] = hmmFilter(x, model)
-% HMM forward filtering algorithm
+% HMM forward filtering algorithm. This is a wrapper function which transforms the input and calls the underlying algorithm.
 % Unlike the method described in the PRML book, the alpha returned is the normalized version: alpha(t)=p(z_t|x_{1:t})
-% The unnormalized version is numerical unstable. alpha(t)=p(z_t,x_{1:t}) grows exponential fast to infinity.
+% Computing the unnormalized version alpha(t)=p(z_t,x_{1:t}) is numerically unstable, since it grows exponentially fast to infinity.
+% Input:
+%   x: 1 x n integer vector which is the sequence of observations
+%   model: model structure
+% Output:
+%   alpha: k x n matrix of posterior alpha(t)=p(z_t|x_{1:t})
+%   energy: loglikelihood
 % Written by Mo Chen ([email protected]).
 A = model.A;
 E = model.E;
```
```diff
@@ -1,7 +1,12 @@
 function [alpha, energy] = hmmFilter_(M, A, s)
-% HMM forward filtering algorithm
-% Unlike the method described in the book of PRML, the alpha returned is the normalized version: alpha(t)=p(z_t|x_{1:t})
-% The unnormalized version is numerical unstable. alpha(t)=p(z_t,x_{1:t}) grows exponential fast to infinity.
+% Implementation function of the HMM forward filtering algorithm.
+% Input:
+%   M: k x n emission data matrix M=E*X
+%   A: k x k transition matrix
+%   s: k x 1 starting probability (prior)
+% Output:
+%   alpha: k x n matrix of posterior alpha(t)=p(z_t|x_{1:t})
+%   energy: loglikelihood
 % Written by Mo Chen ([email protected]).
 [K,T] = size(M);
 At = A';
```
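The normalized forward recursion that hmmFilter_ implements can be sketched compactly in NumPy. This is a hypothetical translation for illustration (not part of the repository; the toy matrices in the usage below are made up, and indexing is 0-based unlike MATLAB):

```python
import numpy as np

def hmm_filter(M, A, s):
    """Normalized HMM forward filtering: alpha[:, t] = p(z_t | x_{1:t}).

    M: (k, n) emission matrix, M[i, t] = p(x_t | z_t = i)
    A: (k, k) transition matrix, A[i, j] = p(z_t = j | z_{t-1} = i)
    s: (k,) prior over the initial state
    Returns alpha and the loglikelihood sum(log c_t).
    """
    k, n = M.shape
    alpha = np.zeros((k, n))
    c = np.zeros(n)
    a = s * M[:, 0]
    c[0] = a.sum()                              # c_1 = p(x_1)
    alpha[:, 0] = a / c[0]
    for t in range(1, n):
        a = (A.T @ alpha[:, t - 1]) * M[:, t]   # predict, then correct
        c[t] = a.sum()                          # c_t = p(x_t | x_{1:t-1})
        alpha[:, t] = a / c[t]
    return alpha, np.log(c).sum()
```

Normalizing at every step keeps each column of alpha a proper distribution, and the loglikelihood falls out as the sum of the log normalizers — the same reason the committed code avoids the unnormalized recursion.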
```diff
@@ -1,5 +1,16 @@
 function [gamma, alpha, beta, c] = hmmSmoother(x, model)
-% HMM smoothing alogrithm (normalized forward-backward or normalized alpha-beta algorithm)
+% HMM smoothing algorithm (normalized forward-backward, or normalized alpha-beta, algorithm). This is a wrapper function which transforms the input and calls the underlying algorithm.
+% Unlike the method described in the PRML book, the gamma returned is the normalized version: gamma(t)=p(z_t|x_{1:T})
+% Computing the unnormalized version gamma(t)=p(z_t,x_{1:T}) is numerically unstable, since it grows exponentially fast to infinity.
+% Input:
+%   x: 1 x n integer vector which is the sequence of observations
+%   model: model structure
+% Output:
+%   gamma: k x n matrix of posterior gamma(t)=p(z_t|x_{1:T})
+%   alpha: k x n matrix of posterior alpha(t)=p(z_t|x_{1:t})
+%   beta: k x n matrix of ratio beta(t)=gamma(t)/alpha(t)
+%   c: loglikelihood
+% Written by Mo Chen ([email protected]).
 A = model.A;
 E = model.E;
 s = model.s;
```
```diff
@@ -1,5 +1,16 @@
 function [gamma, alpha, beta, c] = hmmSmoother_(M, A, s)
-% HMM smoothing alogrithm (normalized forward-backward or normalized alpha-beta algorithm)
+% Implementation function of the HMM smoothing algorithm.
+% Unlike the method described in the PRML book, the gamma returned is the normalized version: gamma(t)=p(z_t|x_{1:T})
+% Computing the unnormalized version gamma(t)=p(z_t,x_{1:T}) is numerically unstable, since it grows exponentially fast to infinity.
+% Input:
+%   M: k x n emission data matrix M=E*X
+%   A: k x k transition matrix
+%   s: k x 1 starting probability (prior)
+% Output:
+%   gamma: k x n matrix of posterior gamma(t)=p(z_t|x_{1:T})
+%   alpha: k x n matrix of posterior alpha(t)=p(z_t|x_{1:t})
+%   beta: k x n matrix of ratio beta(t)=gamma(t)/alpha(t)
+%   c: loglikelihood
 % Written by Mo Chen ([email protected]).
 [K,T] = size(M);
 At = A';
@@ -13,5 +24,5 @@
 for t = T-1:-1:1
     beta(:,t) = A*(beta(:,t+1).*M(:,t+1))/c(t+1);   % 13.62
 end
-gamma = alpha.*beta;
+gamma = alpha.*beta;   % 13.64
```
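The full forward-backward pass in hmmSmoother_ combines the filtering recursion with the backward recursion (PRML eq. 13.62) and the product gamma = alpha.*beta (eq. 13.64). A minimal NumPy sketch of the same computation, under the same assumptions as above (illustrative only, 0-based indexing, toy values made up):

```python
import numpy as np

def hmm_smoother(M, A, s):
    """Normalized forward-backward: gamma[:, t] = p(z_t | x_{1:n})."""
    k, n = M.shape
    # Forward pass: normalized alpha with scaling factors c
    alpha = np.zeros((k, n))
    c = np.zeros(n)
    a = s * M[:, 0]
    c[0] = a.sum()
    alpha[:, 0] = a / c[0]
    for t in range(1, n):
        a = (A.T @ alpha[:, t - 1]) * M[:, t]
        c[t] = a.sum()
        alpha[:, t] = a / c[t]
    # Backward pass, rescaled by the same c (PRML 13.62)
    beta = np.ones((k, n))
    for t in range(n - 2, -1, -1):
        beta[:, t] = A @ (beta[:, t + 1] * M[:, t + 1]) / c[t + 1]
    gamma = alpha * beta        # PRML 13.64
    return gamma, alpha, beta, c
```

Because beta is initialized to ones, the last column of gamma coincides with the last column of alpha, as it should: with no future observations, smoothing reduces to filtering.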
```diff
@@ -1,5 +1,12 @@
-function [argmax, prob] = hmmViterbi(x, model)
-% Viterbi algorithm
+function z = hmmViterbi(x, model)
+% Viterbi algorithm computed in log scale to improve numerical stability.
+% This is a wrapper function which transforms the input and calls the underlying algorithm.
+% Input:
+%   x: 1 x n integer vector which is the sequence of observations
+%   model: model structure
+% Output:
+%   z: 1 x n latent state
+% Written by Mo Chen ([email protected]).
 A = model.A;
 E = model.E;
 s = model.s;
@@ -8,4 +15,4 @@
 d = max(x);
 X = sparse(x,1:n,1,d,n);
 M = E*X;
-[argmax, prob] = hmmViterbi_(M, A, s);
+z = hmmViterbi_(M, A, s);
```
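The wrapper's sparse one-hot trick, X = sparse(x,1:n,1,d,n) followed by M = E*X, simply gathers the emission column matching the observed symbol at each time step. In NumPy the same thing is plain integer indexing; the sketch below shows both routes agreeing (toy E and x made up for illustration, 0-based symbols unlike MATLAB):

```python
import numpy as np

# E[i, j] = p(x = j | z = i): emissions for k=2 states, d=3 symbols
E = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.2, 0.7]])
x = np.array([0, 2, 1, 2])          # observed symbol sequence

# One-hot route, mirroring M = E*X in the MATLAB wrapper
X = np.zeros((3, 4))
X[x, np.arange(4)] = 1.0            # X[:, t] is the one-hot vector of x[t]
M_onehot = E @ X

# Direct route: pick the matching emission column per time step
M = E[:, x]
```

Both give the k x n matrix M[i, t] = p(x_t | z_t = i) that the underlying filtering, smoothing, and Viterbi routines consume.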
```diff
@@ -1,9 +1,12 @@
-function [argmax, prob] = hmmViterbi_(M, A, s)
-% Implmentation function of Viterbi algorithm. (not supposed to be called
-% directly)
-% M: data matrix
-% A: transition probability matrix
-% s: starting probability (prior)
+function [z, p] = hmmViterbi_(M, A, s)
+% Implementation function of the Viterbi algorithm.
+% Input:
+%   M: k x n emission data matrix M=E*X
+%   A: k x k transition matrix
+%   s: k x 1 starting probability (prior)
+% Output:
+%   z: 1 x n latent state
+%   p: 1 x n probability
 % Written by Mo Chen ([email protected]).
 [k,n] = size(M);
 Z = zeros(k,n);
@@ -18,5 +21,5 @@
 Z(:,t) = 1:k;
 end
 [v,idx] = max(v);
-argmax = Z(idx,:);
-prob = exp(v);
+z = Z(idx,:);
+p = exp(v);
```
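The log-scale Viterbi recursion that hmmViterbi_ performs can be sketched as below. This is a hypothetical NumPy version for illustration only: it uses conventional back-pointers rather than the committed code's path-array bookkeeping, but computes the same maximum-probability state sequence. Names and the toy values in the test are made up; indexing is 0-based.

```python
import numpy as np

def hmm_viterbi(M, A, s):
    """Viterbi in log scale: most likely state path z and its probability p."""
    k, n = M.shape
    logM, logA = np.log(M), np.log(A)
    Z = np.zeros((k, n), dtype=int)          # back-pointers
    v = np.log(s) + logM[:, 0]               # v[j] = best log-prob ending in j
    for t in range(1, n):
        score = v[:, None] + logA            # score[i, j]: come from i, land in j
        Z[:, t] = np.argmax(score, axis=0)
        v = np.max(score, axis=0) + logM[:, t]
    z = np.zeros(n, dtype=int)
    z[-1] = int(np.argmax(v))
    for t in range(n - 1, 0, -1):            # trace back-pointers
        z[t - 1] = Z[z[t], t]
    p = np.exp(np.max(v))                    # probability of the best path
    return z, p
```

Working in log scale turns the long product of transition and emission probabilities into a sum, which is exactly the stability fix the commit message in the wrapper's header describes.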