Commit 3b82e41 (parent 418ec78), forked from szcf-weiya/ESL-CN: 23 changed files with 531 additions and 0 deletions.
# ESL-CN

A Chinese translation of The Elements of Statistical Learning (ESL).

## Progress

Last updated 2017.03.04.
- [x] 1 2016.07.26
- [x] 2.1 2016.08.01
- [x] 2.2 2016.08.01
- [x] 2.3 2016.08.01
- [x] 2.4 2016.08.01
- [x] 2.5 2016.08.01
- [x] 2.6 2016.08.01
- [x] 2.7 2016.08.01
- [x] 2.8 2016.08.01
- [x] 2.9 2016.08.01
- [x] 3.1 2016.08.02
- [x] 3.2 2016.08.03
- [x] 3.3 2016.08.05
- [x] 3.4 2016.09.30:2016.10.14
- [x] 3.5 2016.10.14:2016.10.21
- [x] 3.6 2016.10.21
- [ ] 3.7
- [ ] 3.8
- [ ] 3.9
- [x] 4.1 2016.12.06
- [x] 4.2 2016.12.06
- [x] 4.3 2016.12.09:2016.12.10
- [x] 4.4 2016.12.09:2016.12.15
- [x] 4.5 2016.12.15
- [x] 5.1 2017.01.28
- [x] 5.2 2017.02.08:2017.02.16
- [x] 5.3 2017.02.16
- [x] 5.4 2017.02.16
- [ ] 5.5
- [ ] 5.6
- [ ] 5.7
- [ ] 5.8
- [ ] 5.9
- [x] 6.1 2017.02.27:2017.02.28
- [x] 6.2 2017.03.01
- [x] 6.3 2017.03.01:2017.03.02
- [x] 6.4 2017.03.03
- [x] 6.5 2017.03.04
- [x] 6.6 2017.03.04
- [ ] 6.7
- [ ] 6.8
- [ ] 6.9
- [x] 7.1 2017.01.28
- [x] 7.2 2017.02.18
- [x] 7.3 2017.02.18
- [x] 7.4 2017.02.18
- [x] 7.5 2017.02.18
- [x] 7.6 2017.02.18
- [x] 7.7 2017.02.18:2017.02.19
- [x] 7.8 2017.02.19
- [x] 7.9 2017.02.19
- [x] 7.10 2017.02.17:2017.02.18
- [x] 7.11 2017.02.19
- [x] 7.12 2017.02.20
- [x] 8.1 2017.01.28
- [x] 8.2 2017.01.31
- [x] 8.3 2017.02.01
- [x] 8.4 2017.02.01
- [x] 8.5 2016.12.20 & 2017.02.01:2017.02.03
- [x] 8.6 2017.02.03
- [x] 8.7 2017.02.03
- [ ] 8.8
- [ ] 8.9
- [x] 9.1 2017.02.04
- [x] 9.2 2017.02.05
- [ ] 9.3
- [ ] 9.4
- [ ] 9.5
- [ ] 9.6
- [ ] 9.7
- [x] 10.1 2017.02.06
- [x] 10.2 2017.02.06
- [x] 10.3 2017.02.06
- [x] 10.4 2017.02.06
- [x] 10.5 2017.02.06
- [x] 10.6 2017.02.06
- [ ] 10.7
- [ ] 10.8
- [ ] 10.9
- [ ] 10.10
- [ ] 10.11
- [ ] 10.12
- [ ] 10.13
- [ ] 10.14
- [x] 11.1 2017.01.28
- [x] 11.2 2017.02.07
- [x] 11.3 2017.02.07
- [x] 11.4 2017.02.07
- [x] 11.5 2017.02.07
- [x] 11.6 2017.02.07
- [ ] 11.7
- [ ] 11.8
- [ ] 11.9
- [ ] 11.10
- [x] 12.1 2016.12.09
- [x] 12.2 2016.12.19:2016.12.20
- [ ] 12.3
- [ ] 12.4
- [ ] 12.5
- [ ] 12.6
- [ ] 12.7
- [x] 13.1 2017.01.28
- [ ] 13.2
- [ ] 13.3
- [ ] 13.4
- [ ] 13.5
- [x] 14.1 2017.02.20
- [x] 14.2 2017.02.20:2017.02.22
- [x] 14.3 2017.02.22:2017.02.23
- [ ] 14.4
- [ ] 14.5
- [ ] 14.6
- [ ] 14.7
- [ ] 14.8
- [ ] 14.9
- [ ] 14.10
- [x] 15.1 2017.01.28
- [ ] 15.2
- [ ] 15.3
- [ ] 15.4
- [ ] 16.1
- [ ] 16.2
- [ ] 16.3
- [x] 17.1 2017.02.24
- [x] 17.2 2017.02.24
- [x] 17.3 2017.02.24:2017.02.25
- [x] 17.4 2017.02.25:2017.02.26
- [ ] 18.1
- [ ] 18.2
- [ ] 18.3
- [ ] 18.4
- [ ] 18.5
- [ ] 18.6
- [ ] 18.7
- [ ] 18.8
docs/06 Kernel Smoothing Methods/6.5 Local Likelihood and Other Models.md (53 additions, 0 deletions)
# Local Likelihood and Other Models

| Source | [The Elements of Statistical Learning](../book/The Elements of Statistical Learning.pdf) |
| ---- | ---------------------------------------- |
| Translator | szcf-weiya |
| Date | 2017-03-04 |

The concepts of local regression and varying coefficient models are extremely broad: any parametric model can be made local if the fitting method accommodates observation weights. Here are some examples:
- Associated with each observation $y_i$ is a parameter $\theta_i=\theta(x_i)=x_i^T\beta$ linear in the covariate $x_i$, and inference for $\beta$ is based on the log-likelihood $l(\beta)=\sum_{i=1}^Nl(y_i,x_i^T\beta)$. We can model $\theta(X)$ more flexibly by using the likelihood local to $x_0$ for inference of $\theta(x_0)=x_0^T\beta(x_0)$:
$$
l(\beta(x_0))=\sum\limits_{i=1}^NK_\lambda(x_0,x_i)l(y_i,x_i^T\beta(x_0))
$$
  This covers many likelihood models, in particular the family of generalized linear models, which includes logistic and log-linear models. Local likelihood allows a relaxation from a globally linear model to one that is locally linear.

- As above, except that different variables are associated with $\theta$ from those used for defining the local likelihood:
$$
l(\theta(z_0))=\sum\limits_{i=1}^NK_\lambda(z_0,z_i)l(y_i,\eta(x_i,\theta(z_0)))
$$
  For example, $\eta(x,\theta)=x^T\theta$ could be linear in $x$. This fits a varying coefficient model $\theta(z)$ by maximizing the local likelihood.

- Autoregressive time series models of order $k$ have the form $y_t=\beta_0+\beta_1y_{t-1}+\beta_2y_{t-2}+\cdots+\beta_ky_{t-k}+\epsilon_t$. Denoting the lag set by $z_t=(y_{t-1},y_{t-2},\cdots,y_{t-k})$, the model looks like a standard linear model $y_t=z_t^T\beta+\epsilon_t$, and is typically fit by least squares. Fitting by locally weighted least squares with a kernel $K(z_0,z_t)$ allows the model to vary according to the short-term history of the series. This is to be distinguished from the more traditional dynamic linear models, which vary by windowing time.
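The autoregressive example above can be sketched in a few lines. This is an illustrative implementation of my own, not code from the book: the function name `local_ar_fit`, the Gaussian kernel, and the bandwidth `lam` are all assumptions of the sketch.

```python
import numpy as np

def local_ar_fit(y, k, z0, lam):
    """Fit y_t = beta0 + z_t^T beta by kernel-weighted least squares,
    with lag vector z_t = (y_{t-1}, ..., y_{t-k}) and weights K_lam(z0, z_t)."""
    T = len(y)
    # Column j holds y_{t-1-j} for targets y_k, ..., y_{T-1}
    Z = np.column_stack([y[k - j - 1:T - j - 1] for j in range(k)])
    target = y[k:]
    # Gaussian kernel weights centered at the query lag vector z0
    w = np.exp(-0.5 * (np.linalg.norm(Z - z0, axis=1) / lam) ** 2)
    X = np.column_stack([np.ones(len(target)), Z])       # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ target)
    return beta  # beta[0] is the intercept, beta[1:] the lag coefficients
```

With a very large bandwidth the weights become constant and the fit reduces to ordinary least squares for the global AR(k) model; shrinking `lam` lets the coefficients track the recent history near `z0`.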
As an illustration of local likelihood, we consider the local version of the multiclass linear logistic regression model (4.36) of Chapter 4. The data consist of features $x_i$ and an associated categorical response $g_i\in\{1,2,\ldots,J\}$, and the linear model has the form
$$
Pr(G=j\mid X=x)=\frac{e^{\beta_{j0}+\beta_j^Tx}}{1+\sum_{k=1}^{J-1}e^{\beta_{k0}+\beta_k^Tx}}\qquad (6.18)
$$
The local log-likelihood for this $J$-class model can be written as
$$
\sum\limits_{i=1}^NK_\lambda(x_0,x_i)\left\{\beta_{g_i0}(x_0)+\beta_{g_i}(x_0)^T(x_i-x_0)-\log\left[1+\sum\limits_{k=1}^{J-1}\exp\left(\beta_{k0}(x_0)+\beta_k(x_0)^T(x_i-x_0)\right)\right]\right\}\qquad (6.19)
$$
Notice that

- we use $g_i$ as a subscript to pick out the appropriate numerator;
- $\beta_{J0}=0$ and $\beta_J=0$ by the definition of the model;
- we have centered the local regressions at $x_0$, so that the fitted posterior probabilities at $x_0$ simplify to
$$
\widehat{Pr}(G=j\mid X=x_0)=\frac{e^{\hat \beta_{j0}(x_0)}}{1+\sum_{k=1}^{J-1}e^{\hat\beta_{k0}(x_0)}}\qquad (6.20)
$$
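For the two-class case ($J=2$), maximizing (6.19) reduces to a kernel-weighted logistic regression on features centered at $x_0$. The sketch below is my own illustration, not the book's code; the Newton iteration, the Gaussian kernel, and the small ridge added for numerical stability are choices of the sketch. Because the design is centered, the fitted probability at $x_0$ is simply the inverse logit of the local intercept, as in (6.20).

```python
import numpy as np

def local_logistic_prob(X, y, x0, lam, n_iter=25):
    """Pr-hat(G=1 | X=x0) from a local linear logistic fit centered at x0."""
    w = np.exp(-0.5 * (np.linalg.norm(X - x0, axis=1) / lam) ** 2)  # K_lam(x0, x_i)
    Z = np.column_stack([np.ones(len(y)), X - x0])                  # centered design
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):                                         # Newton-Raphson steps
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        grad = Z.T @ (w * (y - p))                                  # kernel-weighted score
        hess = (Z * (w * p * (1 - p))[:, None]).T @ Z               # weighted information
        beta += np.linalg.solve(hess + 1e-8 * np.eye(len(beta)), grad)
    return 1.0 / (1.0 + np.exp(-beta[0]))                           # inverse logit, as in (6.20)
```

Repeating this fit over a grid of query points $x_0$ traces out a fitted prevalence curve like the ones in Figure 6.12.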
This model can be used for flexible multiclass classification in moderately low dimensions, although successes have been reported with the high-dimensional ZIP-code classification problem. Generalized additive models (Chapter 9) using kernel smoothing methods are closely related, and avoid dimensionality problems by assuming an additive structure for the regression function.

As a simple illustration, we fit a two-class local linear logistic model to the heart disease data of Chapter 4. Figure 6.12 shows the univariate local logistic models fit to two of the risk factors (separately). This is a useful screening device for detecting nonlinearities, when the data themselves offer little visual information. In this case an unexpected anomaly in the data is uncovered, which might have gone unnoticed with traditional methods.

![](../img/06/fig6.12.png)

> Each plot shows the binary response CHD (coronary heart disease) as a function of a risk factor, for the South African heart disease data. For each plot we have computed the fitted prevalence of CHD using a local linear logistic regression model. The unexpected increase in the prevalence of CHD at the lower ends of the ranges is because these are retrospective data, and some of the subjects had already undergone treatment to reduce their blood pressure and weight. The shaded region in each plot indicates an estimated pointwise standard error band.

Since CHD is a binary indicator, we could estimate the conditional prevalence $Pr(G=j\mid x_0)$ by simply smoothing this binary response directly, without resorting to a likelihood formulation. This amounts to fitting a locally constant logistic regression model (Exercise 6.5). In order to enjoy the bias correction of local linear fitting, it is more natural to operate on the unrestricted logit scale.

Typically with logistic regression, we compute parameter estimates as well as their standard errors. This can be done locally as well, which allows us to produce, as shown in the figure, estimated pointwise standard error bands about the fitted prevalence.
docs/06 Kernel Smoothing Methods/6.6 Kernel Density Estimation and Classification.md (79 additions, 0 deletions)
# Kernel Density Estimation and Classification

| Source | [The Elements of Statistical Learning](../book/The Elements of Statistical Learning.pdf) |
| ---- | ---------------------------------------- |
| Translator | szcf-weiya |
| Date | 2017-03-04 |
Kernel density estimation is an unsupervised learning procedure, which historically precedes kernel regression. It also leads naturally to a simple family of nonparametric classification procedures.

## Kernel Density Estimation

Suppose we have a random sample $x_1,x_2,\ldots,x_N$ drawn from a probability density $f_X(x)$, and we wish to estimate $f_X$ at a point $x_0$. For simplicity, assume for now that $X\in R$. As before, a natural local estimate has the form
$$
\hat f_X(x_0)=\frac{\#x_i\in \mathcal N(x_0)}{N\lambda}\qquad (6.21)
$$
where $\mathcal N(x_0)$ is a small metric neighborhood around $x_0$ of width $\lambda$. This estimate is bumpy, and the smoother Parzen estimate is preferred:
$$
\hat f_X(x_0)=\frac{1}{N\lambda}\sum\limits_{i=1}^NK_\lambda(x_0,x_i)\qquad (6.22)
$$
because it counts observations close to $x_0$ with weights that decrease with distance from $x_0$. In this case a popular choice for $K_\lambda$ is the Gaussian kernel $K_\lambda(x_0,x)=\phi(\vert x-x_0\vert/\lambda)$. Figure 6.13 shows a Gaussian kernel density fit to the sample values of systolic blood pressure for the CHD group. Letting $\phi_\lambda$ denote the Gaussian density with mean zero and standard deviation $\lambda$, (6.22) has the form
$$
\begin{align}
\hat f_X(x)& = \frac{1}{N}\sum\limits_{i=1}^N\phi_\lambda(x-x_i)\\
&=(\hat F\star\phi_\lambda)(x)\qquad\qquad (6.23)
\end{align}
$$

the convolution of the sample empirical distribution $\hat F$ with $\phi_\lambda$. The distribution $\hat F(x)$ puts mass $1/N$ at each of the observations $x_i$, and is jumpy; in $\hat f_X(x)$ we have smoothed $\hat F$ by adding independent Gaussian noise to each observation $x_i$.

The Parzen density estimate is the equivalent of the local average, and improvements have been proposed along the lines of local regression [on the log scale for densities; see Loader (1999)]. We will not pursue these here. In $R^p$ the natural generalization of the Gaussian density estimate amounts to using the Gaussian product kernel in (6.23),

$$
\hat f_X(x_0)=\frac{1}{N(2\lambda^2\pi)^{p/2}}\sum\limits_{i=1}^Ne^{-\frac{1}{2}(\Vert x_i-x_0\Vert/\lambda)^2}\qquad (6.24)
$$
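As a concrete check on the normalization in (6.24), here is a minimal sketch, my own rather than the book's, of the Gaussian product-kernel estimate; the data and bandwidth in any use of it are purely illustrative.

```python
import numpy as np

def parzen_gaussian(X, x0, lam):
    """Gaussian product-kernel density estimate (6.24) at a query point x0."""
    X = np.atleast_2d(X)
    N, p = X.shape
    norm = N * (2.0 * lam ** 2 * np.pi) ** (p / 2.0)   # N * (2*lam^2*pi)^(p/2)
    sq = np.sum((X - x0) ** 2, axis=1)                 # ||x_i - x0||^2 per sample
    return np.sum(np.exp(-0.5 * sq / lam ** 2)) / norm
```

Since each kernel bump integrates to one, the estimate itself integrates to one, which a numerical check over a fine grid confirms.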
## Kernel Density Classification

Nonparametric density estimates can be used for classification in a straightforward fashion via Bayes' theorem. Suppose for a $J$-class problem we fit nonparametric density estimates $\hat f_j(X),j=1,\ldots,J$ separately in each of the classes, and we also have estimates of the class priors $\hat \pi_j$ (usually the sample proportions). Then
$$
\widehat{Pr}(G=j\mid X=x_0)=\frac{\hat \pi_j\hat f_j(x_0)}{\sum_{k=1}^J\hat\pi_k\hat f_k(x_0)}\qquad (6.25)
$$
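A one-dimensional sketch of (6.25), with a Gaussian kernel density estimate per class and sample proportions as priors. This is my illustration; the helper name, bandwidth, and labels are not from the book.

```python
import numpy as np

def kde_posterior(x, g, x0, lam):
    """Posterior Pr-hat(G=j | X=x0) via (6.25), for 1-D features x."""
    classes = np.unique(g)
    scores = np.empty(len(classes))
    for idx, j in enumerate(classes):
        xj = x[g == j]
        pi_j = len(xj) / len(x)                 # prior: sample proportion
        # Parzen estimate (6.22) of f_j at x0 with a Gaussian kernel
        f_j = np.mean(np.exp(-0.5 * ((xj - x0) / lam) ** 2)) / (lam * np.sqrt(2.0 * np.pi))
        scores[idx] = pi_j * f_j                # numerator pi_j * f_j(x0)
    return scores / scores.sum()                # normalize over classes
```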
Figure 6.14 uses this method to estimate the prevalence of CHD for the heart risk factor study, and should be compared with the left panel of Figure 6.12. The main difference occurs in the region of high SBP in the right panel of Figure 6.14. In this region the data are sparse for both classes, and since the Gaussian kernel density estimates use metric kernels, the density estimates are low and of poor quality (high variance) there. The local logistic regression method (6.20) uses the tri-cube kernel with $k$-NN bandwidth; this effectively widens the kernel in this region, and makes use of the local linear assumption to smooth out the estimate (on the logit scale).

![](../img/06/fig6.14.png)

> The left panel shows the two separate density estimates for systolic blood pressure in the CHD versus no-CHD groups, using a Gaussian kernel density estimate in each. The right panel shows the estimated posterior probabilities for CHD, using (6.25).

If classification is the ultimate goal, then learning the separate class densities well may be unnecessary, and can in fact be misleading. Figure 6.15 shows an example where the densities are both multimodal, but the posterior ratio is quite smooth. In learning the separate densities from data, one might decide to settle for a rougher, high-variance fit to capture these features, which are irrelevant for the purposes of estimating the posterior probabilities. In fact, if classification is the ultimate goal, we need only estimate the posterior well near the decision boundary (for two classes, this is the set $\{x\mid Pr(G=1\mid X=x)=\frac{1}{2}\}$).
## Naive Bayes Classifier

This is a technique that has remained popular over the years, in spite of its name (it is also known as "idiot's Bayes"!). It is especially appropriate when the dimension $p$ of the feature space is high, making density estimation unattractive. The naive Bayes model assumes that given a class $G=j$, the features $X_k$ are independent:

$$
f_j(X)=\prod\limits_{k=1}^pf_{jk}(X_k)\qquad (6.26)
$$

While this assumption is generally not true, it does simplify the estimation dramatically:

- The individual class-conditional marginal densities $f_{jk}$ can each be estimated separately using one-dimensional kernel density estimates. This is in fact a generalization of the original naive Bayes procedure, which used univariate Gaussians to represent these marginals.
- If a component $X_j$ of $X$ is discrete, then an appropriate histogram estimate can be used. This provides a seamless way of mixing variable types in a feature vector.
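The original procedure with univariate Gaussian marginals can be sketched as follows. This is my illustration of (6.26), not the book's code; to use the kernel generalization instead, replace the Gaussian marginal with a one-dimensional Parzen estimate.

```python
import numpy as np

def naive_bayes_posterior(X, g, x0):
    """Pr-hat(G=j | X=x0) under the independence assumption (6.26),
    with each marginal f_{jk} modeled as a univariate Gaussian."""
    classes = np.unique(g)
    scores = np.empty(len(classes))
    for idx, j in enumerate(classes):
        Xj = X[g == j]
        mu, sd = Xj.mean(axis=0), Xj.std(axis=0)   # per-feature Gaussian fits
        # f_j(x0) = product over the p features of the marginal densities
        marg = np.exp(-0.5 * ((x0 - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
        scores[idx] = (len(Xj) / len(X)) * np.prod(marg)
    return scores / scores.sum()
```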
Despite these rather optimistic assumptions, naive Bayes classifiers often outperform far more sophisticated alternatives. The reasons are related to Figure 6.15: although the individual class density estimates may be biased, this bias might not hurt the posterior probabilities as much, especially near the decision regions. In fact, the problem may be able to withstand considerable bias for the savings in variance that such a "naive" assumption earns.
Starting from (6.26) we can derive the logit transform (using class $J$ as the base):

$$
\begin{align}
\log\frac{Pr(G=\ell\mid X)}{Pr(G=J\mid X)}&=\log\frac{\pi_\ell f_\ell(X)}{\pi_Jf_J(X)}\\
&=\log\frac{\pi_\ell\prod_{k=1}^pf_{\ell k}(X_k)}{\pi_J\prod_{k=1}^pf_{Jk}(X_k)}\\
&=\log\frac{\pi_\ell}{\pi_J}+\sum\limits_{k=1}^p\log\frac{f_{\ell k}(X_k)}{f_{Jk}(X_k)}\\
&=\alpha_\ell + \sum\limits_{k=1}^pg_{\ell k}(X_k)
\end{align}
\qquad (6.27)
$$

This has the form of a generalized additive model, which is described in more detail in Chapter 9. The models are fit in quite different ways, though; their differences are explored in Exercise 6.9. The relationship between naive Bayes and generalized additive models is analogous to that between linear discriminant analysis and logistic regression (Section 4.4.5).