diff --git a/docs/14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling.md b/docs/14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling.md index ab294f1be0..36b066a973 100644 --- a/docs/14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling.md +++ b/docs/14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling.md @@ -4,7 +4,7 @@ | ---- | ---------------------------------------- | | 翻译 | szcf-weiya | | 时间 | 2017-09-04 | -|更新| 2018-01-22| +| 更新 | 2020-02-20 15:46:34 | |状态|Done| 最近有人提出了一些用于非线性降维的方法,类似主曲面的思想.想法是将数据看成位于一个嵌在高维空间中的 **固有低维非线性流形 (intrinsically low-dimensional nonlinear manifold)** 的附近.这些方法可以看成是“压扁(flattening)”流形,因此将数据降低至低维坐标系统中,用于表示点在流形中的相对位置.这类方法在信噪比非常高的问题中非常有用(比如,物理系统),而对于信噪比较低的观测数据则不太有用. @@ -15,14 +15,14 @@ 我们将简短地介绍用作非线性降维和流形映射的三个新方法. -**等距特征映射算法(ISOMAP)**(Tenenbaum et al. 2000[^1])构造了一个图来近似沿着流形的点之间的测地线距离.具体地,对每个数据点,我们找到其邻居——距该点的某个欧式距离范围内的点.我们构造任意两个邻居点间用边相连的图.任意两点的测地线距离则用图中点的最短路径来近似.最终,对图的距离应用经典缩放,来得到低纬映射. +**等距特征映射算法 (Isometric feature mapping, ISOMAP)** (Tenenbaum et al. 2000[^1]) 构造了一个图来近似沿着流形的点之间的测地线距离.具体地,对每个数据点,我们找到其邻居——距该点的某个欧氏距离范围内的点.我们构造任意两个邻居点间用边相连的图.任意两点的测地线距离则用图中点的最短路径来近似.最终,对图的距离应用经典缩放,来得到低维映射. -**局部线性内嵌(LLE)**(Roweis and Saul, 2000[^2])采用完全不同的方式,它试图保持高维数据的局部仿射结构.每个数据点用邻居点的线性组合来近似.于是通过寻找保持局部近似的最优方式构造低维表示.细节非常有趣,所以在这里给出: +**局部线性内嵌 (Local linear embedding, LLE)**(Roweis and Saul, 2000[^2])采用完全不同的方式,它试图保持高维数据的局部仿射结构.每个数据点用邻居点的线性组合来近似.于是通过寻找保持局部近似的最优方式构造低维表示.细节非常有趣,所以在这里给出: -1. 对每个 $p$ 维中的数据点 $x_i$,寻找欧式距离的 $K$ 最近邻 $\cal N(i)$ +1. 对每个 $p$ 维数据点 $x_i$,寻找欧氏距离意义下的 $K$ 个最近邻点 $\cal N(i)$. 2. 
对每个点用邻居点的仿射混合来近似 $$ -\underset{W_{ik}}\Vert x_i-\sum\limits_{k\in\cal N(i)}w_{ik}x_k\Vert^2\tag{14.102} +\underset{w_{ik}}{\min}\Vert x_i-\sum\limits_{k\in\cal N(i)}w_{ik}x_k\Vert^2\tag{14.102} $$ 其中权重 $w_{ik}$ 满足 $w_{ik}=0, k\not\in \cal N(i), \sum_{k=1}^Nw_{ik}=1$.$w_{ik}$ 是点 $k$ 对点 $i$ 的重构的贡献.注意到为了得到唯一解,我们必须要求 $K < p$. 3. 最后,固定 $w_{ik}$,在 $d < p$ 维空间中寻找点 $y_i$ 来最小化 @@ -36,9 +36,12 @@ $$ \tr [(\Y-\W\Y)^T(\Y-\W\Y)] = \tr[\Y^T(\I-\W)^T(\I-\W)\Y]\tag{14.104} $$ -其中 $\W$ 是 $N\times N$; $\Y$ 是 $N\times d, d < p$.$\hat \Y$ 的解是 $\M=(\I-\W)^T(\I-\W)$ 的 trailing eigenvectors([Issue 59](https://github.com/szcf-weiya/ESL-CN/issues/59)).因为 $\1$ 是特征值为 0 的平凡特征向量,所以我们舍弃它并且保留接下来的 $d$ 个.这会产生额外的影响 $\1^T\Y=0$,因此嵌入坐标(embedding coordinates)进行了中心化. +其中 $\W$ 是 $N\times N$; $\Y$ 是 $N\times d, d < p$.$\hat \Y$ 的解是 $\M=(\I-\W)^T(\I-\W)$ 的 **尾特征向量 (trailing eigenvectors)**.因为 $\1$ 是特征值为 0 的平凡特征向量,所以我们舍弃它并且保留接下来的 $d$ 个.这会产生额外的影响 $\1^T\Y=0$,因此嵌入坐标的均值为 0. -**局部 MDS**(Chen and Buja, 2008[^3]) 采用最简单的、而且可以说是最直接的方式.定义 $\cal N$ 为邻居点的对称集;具体地,如果点 $i$ 在 $i'$ 的 $K$ 最近邻中,则点对 $(i, i')$ 在 $\cal N$ 中,反过来也是如此. +!!! note "weiya 注:" + 这里的尾特征向量除去了特征值为 0 的平凡特征向量 $\1$,而因为特征向量间正交,所以有 $\1^T\Y=0$. + +**局部多维缩放 (Local MDS)**(Chen and Buja, 2008[^3])采用最简单的、而且可以说是最直接的方式.定义 $\cal N$ 为邻居点的对称集;具体地,如果点 $i$ 在 $i'$ 的 $K$ 最近邻中,则点对 $(i, i')$ 在 $\cal N$ 中,反过来也是如此. !!! note "weiya 注" 在[14.7节](14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)中的谱聚类的 mutual K-nearest-neighbor graph 也有用到 $\cal N$. 
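上面 LLE 的三个步骤可以用如下 NumPy 代码来示意(这只是一份说明性的草稿,并非书中或任何现成库的实现;其中函数名 `lle`、正则化参数 `reg` 以及末尾的示例数据都是为演示而假设的):

```python
import numpy as np

def lle(X, K=10, d=2, reg=1e-3):
    """局部线性内嵌 (LLE) 的示意实现,对应正文的步骤 1--3."""
    N, p = X.shape
    # 步骤 1:按欧氏距离找 K 最近邻(对角线置为 inf 以排除自身)
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D2, np.inf)
    nbrs = np.argsort(D2, axis=1)[:, :K]
    # 步骤 2:求解式 (14.102) 的重构权重,并满足约束 sum_k w_ik = 1
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[nbrs[i]] - X[i]                # 以 x_i 为中心的邻居
        G = Z @ Z.T                          # 局部 Gram 矩阵
        G += reg * np.trace(G) * np.eye(K)   # 正则化:当 K >= p 时 G 奇异,必须加
        w = np.linalg.solve(G, np.ones(K))
        W[i, nbrs[i]] = w / w.sum()          # 归一化使权重和为 1
    # 步骤 3:最小化式 (14.104),取 M 的尾特征向量
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    vals, vecs = np.linalg.eigh(M)           # eigh 按特征值升序返回
    return vecs[:, 1:d + 1]                  # 舍弃特征值为 0 的常值向量,保留接下来的 d 个

# 示例:把嵌在三维空间中的一条一维曲线映射到二维
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 100))
X = np.c_[np.cos(4 * t), np.sin(4 * t), t] + 0.01 * rng.normal(size=(100, 3))
Y = lle(X, K=8, d=2)
print(Y.shape)  # (100, 2)
```

与正文一致,若要求权重严格唯一需 $K < p$;实际计算中(如上面的 `reg`)通常总会加入少量正则化以保证局部 Gram 矩阵可逆、数值稳定.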
@@ -46,24 +49,27 @@ 于是我们构造压力函数 $$ -S_L(z_1,z_2,\ldots, z_N) = \sum\limits_{(i,i')\in \cal N}(d_{ii'}-\Vert z_i-z_{i'}\Vert)^2 + \sum\limits_{(i,i')\not\in \cal N}w\cdot (D-\Vert z_i-z_{i'}\Vert)^2\tag{14.105} +S_L(z_1,z_2,\ldots, z_N) = \sum\limits_{(i,i')\in \cal N}(d_{ii'}-\Vert z_i-z_{i'}\Vert)^2 + \sum\limits_{(i,i')\not\in \cal N}w\cdot (D-\Vert z_i-z_{i'}\Vert)^2\tag{14.105}\label{14.105} $$ -这里$D$是某个较大的常数,$w$是权重.想法是将不是邻居的点看成是距离非常远;这些点对被赋予小权重$w$使得它们不会主导整个压力函数.为了简化表达式,取$w\sim 1/D$,并令$D\rightarrow \infty$.展开式(14.105),得到 +这里 $D$ 是某个较大的常数,$w$ 是权重.想法是将不是邻居的点看成是距离非常远;这些点对被赋予小权重 $w$ 使得它们不会主导整个压力函数.为了简化表达式,取 $w\sim 1/D$,并令 $D\rightarrow \infty$.展开式 \eqref{14.105},得到 $$ -S_L(z_1,z_2,\ldots, z_N)=\sum\limits_{(i, i')\in\cal N}(d_{ii'}-\Vert z_i-z_{i'})^2-\tau \sum\limits_{(i,i')\not \in \cal N}\Vert z_i-z_{i'}\Vert\tag{14.106} +S_L(z_1,z_2,\ldots, z_N)=\sum\limits_{(i, i')\in\cal N}(d_{ii'}-\Vert z_i-z_{i'}\Vert)^2-\tau \sum\limits_{(i,i')\not \in \cal N}\Vert z_i-z_{i'}\Vert\tag{14.106}\label{14.106} $$ -其中$\tau =2wD$.式(14.106)试图保持数据的局部性质,而第二项促使非邻居对$(i, i')$的$z_i,z_{i'}$更远.局部MDS在固定邻居个数$K$以及调整参数$\tau$的情况下,在$z_i$上最小化压力函数(14.106). +其中 $\tau =2wD$.式 \eqref{14.106} 的第一项试图保持数据的局部性质,而第二项促使非邻居对 $(i, i')$ 的 $z_i,z_{i'}$ 更远.局部多维缩放在固定邻居个数 $K$ 以及调整参数 $\tau$ 的情况下,在 $z_i$ 上最小化压力函数 \eqref{14.106}. -图14.44的右图显示了采用$k=2$个邻居和$\tau = 0.01$的局部MDS的结果.我们采用多个起始值的坐标下降来寻找(非凸)损失函数一个好的最小值.沿着曲线的点的顺序大部分都被保持了. +图 14.44 的右图显示了采用 $k=2$ 个邻居和 $\tau = 0.01$ 的局部多维缩放的结果.我们采用多个起始值的 **坐标下降 (coordinate descent)** 来寻找(非凸)损失函数的一个好的最小值.沿着曲线的点的顺序大部分都被保持了. ![](../img/14/fig14.45.png) -图14.45显示了LLE方法的一个有趣的应用.数据包含1965张图象,数字化为$20\times 28$的灰白图象.图中展示了LLE的前两个坐标结果,它们解释了摆放位置以及表情的一些变异.类似的图象可以通过局部MDS得到. +图 14.45 显示了 LLE 方法的一个有趣的应用.数据包含 1965 张图象,数字化为 $20\times 28$ 的灰度图象.图中展示了 LLE 的前两个坐标结果,它们解释了姿态以及表情的一些变异.类似的图象可以通过局部多维缩放得到. + +!!! note "原书脚注: " + Sam Roweis 和 Lawrence Saul 友好地提供了图 14.45. -在Chen and Buja(2008)[^3]报告的实验中,局部MDS与ISOMAP和LLE相比表现得更好.他们也演示了局部MDS在图象布局方面很有用的应用.有些方法与这里讨论的方法有着紧密的联系,如谱聚类(14.5.3节)和核PCA(14.5.4节). 
+在 Chen and Buja(2008)[^3] 报告的实验中,局部多维缩放与 ISOMAP 和 LLE 相比表现得更好.他们也演示了局部多维缩放在图象布局方面很有用的应用.有些方法与这里讨论的方法有着紧密的联系,如谱聚类([14.5.3 节](14.5-Principal-Components-Curves-and-Surfaces/index.html))和核主成分([14.5.4 节](14.5-Principal-Components-Curves-and-Surfaces/index.html)). [^1]: Tenenbaum, J. B., de Silva, V. and Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction, Science 290: 2319–2323. [^2]: Roweis, S. T. and Saul, L. K. (2000). Locally linear embedding, Science 290: 2323–2326. diff --git a/docs/tag.md b/docs/tag.md index 84f42b9c82..40d3f631b9 100644 --- a/docs/tag.md +++ b/docs/tag.md @@ -11,8 +11,8 @@ - alternative maximization procedure: 轮换最大化过程 ([第 8.5 节](08-Model-Inference-and-Averaging/8.5-The-EM-Algorithm/index.html)) - alternate: 轮换 ([第 8.5 节](08-Model-Inference-and-Averaging/8.5-The-EM-Algorithm/index.html)) - automatic flexible: 自动灵活 ([第 9.1 节](09-Additive-Models-Trees-and-Related-Methods/9.1-Generalized-Additive-Models/index.html)) -- abundance: 多度 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) - aggregated data: 聚合数据 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) +- abundance: 多度 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) - additive expansions: 加性展开 ([第 10.2 节](10-Boosting-and-Additive-Trees/10.2-Boosting-Fits-an-Additive-Model/index.html)) - adequacy: 充分性 ([第 14.1 节](14-Unsupervised-Learning/14.1-Introduction/index.html)) - Association rule analysis: 关联规则分析 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) @@ -25,14 +25,14 @@ ## B - bias: 偏差 ([第 2.3 节](02-Overview-of-Supervised-Learning/2.3-Two-Simple-Approaches-to-Prediction/index.html)) -- Bayes classifier: 贝叶斯分类 ([第 2.4 节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) - Bayes rate: 贝叶斯阶 ([第 2.4 节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) +- Bayes classifier: 贝叶斯分类 ([第 2.4 
节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) - basis set: 基础集 ([第 7.6 节](07-Model-Assessment-and-Selection/7.6-The-Effective-Number-of-Parameters/index.html)) - Bayes factor: 贝叶斯因子 ([第 7.7 节](07-Model-Assessment-and-Selection/7.7-The-Bayesian-Approach-and-BIC/index.html)) - BIC: 贝叶斯信息准则 ([第 7.7 节](07-Model-Assessment-and-Selection/7.7-The-Bayesian-Approach-and-BIC/index.html)) - bootstrap: 自助法 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) -- Bootstrap aggregation: 自助法整合 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) - bag: 打包 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) +- Bootstrap aggregation: 自助法整合 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) - Bay Area: 海湾地区 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) - black box: 黑箱方法 ([第 10.7 节](10-Boosting-and-Additive-Trees/10.7-Off-the-Shelf-Procedures-for-Data-Mining/index.html)) - blurred: 模糊的 ([第 12.7 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.7-Mixture-Discriminant-Analysis/index.html)) @@ -42,63 +42,66 @@ - Boltzmann machines: 玻尔兹曼机 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) ## C -- categories: 类别型 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html)) - classification: 分类 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html), [第 13.1 节](13-Prototype-Methods-and-Nearest-Neighbors/13.1-Introduction/index.html)) +- categories: 类别型 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html)) - curse of dimensionality: 维度的诅咒 ([第 2.5 节](02-Overview-of-Supervised-Learning/2.5-Local-Methods-in-High-Dimensions/index.html)) - closed: 闭型 ([第 2.6 节](02-Overview-of-Supervised-Learning/2.6-Statistical-Models-Supervised-Learning-and-Function-Approximation/index.html)) - cubic 
smoothing spline: 三次光滑样条 ([第 2.8 节](02-Overview-of-Supervised-Learning/2.8-Classes-of-Restricted-Estimators/index.html)) - complexity: 复杂性 ([第 2.9 节](02-Overview-of-Supervised-Learning/2.9-Model-Selection-and-the-Bias-Variance-Tradeoff/index.html)) - computer era: 计算机时代 ([第 3.1 节](03-Linear-Methods-for-Regression/3.1-Introduction/index.html)) - closed form: 闭形式 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) +- canonical correlation analysis, CCA: 典则相关分析 ([第 3.7 节](03-Linear-Methods-for-Regression/3.7-Multiple-Outcome-Shrinkage-and-Selection/index.html)) - compressed sensing: 压缩传感 ([第 3.8 节](03-Linear-Methods-for-Regression/3.8-More-on-the-Lasso-and-Related-Path-Algorithms/index.html)) -- canonical: 典则 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) - canonical variables: 典则变量 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) -- cases: 案例集 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) -- controls: 控制集 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) +- canonical: 典则 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) - concave: 凹的 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) +- controls: 控制集 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) +- cases: 案例集 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) - compact kernel: 紧核 ([第 6.1 节](06-Kernel-Smoothing-Methods/6.1-One-Dimensional-Kernel-Smoothers/index.html)) -- consensus vote: 投票共识 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) - consensus vote: 共识投票 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) +- consensus vote: 投票共识 ([第 8.7 节](08-Model-Inference-and-Averaging/8.7-Bagging/index.html)) - collection of rules: 规则集合 ([第 9.3 
节](09-Additive-Models-Trees-and-Related-Methods/9.3-PRIM/index.html)) - computationally intensive: 计算密集型的 ([第 10.2 节](10-Boosting-and-Additive-Trees/10.2-Boosting-Fits-an-Additive-Model/index.html)) - checkerboard: 跳棋盘 ([第 13.3 节](13-Prototype-Methods-and-Nearest-Neighbors/13.3-k-Nearest-Neighbor-Classifiers/index.html)) - condensing: 压缩 ([第 13.5 节](13-Prototype-Methods-and-Nearest-Neighbors/13.5-Computational-Considerations/index.html)) -- conjunctive rule: 联合规则 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) - cardinality: 基数 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) - confidence: 置信度 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) -- codebook: 码本 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) +- conjunctive rule: 联合规则 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) +- Complete linkage, CL: 全链接 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - combinatorial algorithms: 组合算法 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) -- cluster analysis: 聚类分析 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) +- codebook: 码本 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - cophenetic correlation coefficient: 共表型相关系数 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) +- cluster analysis: 聚类分析 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - constrained topological map: 约束拓扑图 ([第 14.4 节](14-Unsupervised-Learning/14.4-Self-Organizing-Maps/index.html)) - corpus callosum: 胼胝体 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html)) - convex hull: 凸包 ([第 14.6 节](14-Unsupervised-Learning/14.6-Non-negative-Matrix-Factorization/index.html)) - correlated: 相关的 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - 
classical scaling: 经典缩放 ([第 14.8 节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html)) +- coordinate descent: 坐标下降 ([第 14.9 节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html)) - clique: 团 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - clique potentials: 团势 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - complete graph: 完全图 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - covariance: 协方差图 ([第 17.3 节](17-Undirected-Graphical-Models/17.3-Undirected-Graphical-Models-for-Continuous-Variables/index.html)) - constant: 常值 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) -- contrastive divergence: 对比发散 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) -- cyclical coordinate descent: 循环坐标下降 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) - CD: 对比发散 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) +- cyclical coordinate descent: 循环坐标下降 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) +- contrastive divergence: 对比发散 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) - cyclical coordinate descent: 坐标轮换 ([第 18.4 节](18-High-Dimensional-Problems/18.4-Linear-Classifiers-with-L1-Regularization/index.html)) ## D - dependent variables: 因变量 ([第 2.1 节](02-Overview-of-Supervised-Learning/2.1-Introduction/index.html)) -- discrete: 离散 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html)) - dummy variables: 虚拟变量 ([第 2.2 
节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html), [第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) +- discrete: 离散 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html)) - dictionary: 字典 ([第 2.8 节](02-Overview-of-Supervised-Learning/2.8-Classes-of-Restricted-Estimators/index.html)) - diamond: 菱形 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) -- discriminant functions: 判别函数 ([第 4.1 节](04-Linear-Methods-for-Classification/4.1-Introduction/index.html)) - decision boundaries: 线性判别边界 ([第 4.1 节](04-Linear-Methods-for-Classification/4.1-Introduction/index.html)) +- discriminant functions: 判别函数 ([第 4.1 节](04-Linear-Methods-for-Classification/4.1-Introduction/index.html)) - discriminant coordinates: 判别坐标 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) -- discriminant: 判别 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) - discriminant variable: 判别变量 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) +- discriminant: 判别 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) - driven: 驱动 ([第 5.8 节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html)) -- dictionaries of basis functions: 基函数的字典集 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html)) - dilation: 伸缩 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html)) +- dictionaries of basis functions: 基函数的字典集 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html)) - divided differences: 差商 ([第 Appendix 节](05-Basis-Expansions-and-Regularization/Appendix-Computations-for-B-splines/index.html)) - deviance: 偏差 ([第 7.2 
节](07-Model-Assessment-and-Selection/7.2-Bias-Variance-and-Model-Complexity/index.html), [第 10.5 节](10-Boosting-and-Additive-Trees/10.5-Why-Exponential-Loss/index.html)) - demographics: 人口统计数据 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) @@ -106,10 +109,10 @@ - derived features: 导出特征 ([第 11.1 节](11-Neural-Networks/11.1-Introduction/index.html)) - digitized analog signals: 数字化模拟信号 ([第 12.6 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.6-Penalized-Discriminant-Analysis/index.html)) - DANN: 判别自适应最近邻 ([第 13.4 节](13-Prototype-Methods-and-Nearest-Neighbors/13.4-Adaptive-Nearest-Neighbor-Methods/index.html)) -- dissimilarities: 不相似性 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) +- dendrogram: 谱系图 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - decoding: 解码 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) +- dissimilarities: 不相似性 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - data segmentation: 数据分离 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) -- dendrogram: 谱系图 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - degree: 度 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html)) - differential entropy: 相对熵 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - dissimilarity: 不相似性 ([第 14.8 节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html)) @@ -125,8 +128,8 @@ - effective: 有效 ([第 2.3 节](02-Overview-of-Supervised-Learning/2.3-Two-Simple-Approaches-to-Prediction/index.html)) - equivalent kernel: 等价核 ([第 2.7 节](02-Overview-of-Supervised-Learning/2.7-Structured-Regression-Models/index.html), [第 6.1 节](06-Kernel-Smoothing-Methods/6.1-One-Dimensional-Kernel-Smoothers/index.html)) - effect size: 有效大小 ([第 3.2 
节](03-Linear-Methods-for-Regression/3.2-Linear-Regression-Models-and-Least-Squares/index.html)) -- effective degrees of freedom: 有效自由度 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html), [第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - eigen decomposition: 特征值分解 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) +- effective degrees of freedom: 有效自由度 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html), [第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - equivalent kernels: 等价核 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - EPE: 积分平方预测误差 ([第 5.5 节](05-Basis-Expansions-and-Regularization/5.5-Automatic-Selection-of-the-Smoothing-Parameters/index.html)) - evaluation: 赋值 ([第 6.0 节](06-Kernel-Smoothing-Methods/6.0-Introduction/index.html)) @@ -138,8 +141,8 @@ - experts: 专家 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) - early stopping: 早停 ([第 10.12 节](10-Boosting-and-Additive-Trees/10.12-Regularization/index.html)) - empirical risk: 经验风险 ([第 10.9 节](10-Boosting-and-Additive-Trees/10.9-Boosting-Trees/index.html)) -- editing: 编辑 ([第 13.5 节](13-Prototype-Methods-and-Nearest-Neighbors/13.5-Computational-Considerations/index.html)) - exterior point: 外点 ([第 13.5 节](13-Prototype-Methods-and-Nearest-Neighbors/13.5-Computational-Considerations/index.html)) +- editing: 编辑 ([第 13.5 节](13-Prototype-Methods-and-Nearest-Neighbors/13.5-Computational-Considerations/index.html)) - effectiveness: 有效性 ([第 14.1 节](14-Unsupervised-Learning/14.1-Introduction/index.html)) - encoding: 编码 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - exploratory projection pursuit: 探索投影寻踪 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) @@ -159,8 +162,8 @@ - forward 
stagewise: 向前逐步 ([第 10.9 节](10-Boosting-and-Additive-Trees/10.9-Boosting-Trees/index.html)) - flexible discriminant analysis: 可变的判别分析 ([第 12.1 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.1-Introduction/index.html)) - feature vector: 特征向量 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) -- factor loadings: 因子载荷 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - Factor Analysis: 因子分析 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) +- factor loadings: 因子载荷 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - feature selection: 特征选择 ([第 18.1 节](18-High-Dimensional-Problems/18.1-When-p-is-Much-Bigger-than-N/index.html)) ## G @@ -171,10 +174,11 @@ - generalized linear models: 广义线性模型 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) - generalization error: 泛化误差 ([第 7.2 节](07-Model-Assessment-and-Selection/7.2-Bias-Variance-and-Model-Complexity/index.html)) - gating networks: 门控网络 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) +- Group Average, GA: 群平均 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - generalized additive spline model: 广义可加样条模型 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - Graph: 图 ([第 17.1 节](17-Undirected-Graphical-Models/17.1-Introduction/index.html)) -- global Markov properties: 全局马尔科夫性质 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - global Markov properties: 整体马尔科夫性质 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) +- global Markov properties: 全局马尔科夫性质 ([第 17.2 
节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - Gradient descent: 梯度下降 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) - gene expression arrays: 基因表达阵列 ([第 18.2 节](18-High-Dimensional-Problems/18.2-Diagonal-Linear-Discriminant-Analysis-and-Nearest-Shrunken-Centroids/index.html)) @@ -186,9 +190,9 @@ - having no say: 不起作用 ([第 6.7 节](06-Kernel-Smoothing-Methods/6.7-Radial-Basis-Functions-and-Kernels/index.html)) - holes: 洞 ([第 6.7 节](06-Kernel-Smoothing-Methods/6.7-Radial-Basis-Functions-and-Kernels/index.html)) - HME: 混合层次专家 ([第 9.0 节](09-Additive-Models-Trees-and-Related-Methods/9.0-Introduction/index.html)) +- hard decision: 硬决定 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) - hierarchical mixtures of experts: 专家的分层混合 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) - HME: 专家的分层混合 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) -- hard decision: 硬决定 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) - highly skewed: 高偏的 ([第 10.7 节](10-Boosting-and-Additive-Trees/10.7-Off-the-Shelf-Procedures-for-Data-Mining/index.html)) - hidden units: 隐藏层 ([第 11.3 节](11-Neural-Networks/11.3-Neural-Networks/index.html)) - hierarchical basis: 分层基 ([第 12.3 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.3-Support-Vector-Machines-and-Kernels/index.html)) @@ -217,23 +221,24 @@ - invariance manifolds: 不变流形 ([第 13.3 节](13-Prototype-Methods-and-Nearest-Neighbors/13.3-k-Nearest-Neighbor-Classifiers/index.html)) - invariant metric: 不变度量 ([第 13.3 节](13-Prototype-Methods-and-Nearest-Neighbors/13.3-k-Nearest-Neighbor-Classifiers/index.html)) - items set: 项目集 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) -- ICA: 独立成分分析 ([第 14.7 
节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - Independent Component Analysis: 独立成分分析 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) +- ICA: 独立成分分析 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - isotonic regression: 保序回归 ([第 14.8 节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html)) -- ISOMAP: 等距特征映射算法 ([第 14.9 节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html)) +- Isometric feature mapping, ISOMAP: 等距特征映射算法 ([第 14.9 节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html)) - importance sampling: 重要性采样 ([第 16.3 节](16-Ensemble-Learning/16.3-Learning-Ensembles/index.html)) - Importance Sampling: 重要度采样 ([第 16.3 节](16-Ensemble-Learning/16.3-Learning-Ensembles/index.html)) - inference: 推断 ([第 17.1 节](17-Undirected-Graphical-Models/17.1-Introduction/index.html)) - iterative proportional fitting procedure: 迭代比例拟合过程 ([第 17.3 节](17-Undirected-Graphical-Models/17.3-Undirected-Graphical-Models-for-Continuous-Variables/index.html)) - iteratively reweighted least squares: 迭代重赋权最小二乘法 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) +- Iterative proportional fitting, IPF: 迭代比例拟合 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) - independent rule: 独立规则 ([第 18.2 节](18-High-Dimensional-Problems/18.2-Diagonal-Linear-Discriminant-Analysis-and-Nearest-Shrunken-Centroids/index.html)) ## J - joint maximization: 联合最大化 ([第 8.5 节](08-Model-Inference-and-Averaging/8.5-The-EM-Algorithm/index.html)) ## K -- kernel function: 核函数 ([第 2.8 节](02-Overview-of-Supervised-Learning/2.8-Classes-of-Restricted-Estimators/index.html)) - knots: 
结点 ([第 2.8 节](02-Overview-of-Supervised-Learning/2.8-Classes-of-Restricted-Estimators/index.html)) +- kernel function: 核函数 ([第 2.8 节](02-Overview-of-Supervised-Learning/2.8-Classes-of-Restricted-Estimators/index.html)) - kernel property: 核性质 ([第 5.8 节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html)) - kernel reproducing property: 核再生性质 ([第 5.8 节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html)) - kernel: 核 ([第 6.0 节](06-Kernel-Smoothing-Methods/6.0-Introduction/index.html)) @@ -242,21 +247,21 @@ ## L - large: 大 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html)) - loss function: 损失函数 ([第 2.4 节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) -- learning by example: 样本学习 ([第 2.6 节](02-Overview-of-Supervised-Learning/2.6-Statistical-Models-Supervised-Learning-and-Function-Approximation/index.html)) - linear basis expansions: 线性基展开 ([第 2.6 节](02-Overview-of-Supervised-Learning/2.6-Statistical-Models-Supervised-Learning-and-Function-Approximation/index.html)) +- learning by example: 样本学习 ([第 2.6 节](02-Overview-of-Supervised-Learning/2.6-Statistical-Models-Supervised-Learning-and-Function-Approximation/index.html)) - learning problem: 学习问题 ([第 2 章文献笔记](02-Overview-of-Supervised-Learning/Bibliographic-Notes/index.html)) - least squares: 最小二乘 ([第 3.2 节](03-Linear-Methods-for-Regression/3.2-Linear-Regression-Models-and-Least-Squares/index.html)) -- LAR: 最小角回归 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) - Lagrangian form: 拉格朗日形式 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) +- LAR: 最小角回归 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) - LDA: 线性判别分析 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) - linear basis expansion: 线性基展开式 ([第 5.1 
节](05-Basis-Expansions-and-Regularization/5.1-Introduction/index.html)) - linear smoother: 线性光滑 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - localized in time and in frequency: 在时间和在频率上局部化 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html)) - low scale: 尺度最低 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html)) -- lag set: 滞后集 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) -- local linear logistic model: 局部线性逻辑斯蒂回归模型 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) - logistic: 逻辑斯蒂回归 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) - local regression: 局部回归 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) +- lag set: 滞后集 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) +- local linear logistic model: 局部线性逻辑斯蒂回归模型 ([第 6.5 节](06-Kernel-Smoothing-Methods/6.5-Local-Likelihood-and-Other-Models/index.html)) - light fitting: 轻拟合 ([第 7.11 节](07-Model-Assessment-and-Selection/7.11-Bootstrap-Methods/index.html)) - LOO: 舍一法 ([第 7.12 节](07-Model-Assessment-and-Selection/7.12-Conditional-or-Expected-Test-Error/index.html)) - latent class model: 潜类别模型 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) @@ -266,7 +271,8 @@ - left singular vectors: 左奇异向量 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html)) - loadings: 因子载荷 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - least squares scaling: 最小二乘缩放 ([第 14.8 节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html)) -- LLE: 局部线性内嵌 ([第 14.9 
节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html)) +- Local MDS: 局部多维缩放 ([第 14.9 节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html)) +- Local linear embedding, LLE: 局部线性内嵌 ([第 14.9 节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html)) - least angle regression: 最小角回归 ([第 16.2 节](16-Ensemble-Learning/16.2-Boosting-and-Regularization-Paths/index.html)) - learning: 学习 ([第 17.1 节](17-Undirected-Graphical-Models/17.1-Introduction/index.html)) - less fitting is better: 欠拟合更好 ([第 18.1 节](18-High-Dimensional-Problems/18.1-When-p-is-Much-Bigger-than-N/index.html)) @@ -278,8 +284,8 @@ - model complexity: 模型复杂度 ([第 2.9 节](02-Overview-of-Supervised-Learning/2.9-Model-Selection-and-the-Bias-Variance-Tradeoff/index.html)) - mean squared error: 均方误差 ([第 2.9 节](02-Overview-of-Supervised-Learning/2.9-Model-Selection-and-the-Bias-Variance-Tradeoff/index.html)) - model selection: 模型选择 ([第 3.3 节](03-Linear-Methods-for-Regression/3.3-Subset-Selection/index.html)) -- marginal likelihood: 边缘似然 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) - multinomial: 多项式分布 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) +- marginal likelihood: 边缘似然 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) - margin: 边缘 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) - margin: 空白 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) - moving beyond linearity: 超线性 ([第 5.1 节](05-Basis-Expansions-and-Regularization/5.1-Introduction/index.html)) @@ -290,8 +296,8 @@ - Markov chain Monte Carlo: 马尔科夫蒙特卡洛法 ([第 8.6 节](08-Model-Inference-and-Averaging/8.6-MCMC-for-Sampling-from-the-Posterior/index.html)) - mode: 最大值 ([第 8.8 
节](08-Model-Inference-and-Averaging/8.8-Model-Averaging-and-Stacking/index.html)) - MARS: 多元自适应回归样条 ([第 9.0 节](09-Additive-Models-Trees-and-Related-Methods/9.0-Introduction/index.html)) -- MAR: 随机缺失 ([第 9.6 节](09-Additive-Models-Trees-and-Related-Methods/9.6-Missing-Data/index.html)) - MCAR: 完全随机缺失 ([第 9.6 节](09-Additive-Models-Trees-and-Related-Methods/9.6-Missing-Data/index.html)) +- MAR: 随机缺失 ([第 9.6 节](09-Additive-Models-Trees-and-Related-Methods/9.6-Missing-Data/index.html)) - multiple additive regression trees: 多重可加回归树 ([第 10.10 节](10-Boosting-and-Additive-Trees/10.10-Numerical-Optimization-via-Gradient-Boosting/index.html)) - marginal average: 边缘平均 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) - median house values: 房子价值的中位数 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) @@ -303,17 +309,18 @@ - mixing proportions: 混合比例 ([第 12.7 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.7-Mixture-Discriminant-Analysis/index.html)) - manifold: 流形 ([第 14.1 节](14-Unsupervised-Learning/14.1-Introduction/index.html)) - mode finding: 模式寻找 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) -- mixture modeling: 混合模型 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - melanomas: 黑素瘤 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - mode seeking: 模式寻找 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) +- mixture modeling: 混合模型 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - missing values: 缺失值 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - mutual information: 互信息量 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html)) - metric: 度量 ([第 14.8 节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html)) +- MDS, Multidimensional scaling: 多维缩放 ([第 14.8 
节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html))
- Markov random fields: 马尔科夫随机域 ([第 17.1 节](17-Undirected-Graphical-Models/17.1-Introduction/index.html))
- Markov networks: 马尔科夫网络 ([第 17.1 节](17-Undirected-Graphical-Models/17.1-Introduction/index.html))
- multiple testing: 多重检验 ([第 18.1 节](18-High-Dimensional-Problems/18.1-When-p-is-Much-Bigger-than-N/index.html), [第 18.7 节](18-High-Dimensional-Problems/18.7-Feature-Assessment-and-the-Multiple-Testing-Problem/index.html))
-- maximal margin classifier: 最大边界分类器 ([第 18.3 节](18-High-Dimensional-Problems/18.3-Linear-Classifiers-with-Quadratic-Regularization/index.html))
- margin tree: 边际树 ([第 18.3 节](18-High-Dimensional-Problems/18.3-Linear-Classifiers-with-Quadratic-Regularization/index.html))
+- maximal margin classifier: 最大边界分类器 ([第 18.3 节](18-High-Dimensional-Problems/18.3-Linear-Classifiers-with-Quadratic-Regularization/index.html))
- maximal margin solution: 最大边缘解 ([第 18.5 节](18-High-Dimensional-Problems/18.5-Classification-When-Features-are-Unavailable/index.html))

## N
@@ -329,10 +336,11 @@
- nonlinear approximating manifolds: 非线性逼近流形 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html))
- negentropy: 负熵 ([第 14.7 节](14-Unsupervised-Learning/14.7-Independent-Component-Analysis-and-Exploratory-Projection-Pursuit/index.html))
- numerical quadrature: 数值积分 ([第 16.3 节](16-Ensemble-Learning/16.3-Learning-Ensembles/index.html))
-- nodes: 结点 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html))
- Newton updates: 牛顿更新 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html))
+- nodes: 结点 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html))
- nearest centroid classifer: 最近重心分类器 ([第 18.2 
节](18-High-Dimensional-Problems/18.2-Diagonal-Linear-Discriminant-Analysis-and-Nearest-Shrunken-Centroids/index.html)) - nearest centroid classifier: 最近重心分类器 ([第 18.2 节](18-High-Dimensional-Problems/18.2-Diagonal-Linear-Discriminant-Analysis-and-Nearest-Shrunken-Centroids/index.html)) +- nearest shrunken centroids, NSC: 最近收缩重心 ([第 18.2 节](18-High-Dimensional-Problems/18.2-Diagonal-Linear-Discriminant-Analysis-and-Nearest-Shrunken-Centroids/index.html)) ## O - outputs: 输出变量 ([第 2.1 节](02-Overview-of-Supervised-Learning/2.1-Introduction/index.html)) @@ -355,8 +363,8 @@ - perceptron: 感知器 ([第 4.1 节](04-Linear-Methods-for-Classification/4.1-Introduction/index.html)) - pair of classes: 类别对 ([第 4.3 节](04-Linear-Methods-for-Classification/4.3-Linear-Discriminant-Analysis/index.html)) - parsimonious: 最简洁 ([第 4.4 节](04-Linear-Methods-for-Classification/4.4-Logistic-Regression/index.html)) -- perceptron learning algorithm: 感知器学习算法 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) - perceptrons: 感知器 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) +- perceptron learning algorithm: 感知器学习算法 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) - pairwise variance: 逐点方差 ([第 5.2 节](05-Basis-Expansions-and-Regularization/5.2-Piecewise-Polynomials-and-Splines/index.html)) - penalty matrix: 惩罚矩阵 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - projection: 投影 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) @@ -370,9 +378,9 @@ - peeled: 剔除 ([第 9.3 节](09-Additive-Models-Trees-and-Related-Methods/9.3-PRIM/index.html)) - pseudo residuals: 伪残差 ([第 10.10 节](10-Boosting-and-Additive-Trees/10.10-Numerical-Optimization-via-Gradient-Boosting/index.html)) - primitive: 原型 ([第 10.11 节](10-Boosting-and-Additive-Trees/10.11-Right-Sized-Trees-for-Boosting/index.html)) -- partial dependence: 偏相依性 ([第 10.13 
节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) -- purely additive: 纯可加的 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) - partial: 偏 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) +- purely additive: 纯可加的 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) +- partial dependence: 偏相依性 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) - purely multiplicative: 纯可乘的 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) - Pacific coast: 太平洋沿岸 ([第 10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) - predictive learning: 预测学习 ([第 10.7 节](10-Boosting-and-Additive-Trees/10.7-Off-the-Shelf-Procedures-for-Data-Mining/index.html)) @@ -388,16 +396,16 @@ - principal curves and surfaces: 主曲线和主曲面 ([第 14.4 节](14-Unsupervised-Learning/14.4-Self-Organizing-Maps/index.html)) - Principal points: 主点 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html)) - projection matrix: 投影矩阵 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html)) -- Percentage Squared Prediction Explained: 解释的方差的百分比 ([第 16.2 节](16-Ensemble-Learning/16.2-Boosting-and-Regularization-Paths/index.html)) - Percentage Misclassification Error Explained: 解释的误分类率的百分比 ([第 16.2 节](16-Ensemble-Learning/16.2-Boosting-and-Regularization-Paths/index.html)) +- Percentage Squared Prediction Explained: 解释的方差的百分比 ([第 16.2 节](16-Ensemble-Learning/16.2-Boosting-and-Regularization-Paths/index.html)) - potential: 势 ([第 17.1 节](17-Undirected-Graphical-Models/17.1-Introduction/index.html)) -- pairwise Markov independencies: 逐对马尔科夫独立 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - partition: 分割 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) -- potential function: 势函数 ([第 17.2 
节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - partition function: 分割函数 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html), [第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html)) - path: 路径 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) -- pairwise Markov graphs: 成对马尔科夫图 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - pairwise Markov properties: 逐对马尔科夫性质 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) +- potential function: 势函数 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) +- pairwise Markov graphs: 成对马尔科夫图 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) +- pairwise Markov independencies: 逐对马尔科夫独立 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) - partial covariances: 偏协方差 ([第 17.3 节](17-Undirected-Graphical-Models/17.3-Undirected-Graphical-Models-for-Continuous-Variables/index.html)) - prior probability: 先验概率 ([第 18.2 节](18-High-Dimensional-Problems/18.2-Diagonal-Linear-Discriminant-Analysis-and-Nearest-Shrunken-Centroids/index.html)) - posterior mode: 后验的众数 ([第 18.4 节](18-High-Dimensional-Problems/18.4-Linear-Classifiers-with-L1-Regularization/index.html)) @@ -426,8 +434,8 @@ - regression splines: 回归样条 ([第 5.2 节](05-Basis-Expansions-and-Regularization/5.2-Piecewise-Polynomials-and-Splines/index.html)) - radial basis functions: 径向基函数 ([第 5.7 节](05-Basis-Expansions-and-Regularization/5.7-Multidimensional-Splines/index.html)) - reproducing kernel Hilbert space: 再生核希尔伯特空间 ([第 5.8 节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html)) -- RKHS: 再生核希尔伯特空间 ([第 5.8 
节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html))
- reproducing kernel Hilbert spaces: 再生核希尔伯特空间 ([第 5.8 节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html), [第 12.3 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.3-Support-Vector-Machines-and-Kernels/index.html))
+- RKHS: 再生核希尔伯特空间 ([第 5.8 节](05-Basis-Expansions-and-Regularization/5.8-Regularization-and-Reproducing-Kernel-Hibert-Spaces/index.html))
- reference: 参考 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html))
- relative overfitting rate: 相对过拟合率 ([第 7.11 节](07-Model-Assessment-and-Selection/7.11-Bootstrap-Methods/index.html))
- responsibilities: 责任 ([第 8.6 节](08-Model-Inference-and-Averaging/8.6-MCMC-for-Sampling-from-the-Posterior/index.html))
@@ -436,9 +444,9 @@
- ridge function: 岭函数 ([第 11.2 节](11-Neural-Networks/11.2-Projection-Pursuit-Regression/index.html))
- radial basis function network: 径向基函数网络 ([第 11.3 节](11-Neural-Networks/11.3-Neural-Networks/index.html))
- regions: 区域 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html))
-- rotation: 旋转 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html))
- reconstruction error: 重构误差 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html))
- right singular vectors: 右奇异向量 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html))
+- rotation: 旋转 ([第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html))
- relevance network: 相关网络 ([第 17.3 节](17-Undirected-Graphical-Models/17.3-Undirected-Graphical-Models-for-Continuous-Variables/index.html))
- RBM: 限制玻尔兹曼机 ([第 17.4 节](17-Undirected-Graphical-Models/17.4-Undirected-Graphical-Models-for-Discrete-Variables/index.html))
- RDA: 正则化判别分析 ([第 18.3 
节](18-High-Dimensional-Problems/18.3-Linear-Classifiers-with-Quadratic-Regularization/index.html)) @@ -449,44 +457,47 @@ ## S - supervised learning: 监督学习 ([第 2.1 节](02-Overview-of-Supervised-Learning/2.1-Introduction/index.html), [第 14.1 节](14-Unsupervised-Learning/14.1-Introduction/index.html)) - small: 小 ([第 2.2 节](02-Overview-of-Supervised-Learning/2.2-Variable-Types-and-Terminology/index.html)) -- squared error loss: 平方误差损失 ([第 2.4 节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) - simultaneously: 同时 ([第 2.4 节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) +- squared error loss: 平方误差损失 ([第 2.4 节](02-Overview-of-Supervised-Learning/2.4-Statistical-Decision-Theory/index.html)) - smoothing: 光滑化 ([第 2.8 节](02-Overview-of-Supervised-Learning/2.8-Classes-of-Restricted-Estimators/index.html)) - smoothing: 光滑 ([第 2.9 节](02-Overview-of-Supervised-Learning/2.9-Model-Selection-and-the-Bias-Variance-Tradeoff/index.html)) -- stationary condition: 平稳条件 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) -- subset selection: 子集选择 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) - SVD: 奇异值分解 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) - shrinkage methods: 收缩方法 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) +- subset selection: 子集选择 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) +- stationary condition: 平稳条件 ([第 3.4 节](03-Linear-Methods-for-Regression/3.4-Shrinkage-Methods/index.html)) - scale invariant: 尺度不变 ([第 3.5 节](03-Linear-Methods-for-Regression/3.5-Methods-Using-Derived-Input-Directions/index.html)) - simple coordinate descent: 简单坐标下降 ([第 3.8 节](03-Linear-Methods-for-Regression/3.8-More-on-the-Lasso-and-Related-Path-Algorithms/index.html)) +- smoothly clipped absolute deviation, SCAD: 平稳削减绝对偏差法 ([第 3.8 
节](03-Linear-Methods-for-Regression/3.8-More-on-the-Lasso-and-Related-Path-Algorithms/index.html)) +- stochastic gradient descent: 随机梯度下降 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) - separating hyperplane classifiers: 分离超平面分类器 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) -- support vector machine: 支持向量机 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html), [第 12.1 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.1-Introduction/index.html), [第 12.3 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.3-Support-Vector-Machines-and-Kernels/index.html)) - slab: 平板 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) -- stochastic gradient descent: 随机梯度下降 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) - support points: 支撑点 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html)) +- support vector machine: 支持向量机 ([第 4.5 节](04-Linear-Methods-for-Classification/4.5-Separating-Hyperplanes/index.html), [第 12.1 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.1-Introduction/index.html), [第 12.3 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.3-Support-Vector-Machines-and-Kernels/index.html)) - splines: 样条 ([第 5.1 节](05-Basis-Expansions-and-Regularization/5.1-Introduction/index.html)) +- smoothing parameter: 光滑参数 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - smoother matrix: 光滑矩阵 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - shrinking: 收缩 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) -- smoothing parameter: 光滑参数 ([第 5.4 节](05-Basis-Expansions-and-Regularization/5.4-Smoothing-Splines/index.html)) - spurious: 假的 ([第 5.7 
节](05-Basis-Expansions-and-Regularization/5.7-Multidimensional-Splines/index.html)) - sparse: 稀疏 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html), [第 16.2 节](16-Ensemble-Learning/16.2-Boosting-and-Regularization-Paths/index.html)) - span: 跨度 ([第 6.2 节](06-Kernel-Smoothing-Methods/6.2-Selecting-the-Width-of-the-Kernel/index.html)) -- sample standard deviation: 样本标准偏差 ([第 7.10 节](07-Model-Assessment-and-Selection/7.10-Cross-Validation/index.html)) - standard error: 标准误差 ([第 7.10 节](07-Model-Assessment-and-Selection/7.10-Cross-Validation/index.html)) +- sample standard deviation: 样本标准偏差 ([第 7.10 节](07-Model-Assessment-and-Selection/7.10-Cross-Validation/index.html)) +- SRM, structural risk minimization: 结构风险最小化 ([第 7.9 节](07-Model-Assessment-and-Selection/7.9-Vapnik-Chervonenkis-Dimension/index.html)) - score function: 得分函数 ([第 8.2 节](08-Model-Inference-and-Averaging/8.2-The-Bootstrap-and-Maximum-Likelihood-Methods/index.html)) - Stacked generalization: 堆栈泛化 ([第 8.8 节](08-Model-Inference-and-Averaging/8.8-Model-Averaging-and-Stacking/index.html)) - soft splits: 软分割 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) - soft probabilistic: 软概率的决定 ([第 9.5 节](09-Additive-Models-Trees-and-Related-Methods/9.5-Hierarchical-Mixtures-of-Experts/index.html)) - surrogate splits: 代理分割 ([第 9.6 节](09-Additive-Models-Trees-and-Related-Methods/9.6-Missing-Data/index.html)) -- steepest descent: 最速下降 ([第 10.10 节](10-Boosting-and-Additive-Trees/10.10-Numerical-Optimization-via-Gradient-Boosting/index.html)) - step length: 步长 ([第 10.10 节](10-Boosting-and-Additive-Trees/10.10-Numerical-Optimization-via-Gradient-Boosting/index.html)) +- steepest descent: 最速下降 ([第 10.10 节](10-Boosting-and-Additive-Trees/10.10-Numerical-Optimization-via-Gradient-Boosting/index.html)) - squared relative importance: 平方相对重要度 ([第 10.13 节](10-Boosting-and-Additive-Trees/10.13-Interpretation/index.html)) - salinity: 盐度 ([第 
10.14 节](10-Boosting-and-Additive-Trees/10.14-Illustrations/index.html)) - scaled classification tree: 缩放分类树 ([第 10.9 节](10-Boosting-and-Additive-Trees/10.9-Boosting-Trees/index.html)) - single index model: 单指标模型 ([第 11.2 节](11-Neural-Networks/11.2-Projection-Pursuit-Regression/index.html)) - stochastic approximation: 随机近似 ([第 11.4 节](11-Neural-Networks/11.4-Fitting-Neural-Networks/index.html)) -- statistics folklore: 统计民俗学 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) - support: 支撑集 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) +- statistics folklore: 统计民俗学 ([第 14.2 节](14-Unsupervised-Learning/14.2-Association-Rules/index.html)) +- Single linkage, SL: 单链接 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - similarities: 相似性 ([第 14.3 节](14-Unsupervised-Learning/14.3-Cluster-Analysis/index.html)) - stress function: 压力函数 ([第 14.8 节](14-Unsupervised-Learning/14.8-Multidimensional-Scaling/index.html)) - separate: 分离 ([第 17.2 节](17-Undirected-Graphical-Models/17.2-Markov-Graphs-and-Their-Properties/index.html)) @@ -507,10 +518,10 @@ - tensor product basis: 张量积基底 ([第 5.7 节](05-Basis-Expansions-and-Regularization/5.7-Multidimensional-Splines/index.html)) - translation: 平移 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html), [第 14.5 节](14-Unsupervised-Learning/14.5-Principal-Components-Curves-and-Surfaces/index.html)) - time and frequency localization: 时间和频率的局部化 ([第 5.9 节](05-Basis-Expansions-and-Regularization/5.9-Wavelet-Smoothing/index.html)) -- tie: 结 ([第 6.1 节](06-Kernel-Smoothing-Methods/6.1-One-Dimensional-Kernel-Smoothers/index.html)) - trimming the hills: 截断山坡 ([第 6.1 节](06-Kernel-Smoothing-Methods/6.1-One-Dimensional-Kernel-Smoothers/index.html)) -- Training error: 训练误差 ([第 7.2 节](07-Model-Assessment-and-Selection/7.2-Bias-Variance-and-Model-Complexity/index.html)) +- tie: 结 ([第 6.1 
节](06-Kernel-Smoothing-Methods/6.1-One-Dimensional-Kernel-Smoothers/index.html))
- tuning parameter: 调整参数 ([第 7.2 节](07-Model-Assessment-and-Selection/7.2-Bias-Variance-and-Model-Complexity/index.html))
+- Training error: 训练误差 ([第 7.2 节](07-Model-Assessment-and-Selection/7.2-Bias-Variance-and-Model-Complexity/index.html))
- test error: 测试误差 ([第 7.2 节](07-Model-Assessment-and-Selection/7.2-Bias-Variance-and-Model-Complexity/index.html))
- trees: 树 ([第 9.0 节](09-Additive-Models-Trees-and-Related-Methods/9.0-Introduction/index.html))
- target function: 目标函数 ([第 10.11 节](10-Boosting-and-Additive-Trees/10.11-Right-Sized-Trees-for-Boosting/index.html))
@@ -518,6 +529,7 @@
- the structured space of functions: 函数的结构空间 ([第 12.3 节](12-Support-Vector-Machines-and-Flexible-Discriminants/12.3-Support-Vector-Machines-and-Kernels/index.html))
- tangent distance: 切线距离 ([第 13.3 节](13-Prototype-Methods-and-Nearest-Neighbors/13.3-k-Nearest-Neighbor-Classifiers/index.html))
- tangent distance: 切向距离 ([第 13.5 节](13-Prototype-Methods-and-Nearest-Neighbors/13.5-Computational-Considerations/index.html))
+- trailing eigenvectors: 尾特征向量 ([第 14.9 节](14-Unsupervised-Learning/14.9-Nonlinear-Dimension-Reduction-and-Local-Multidimensional-Scaling/index.html))
- time of flight: 飞行时间 ([第 18.4 节](18-High-Dimensional-Problems/18.4-Linear-Classifiers-with-L1-Regularization/index.html))

## U
diff --git a/gentag.py b/gentag.py
index bbff0a0fc8..959e31f9f5 100644
--- a/gentag.py
+++ b/gentag.py
@@ -3,7 +3,7 @@ import glob
 # ([\u4e00-\u9fa5]+): chinese translation
 # (\b[a-zA-Z ]+\b): original
-pat = re.compile(r"\*\*([\u4e00-\u9fa5]+)\s?\((\b[a-zA-Z ]+\b)\)\*\*")
+pat = re.compile(r"\*\*([\u4e00-\u9fa5]+)\s?\((\b[a-zA-Z ,]+\b)\)\*\*")
 tags = [[] for i in range(26)]
 docsdir = os.listdir("docs/")
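The `gentag.py` hunk above widens the character class for the English term from `[a-zA-Z ]` to `[a-zA-Z ,]`, so that glossary entries whose original term carries a comma-separated abbreviation — such as `**等距特征映射算法 (Isometric feature mapping, ISOMAP)**` in 14.9 — are no longer skipped when the tag list is generated. A quick sanity check of the old pattern against the new one (the sample line follows the `**中文 (english)**` format the script scans for):

```python
import re

# New pattern from gentag.py: the comma in the class lets "name, ABBR" terms match.
new_pat = re.compile(r"\*\*([\u4e00-\u9fa5]+)\s?\((\b[a-zA-Z ,]+\b)\)\*\*")
# Old pattern, kept here only for comparison: no comma allowed in the English part.
old_pat = re.compile(r"\*\*([\u4e00-\u9fa5]+)\s?\((\b[a-zA-Z ]+\b)\)\*\*")

line = "**等距特征映射算法 (Isometric feature mapping, ISOMAP)** (Tenenbaum et al. 2000)"

assert old_pat.search(line) is None  # the comma made the old pattern skip this entry
zh, en = new_pat.search(line).groups()
print(zh, "->", en)  # 等距特征映射算法 -> Isometric feature mapping, ISOMAP
```

This is presumably why long-form entries like `Local linear embedding, LLE` only appear in the regenerated tag list after this change: `re.search` has no way to cross the comma with the old class, so the whole entry silently failed to match.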