Machine Learning Algorithm Series (19): Adaptive Boosting Algorithm (AdaBoost), Part 1

Background needed for this article: ensemble learning, the method of Lagrange multipliers, and a little programming.

1. Introduction

In the previous article we studied the Random Forest algorithm and introduced one ensemble-learning method, the Bagging algorithm. In this article we study another ensemble-learning method, the Boosting algorithm, and its most common representative, the Adaptive Boosting algorithm (AdaBoost).

2. Model Overview

Boosting

Boosting is also an ensemble-learning method. Unlike Bagging, each round of training pays more attention to the samples that the previous estimator misclassified (or predicted with a large regression error): after every round the sample weights are adjusted according to the previous round's results, and training continues until the final output error falls below a preset threshold.

Figure 2-1 (workflow of the Boosting algorithm)

Figure 2-1 shows the workflow of the Boosting algorithm. It differs from Bagging in two ways. First, in Bagging the estimators are relatively independent of each other and share the same weight, whereas in Boosting each estimator depends on the previous one and the estimators carry different weights. Second, Bagging generally reduces variance, while Boosting reduces bias.

The most representative Boosting algorithm is the Adaptive Boosting algorithm (AdaBoost).

AdaBoost

The AdaBoost algorithm was proposed by Yoav Freund and Robert E. Schapire in 1995, together with AdaBoost.M1 and AdaBoost.M2 for multi-class classification and AdaBoost.R for regression. Variants of these algorithms, such as AdaBoost-SAMME, AdaBoost-SAMME.R, and AdaBoost.R2, were proposed later.

AdaBoost follows the same basic procedure as Boosting and is a concrete realization of it: it specifies how the sample weights are updated in each round and how the individual estimators are finally combined.

Limited by the author's ability, this article covers only the basic AdaBoost algorithm and the AdaBoost-SAMME, AdaBoost-SAMME.R, and AdaBoost.R2 algorithms currently implemented in scikit-learn; for the other variants, interested readers can consult the original papers listed at the end of the article.
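Before the step-by-step descriptions, here is a minimal usage sketch of the scikit-learn implementation mentioned above; the synthetic dataset and hyperparameter values are illustrative assumptions, not from the original text.

```python
# A minimal usage sketch of scikit-learn's AdaBoost classifier; the synthetic
# dataset and hyperparameter values are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Decision stumps are the default weak estimator; which boosting variant is
# used (SAMME vs. SAMME.R) depends on the scikit-learn version's default.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

`AdaBoostRegressor` is the corresponding regressor; its `loss` parameter (`"linear"`, `"square"`, `"exponential"`) matches the three AdaBoost.R2 loss functions described later.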

3. Algorithm Steps

The steps of each algorithm are listed below; the formulas they use are derived one by one in the next section.

Binary classification

Suppose the training set is T = {(X_i, y_i)}, i = 1, …, N, with y_i ∈ {−1, +1}; h(x) is the weak estimator and K is the number of estimators.

The AdaBoost algorithm proceeds as follows:

Initialize the sample weight vector ω_1:

$$\omega_{1,i} = \frac{1}{N} \quad i = 1,\dots,N$$

Loop over the K estimators (k = 1, …, K):

Train the estimator h_k(x) under the sample weights ω_k.

Compute the error rate e_k of round k:

$$e_k = \sum_{i=1}^{N} \omega_{k,i}\, I(y_i \ne h_k(X_i))$$

If the error rate e_k exceeds 0.5, stop the loop.

Compute the estimator weight α_k of round k:

$$\alpha_k = \frac{1}{2} \ln \frac{1 - e_k}{e_k}$$

Compute the weight vector ω_{k+1} for round k+1:

$$\omega_{k+1,i} = \frac{\omega_{k,i}\, e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \omega_{k,j}\, e^{-y_j \alpha_k h_k(X_j)}}$$

End of loop.

The final combination takes the sign of the weighted sum of the estimators, giving the strong estimator:

$$H(x) = \operatorname{sign} \left( \sum_{i=1}^{K} \alpha_i h_i(x) \right)$$
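The binary-classification steps above can be sketched from scratch, using one-level decision stumps as the weak estimator h(x); the toy dataset and helper names below are illustrative, not from the original text.

```python
# From-scratch sketch of the binary AdaBoost steps; decision stumps serve as
# the weak estimator h(x), and the toy data below is illustrative.
import numpy as np

def train_stump(X, y, w):
    """Return the (error, feature, threshold, sign) stump minimizing weighted error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, K=10):
    N = len(y)
    w = np.full(N, 1.0 / N)                   # initialize ω_1 = 1/N
    stumps, alphas = [], []
    for _ in range(K):
        e, j, t, s = train_stump(X, y, w)     # train h_k under ω_k
        if e >= 0.5:                          # stop once the error rate is too high
            break
        e = max(e, 1e-12)                     # guard against log(0)
        alpha = 0.5 * np.log((1 - e) / e)     # α_k = ½ ln((1 − e_k) / e_k)
        pred = np.where(X[:, j] <= t, s, -s)
        w = w * np.exp(-y * alpha * pred)     # reweight: up-weight mistakes
        w /= w.sum()                          # normalize ω_{k+1}
        stumps.append((j, t, s))
        alphas.append(alpha)
    return stumps, alphas

def predict(X, stumps, alphas):
    score = np.zeros(len(X))
    for (j, t, s), a in zip(stumps, alphas):
        score += a * np.where(X[:, j] <= t, s, -s)
    return np.sign(score)                     # H(x) = sign(Σ α_i h_i(x))

rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1, 1, -1)    # diagonal decision boundary
stumps, alphas = adaboost(X, y, K=20)
accuracy = np.mean(predict(X, stumps, alphas) == y)
```

Even though each stump alone can only draw an axis-aligned cut, the weighted combination approximates the diagonal boundary well after a few rounds.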

Multi-class classification

Suppose the training set is T = {(X_i, y_i)}, i = 1, …, N, where y can take M distinct values; h(x) is the weak estimator and K is the number of estimators.

The AdaBoost-SAMME algorithm proceeds as follows:

Initialize the sample weight vector ω_1:

$$\omega_{1,i} = \frac{1}{N} \quad i = 1,\dots,N$$

Loop over the K estimators (k = 1, …, K):

Train the estimator h_k(x) under the sample weights ω_k.

Compute the error rate e_k of round k:

$$e_k = \sum_{i=1}^{N} \omega_{k,i}\, I(y_i \ne h_k(X_i))$$

Compute the estimator weight α_k of round k:

$$\alpha_k = \ln \frac{1 - e_k}{e_k} + \ln (M - 1)$$

Compute the (unnormalized) weight vector for round k+1:

$$\bar{\omega}_{k+1,i} = \omega_{k,i}\, e^{\alpha_k I(y_i \ne h_k(X_i))}$$

Normalize the weight vector ω_{k+1}:

$$\omega_{k+1,i} = \frac{\bar{\omega}_{k+1,i}}{\sum_{j=1}^{N} \bar{\omega}_{k+1,j}}$$

End of loop.

The final combination weights each estimator's vote by α_i and picks the class with the largest total, giving the strong estimator:

$$H(x) = \underset{m}{\operatorname{argmax}} \left( \sum_{i=1}^{K} \alpha_i\, I(h_i(x) = m) \right)$$
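A single SAMME round (the weighted error rate, the estimator weight with its extra ln(M − 1) term, and the re-weighting step) can be sketched as follows; the label and prediction vectors are illustrative.

```python
# One SAMME round under stated assumptions: `pred` holds a trained weak
# learner's class predictions; the data below is illustrative.
import numpy as np

def samme_round(w, y, pred, M):
    miss = (pred != y).astype(float)
    e = np.sum(w * miss)                          # e_k = Σ ω_{k,i} I(y_i ≠ h_k(X_i))
    alpha = np.log((1 - e) / e) + np.log(M - 1)   # α_k, positive while e_k < (M−1)/M
    w = w * np.exp(alpha * miss)                  # up-weight misclassified samples
    return w / w.sum(), alpha                     # normalized ω_{k+1} and α_k

y = np.array([0, 1, 2, 1, 0, 2])
pred = np.array([0, 1, 1, 1, 0, 2])               # one mistake, at index 2
w = np.full(6, 1.0 / 6.0)                         # ω_1 = 1/N
w, alpha = samme_round(w, y, pred, M=3)
```

Note the ln(M − 1) term: it keeps α_k positive as long as the weak learner beats random guessing (error below (M − 1)/M), which is a weaker requirement than the binary 0.5 threshold.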

The AdaBoost-SAMME.R algorithm proceeds as follows:

Initialize the sample weight vector ω_1:

$$\omega_{1,i} = \frac{1}{N} \quad i = 1,\dots,N$$

Loop over the K estimators (k = 1, …, K):

Under the sample weights ω_k, fit an estimator and obtain the weighted class-probability estimates p_k:

$$p_k^m(x) = P(y = m \mid x)$$

Compute the weight vector for round k+1, where the label is first re-encoded as the vector ŷ:

$$\hat{y}^m = \begin{cases} 1 & y = m \\ -\frac{1}{M-1} & y \ne m \end{cases} \quad m = 1,\dots,M$$

$$\bar{\omega}_{k+1,i} = \omega_{k,i}\, e^{-\frac{M-1}{M} \hat{y}_i^T \ln p_k(X_i)}$$

Normalize the weight vector ω_{k+1}:

$$\omega_{k+1,i} = \frac{\bar{\omega}_{k+1,i}}{\sum_{j=1}^{N} \bar{\omega}_{k+1,j}}$$

End of loop.

The final combination computes a score from the class-probability estimates and picks the class with the largest accumulated score, giving the strong estimator:

$$\begin{aligned} h_k^m(x) &= (M - 1) \left( \ln p_k^m(x) - \frac{1}{M} \sum_{i=1}^{M} \ln p_k^i(x) \right) \\ H(x) &= \underset{m}{\operatorname{argmax}} \left( \sum_{i=1}^{K} h_i^m(x) \right) \end{aligned}$$
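The per-estimator score h_k(x) in the formula above can be computed directly from a class-probability vector; the probability values below are made up for illustration.

```python
# Sketch of the SAMME.R per-estimator score h_k^m(x) computed from one
# estimator's class-probability vector; the probabilities are illustrative.
import numpy as np

def samme_r_score(p, M):
    # h_k^m(x) = (M − 1) (ln p_k^m(x) − (1/M) Σ_i ln p_k^i(x))
    logp = np.log(np.clip(p, 1e-12, None))   # clip to avoid log(0)
    return (M - 1) * (logp - logp.mean())

p = np.array([0.7, 0.2, 0.1])                # one estimator's P(y = m | x)
h = samme_r_score(p, M=3)
pred = int(np.argmax(h))                     # class with the largest score
```

In the full algorithm these score vectors are summed over all K estimators before the argmax; note that each individual score vector sums to zero by construction, since the per-class log-probability has its mean subtracted.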

Regression

Suppose the training set is T = {(X_i, y_i)}, i = 1, …, N; h(x) is the weak estimator and K is the number of estimators.

The AdaBoost.R2 algorithm proceeds as follows:

Initialize the sample weight vector ω_1:

$$\omega_{1,i} = \frac{1}{N} \quad i = 1,\dots,N$$

Loop over the K estimators (k = 1, …, K):

Train the estimator h_k(x) under the sample weights ω_k.

Compute the maximum error E_k:

$$E_k = \max_i \left| y_i - h_k(X_i) \right|$$

Compute the error rate e_k of round k, using one of three loss functions:

$$\begin{aligned} e_{k,i} &= \frac{\left| y_i - h_k(X_i) \right|}{E_k} & \text{linear loss} \\ e_{k,i} &= \frac{\left( y_i - h_k(X_i) \right)^2}{E_k^2} & \text{square loss} \\ e_{k,i} &= 1 - e^{-\frac{\left| y_i - h_k(X_i) \right|}{E_k}} & \text{exponential loss} \\ e_k &= \sum_{i=1}^{N} \omega_{k,i}\, e_{k,i} \end{aligned}$$

If the error rate e_k exceeds 0.5, stop the loop.

Compute the estimator weight α_k of round k:

$$\alpha_k = \frac{e_k}{1 - e_k}$$

Compute the (unnormalized) weight vector for round k+1:

$$\bar{\omega}_{k+1,i} = \omega_{k,i}\, \alpha_k^{1 - e_{k,i}}$$

Normalize the weight vector ω_{k+1}:

$$\omega_{k+1,i} = \frac{\bar{\omega}_{k+1,i}}{\sum_{j=1}^{N} \bar{\omega}_{k+1,j}}$$

End of loop.

The final combination takes the prediction of the estimator that sits at the weighted median of the estimator weights, giving the strong estimator:

$$H(x) = \inf \left\{ y \in A : \sum_{h_i(x) \le y} \ln \frac{1}{\alpha_i} \ge \frac{1}{2} \sum_{i=1}^{K} \ln \frac{1}{\alpha_i} \right\}$$
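The weighted-median combination above can be sketched as follows, with weights ln(1/α_i); the prediction and α values below are illustrative.

```python
# Sketch of the AdaBoost.R2 combination rule: a weighted median of the
# estimators' predictions with weights ln(1/α_i); values are illustrative.
import numpy as np

def weighted_median(preds, alphas):
    preds = np.asarray(preds)
    logw = np.log(1.0 / np.asarray(alphas))   # ln(1/α_i): smaller α_i, larger weight
    order = np.argsort(preds)                  # sort predictions ascending
    csum = np.cumsum(logw[order])
    # smallest prediction whose cumulative weight reaches half the total weight
    idx = np.searchsorted(csum, 0.5 * csum[-1])
    return preds[order][idx]

preds = [1.0, 3.0, 2.0]                        # the K estimators' outputs h_i(x)
alphas = [0.5, 0.2, 0.1]                       # α_k = e_k / (1 − e_k), all in (0, 1)
prediction = weighted_median(preds, alphas)
```

Because α_k = e_k / (1 − e_k) < 1 when e_k < 0.5, the weight ln(1/α_k) is positive, and more accurate estimators (smaller α_k) pull the median toward their predictions.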

4. Derivations

Derivation of AdaBoost

As in the algorithm-steps section, suppose the training set is T = {(X_i, y_i)}, i = 1, …, N, with y_i ∈ {−1, +1}; h(x) is the weak estimator and K is the number of estimators.

One way to interpret AdaBoost is as an additive model: the final strong estimator H(x) is a weighted sum of the individual estimators h(x), as follows:

(1) The strong estimator after round k-1

(2) The strong estimator after round k

(3) The round-k strong estimator expressed as the round-(k-1) strong estimator plus the weighted round-k estimator

$$\begin{aligned} H_{k-1}(x) &= \sum_{i=1}^{k-1} \alpha_i h_i(x) & (1) \\ H_k(x) &= \sum_{i=1}^{k} \alpha_i h_i(x) & (2) \\ H_k(x) &= H_{k-1}(x) + \alpha_k h_k(x) & (3) \end{aligned}$$

Eq. 4-1

Next we define the cost function of the final strong estimator. AdaBoost uses the exponential cost, which has nicer mathematical properties than the 0/1 loss.

(1) The exponential cost function

(2) Substitute (3) of Eq. 4-1

(3) The goal is to find the optimal estimator weight α and estimator h(x)

(4) Define a new variable ω̄ that collects everything independent of α and h(x), namely the previous round's strong estimator

(5) Substitute ω̄

$$\begin{aligned} Cost(H(x)) &= \sum_{i=1}^{N} e^{-y_i H(X_i)} & (1) \\ Cost(\alpha, h(x)) &= \sum_{i=1}^{N} e^{-y_i (H_{k-1}(X_i) + \alpha h(X_i))} & (2) \\ \alpha_k, h_k(x) &= \underset{\alpha,\, h(x)}{\operatorname{argmin}} \sum_{i=1}^{N} e^{-y_i (H_{k-1}(X_i) + \alpha h(X_i))} & (3) \\ \bar{\omega}_{k,i} &= e^{-y_i H_{k-1}(X_i)} & (4) \\ \alpha_k, h_k(x) &= \underset{\alpha,\, h(x)}{\operatorname{argmin}} \sum_{i=1}^{N} \bar{\omega}_{k,i}\, e^{-y_i \alpha h(X_i)} & (5) \end{aligned}$$

Eq. 4-2

Consider the estimator h(x) first: after each round of training the estimator itself is already fixed, so we only need to solve for the estimator weight α.

(1) Find the weight α that minimizes the cost function

(2) The cost function Cost(α)

(3) Since the label is ±1, split the sum into correctly and incorrectly classified samples

(4) Add and subtract the same term, which does not change the result

(5) Merge the first two and the last two terms of (4)

$$\begin{aligned} \alpha_k &= \underset{\alpha}{\operatorname{argmin}} \sum_{i=1}^{N} \bar{\omega}_{k,i}\, e^{-y_i \alpha h_k(X_i)} & (1) \\ Cost(\alpha) &= \sum_{i=1}^{N} \bar{\omega}_{k,i}\, e^{-y_i \alpha h_k(X_i)} & (2) \\ &= \sum_{y_i = h_k(X_i)} \bar{\omega}_{k,i}\, e^{-\alpha} + \sum_{y_i \ne h_k(X_i)} \bar{\omega}_{k,i}\, e^{\alpha} & (3) \\ &= \sum_{y_i = h_k(X_i)} \bar{\omega}_{k,i}\, e^{-\alpha} + \sum_{y_i \ne h_k(X_i)} \bar{\omega}_{k,i}\, e^{-\alpha} - \sum_{y_i \ne h_k(X_i)} \bar{\omega}_{k,i}\, e^{-\alpha} + \sum_{y_i \ne h_k(X_i)} \bar{\omega}_{k,i}\, e^{\alpha} & (4) \\ &= e^{-\alpha} \sum_{i=1}^{N} \bar{\omega}_{k,i} + \left( e^{\alpha} - e^{-\alpha} \right) \sum_{i=1}^{N} \bar{\omega}_{k,i}\, I(y_i \ne h_k(X_i)) & (5) \end{aligned}$$

Eq. 4-3

(1) Differentiate the cost function and set the derivative to zero

(2) Define the error rate e_k

(3) Divide (1) by the sum of ω̄ and substitute the error rate e_k

(4) Multiply both sides by e^α

(5) Rearrange

(6) Solve for the estimator weight α

$$\begin{aligned} \frac{\partial Cost(\alpha)}{\partial \alpha} &= -e^{-\alpha} \sum_{i=1}^{N} \bar{\omega}_{k,i} + \left( e^{\alpha} + e^{-\alpha} \right) \sum_{i=1}^{N} \bar{\omega}_{k,i}\, I(y_i \ne h_k(X_i)) = 0 & (1) \\ e_k &= \frac{\sum_{i=1}^{N} \bar{\omega}_{k,i}\, I(y_i \ne h_k(X_i))}{\sum_{i=1}^{N} \bar{\omega}_{k,i}} & (2) \\ 0 &= -e^{-\alpha} + \left( e^{\alpha} + e^{-\alpha} \right) e_k & (3) \\ 0 &= -1 + \left( e^{2\alpha} + 1 \right) e_k & (4) \\ e^{2\alpha} &= \frac{1 - e_k}{e_k} & (5) \\ \alpha &= \frac{1}{2} \ln \frac{1 - e_k}{e_k} & (6) \end{aligned}$$

Eq. 4-4
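Step (6) can be sanity-checked numerically: for a fixed error rate, the closed-form α should match a brute-force grid minimization of the cost with normalized weights. The value e_k = 0.3 is an arbitrary illustration.

```python
# Numeric sanity check of α = ½ ln((1 − e_k)/e_k): compare the closed form
# against a brute-force grid minimization of the cost; e_k = 0.3 is arbitrary.
import numpy as np

e_k = 0.3

def cost(a):
    # Cost(α) = e^{−α} (1 − e_k) + e^{α} e_k, i.e. the cost in (5) with Σω̄ = 1
    return np.exp(-a) * (1 - e_k) + np.exp(a) * e_k

alpha_closed = 0.5 * np.log((1 - e_k) / e_k)   # the closed form from step (6)
grid = np.linspace(-2.0, 2.0, 4001)            # grid spacing 0.001
alpha_grid = grid[np.argmin(cost(grid))]
```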

(1) The definition of the error rate e_k

(2) Define the normalized weight ω_k

(3) The error rate e_k expressed in terms of ω_k

$$\begin{aligned} e_k &= \frac{\sum_{i=1}^{N} \bar{\omega}_{k,i}\, I(y_i \ne h_k(X_i))}{\sum_{i=1}^{N} \bar{\omega}_{k,i}} & (1) \\ \omega_{k,i} &= \frac{\bar{\omega}_{k,i}}{\sum_{j=1}^{N} \bar{\omega}_{k,j}} & (2) \\ e_k &= \sum_{i=1}^{N} \omega_{k,i}\, I(y_i \ne h_k(X_i)) & (3) \end{aligned}$$

Eq. 4-5

Next comes the update rule for ω:

(1) The definition of ω̄_{k+1}

(2) Substitute (3) of Eq. 4-1

(3) Rewrite in terms of ω̄_k

$$\begin{aligned} \bar{\omega}_{k+1,i} &= e^{-y_i H_k(X_i)} & (1) \\ &= e^{-y_i (H_{k-1}(X_i) + \alpha_k h_k(X_i))} & (2) \\ &= \bar{\omega}_{k,i}\, e^{-y_i \alpha_k h_k(X_i)} & (3) \end{aligned}$$

Eq. 4-6

(1) Start from (3) of Eq. 4-6

(2) Divide both sides by the normalization constant

(3) Replace the numerator by the definition in (2) of Eq. 4-5 and expand the denominator using (1)

(4) Replace the denominator again by the definition in (2) of Eq. 4-5

(5) The sum of ω̄ is a constant C

(6) The constant C cancels between numerator and denominator, giving the update rule for ω

$$\begin{aligned} \bar{\omega}_{k+1,i} &= \bar{\omega}_{k,i}\, e^{-y_i \alpha_k h_k(X_i)} & (1) \\ \omega_{k+1,i} &= \frac{\bar{\omega}_{k,i}\, e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \bar{\omega}_{k+1,j}} & (2) \\ &= \frac{\omega_{k,i} \left( \sum_{j=1}^{N} \bar{\omega}_{k,j} \right) e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \left( \bar{\omega}_{k,j}\, e^{-y_j \alpha_k h_k(X_j)} \right)} & (3) \\ &= \frac{\omega_{k,i} \left( \sum_{j=1}^{N} \bar{\omega}_{k,j} \right) e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \left( \omega_{k,j} \left( \sum_{l=1}^{N} \bar{\omega}_{k,l} \right) e^{-y_j \alpha_k h_k(X_j)} \right)} & (4) \\ &= \frac{\omega_{k,i}\, C\, e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \left( \omega_{k,j}\, C\, e^{-y_j \alpha_k h_k(X_j)} \right)} & (5) \\ &= \frac{\omega_{k,i}\, e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \left( \omega_{k,j}\, e^{-y_j \alpha_k h_k(X_j)} \right)} & (6) \end{aligned}$$

Eq. 4-7

Combining Eq. 4-1 through Eq. 4-7 gives the full set of AdaBoost update equations:

$$\begin{aligned} e_k &= \sum_{i=1}^{N} \omega_{k,i}\, I(y_i \ne h_k(X_i)) & (1) \\ \alpha_k &= \frac{1}{2} \ln \frac{1 - e_k}{e_k} & (2) \\ \omega_{k+1,i} &= \frac{\omega_{k,i}\, e^{-y_i \alpha_k h_k(X_i)}}{\sum_{j=1}^{N} \omega_{k,j}\, e^{-y_j \alpha_k h_k(X_j)}} & (3) \\ H(x) &= \operatorname{sign} \left( \sum_{i=1}^{K} \alpha_i h_i(x) \right) & (4) \end{aligned}$$

Eq. 4-8

Because the full article is too long to publish in a single CSDN post, it has been split into two parts.


This article was first published on AI导图; you are welcome to follow it there.
