
白话机器学习的数学: Evaluation

Cross-Validation

Split the full set of available training data into two parts: one for testing and one for training. Train the model on the latter, then use the former to evaluate it. Cross-validation repeats this split-and-evaluate procedure over different partitions of the data and averages the results.
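A minimal sketch of a single split, assuming an 8:2 ratio and a random shuffle (both are illustrative choices, not prescribed above):

import numpy as np

# Hypothetical data: inputs x and noisy labels y (any dataset works here)
x = np.linspace(-2, 2, 10)
y = 0.1 * (x ** 3 + x ** 2 + x) + np.random.randn(x.size) * 0.05

# Shuffle the indices, then hold out the last 20% of points for testing
idx = np.random.permutation(x.size)
n_train = int(x.size * 0.8)
train_x, train_y = x[idx[:n_train]], y[idx[:n_train]]
test_x, test_y = x[idx[n_train:]], y[idx[n_train:]]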

Validation for Regression Problems

\[ \frac{1}{n}\sum_{i=1}^{n}{(y^{(i)}-f_\theta(x^{(i)}))^2} \]
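In code this is just the mean squared error over the held-out examples. A sketch, where `model` stands in for whatever trained predictor f_θ is being evaluated:

import numpy as np

def mse(model, x, y):
    # (1/n) * sum of squared residuals between labels and predictions
    return np.mean((y - model(x)) ** 2)

The smaller this value on the test set, the better the model generalizes.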

Validation for Classification Problems

Accuracy

\[ Accuracy = \frac{TP + TN}{TP + FP + FN + TN} \]

Precision

\[ Precision = \frac{TP}{TP + FP} \]

Recall

\[ Recall = \frac{TP}{TP + FN} \]

F-measure

\[ F\text{-}measure = \frac{2}{\frac{1}{Precision} + \frac{1}{Recall}} \]
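All four metrics follow directly from the confusion-matrix counts TP, FP, FN, and TN. A sketch computing them from arrays of true and predicted 0/1 labels (the function name and the 0/1 encoding are assumptions):

import numpy as np

def evaluate(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)  # assumes at least one positive prediction
    recall = tp / (tp + fn)     # assumes at least one positive label
    f_measure = 2 / (1 / precision + 1 / recall)
    return accuracy, precision, recall, f_measure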

Regularization

With L2 regularization, the gradient-descent updates take the following form. The bias term θ0 (whose feature x0 is fixed at 1 by convention) is excluded from the penalty:

\[ \theta_0 := \theta_0 - \eta\sum_{i=1}^{n}{(f_\theta(x^{(i)})-y^{(i)})x_0^{(i)}} \]

\[ \theta_j := \theta_j - \eta\left(\sum_{i=1}^{n}{(f_\theta(x^{(i)})-y^{(i)})x_j^{(i)}}+\lambda\theta_j\right) \quad (j \ge 1) \]
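A sketch of one such update step, assuming the design matrix X carries a leading column of ones so that theta[0] plays the role of θ0 (the same convention as the code example at the end of this section):

import numpy as np

def gradient_step(theta, X, y, eta, lam):
    # Sum over i of (f_theta(x) - y) * x, for every parameter at once
    grad = np.dot(np.dot(X, theta) - y, X)
    # Penalty gradient: lambda * theta, with the bias theta[0] zeroed out
    reg = lam * np.hstack([0, theta[1:]])
    return theta - eta * (grad + reg)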

Regularization for Regression

The regularized objective is the original squared-error term C(θ) plus a penalty term R(θ); note the penalty sum starts at j = 1, so θ0 stays unregularized:

\[ C(\theta) = \frac{1}{2}\sum_{i=1}^{n}{(y^{(i)}-f_\theta(x^{(i)}))^2} \]

\[ R(\theta) = \frac{\lambda}{2}\sum_{j=1}^{m}{\theta_j^2} \]

\[ E(\theta) = C(\theta) + R(\theta) \]
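A direct translation into code (a sketch; the penalty sum starting at j = 1 becomes theta[1:]):

import numpy as np

def regularized_objective(theta, X, y, lam):
    C = 0.5 * np.sum((y - np.dot(X, theta)) ** 2)  # squared-error term
    R = lam / 2 * np.sum(theta[1:] ** 2)           # penalty, skipping theta[0]
    return C + R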

Regularization for Classification

For classification, the same penalty term is added to the cross-entropy objective:

\[ C(\theta) = -\sum_{i=1}^{n}{(y^{(i)}\log f_\theta(x^{(i)})+(1-y^{(i)})\log (1-f_\theta(x^{(i)})))} \]

\[ R(\theta) = \frac{\lambda}{2}\sum_{j=1}^{m}{\theta_j^2} \]

\[ E(\theta) = C(\theta) + R(\theta) \]
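A sketch of the classification counterpart, assuming f_θ is the usual sigmoid (logistic) model, which this section does not restate:

import numpy as np

def regularized_log_loss(theta, X, y, lam):
    p = 1 / (1 + np.exp(-np.dot(X, theta)))  # f_theta(x) via the sigmoid
    C = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    R = lam / 2 * np.sum(theta[1:] ** 2)     # same penalty as in regression
    return C + R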

Code Example

import numpy as np
import matplotlib.pyplot as plt

# The true function
def g(x):
    return 0.1 * (x ** 3 + x ** 2 + x)

# Training data: the true function plus a little noise
train_x = np.linspace(-2, 2, 8)
train_y = g(train_x) + np.random.randn(train_x.size) * 0.05

# Standardization
mu = train_x.mean()
sigma = train_x.std()
def standardize(x):
    return (x - mu) / sigma
train_z = standardize(train_x)

# Build the design matrix for the training data
def to_matrix(x):
    return np.vstack([
        np.ones(x.size),
        x,
        x ** 2,
        x ** 3,
        x ** 4,
        x ** 5,
        x ** 6,
        x ** 7,
        x ** 8,
        x ** 9,
        x ** 10
    ]).T
X = to_matrix(train_z)

# Prediction function
def f(x):
    return np.dot(x, theta)

# Objective function
def E(x, y):
    return 0.5 * np.sum((y - f(x)) ** 2)

# Initialize the parameters
theta = np.random.randn(X.shape[1])
# Regularization constant
LAMBDA = 0.5
# Learning rate
ETA = 1e-4
# Error difference between iterations
diff = 1
# Iterate (without regularization)
error = E(X, train_y)
while diff > 1e-6:
    theta = theta - ETA * (np.dot(f(X) - train_y, X))
    current_error = E(X, train_y)
    diff = error - current_error
    error = current_error
theta1 = theta

# Iterate (with regularization)
theta = np.random.randn(X.shape[1])
diff = 1
error = E(X, train_y)
while diff > 1e-6:
    # Exclude the bias theta[0] from the penalty
    reg_term = LAMBDA * np.hstack([0, theta[1:]])
    theta = theta - ETA * (np.dot(f(X) - train_y, X) + reg_term)
    current_error = E(X, train_y)
    diff = error - current_error
    error = current_error
theta2 = theta

# Plot to compare the two fits
plt.plot(train_z, train_y, 'o')
z = standardize(np.linspace(-2, 2, 100))
# Without regularization
theta = theta1
plt.plot(z, f(to_matrix(z)), linestyle='dashed')
# With regularization
theta = theta2
plt.plot(z, f(to_matrix(z)))
plt.show()