```python
import math
```
A First Look at Quantitative Trading with Python
Building the Core of the Trade Execution Layer
```python
import base64
```
Connecting to and Scraping Market Data
```python
import copy
```
Strategy and Backtesting System
Utils.py
```python
import os.path as path
```
Strategy.py
```python
import abc
```
Backtest.py
```python
from numbers import Number
```
Concurrent Programming in Python
Single-Threaded
```python
import time
```
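The original code block here survives only as its first line (`import time`). A single-threaded version of this kind of example downloads sites one after another and times the total; a minimal sketch, where `download_one`, `download_all`, the simulated delay, and the URLs are all assumptions for illustration:

```python
import time

def download_one(site):
    # Simulate a slow I/O-bound request; a real version might use requests.get(site).
    time.sleep(0.01)
    return f"content of {site}"

def download_all(sites):
    # Single-threaded: each download starts only after the previous one finishes.
    return [download_one(site) for site in sites]

sites = [f"https://example.com/page/{i}" for i in range(5)]
start = time.perf_counter()
results = download_all(sites)
elapsed = time.perf_counter() - start
print(f"Downloaded {len(results)} sites in {elapsed:.2f} s")
```

With 5 sites the total time is roughly 5 × the per-site delay, since nothing overlaps.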
Multithreading
```python
import concurrent.futures
```
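Only `import concurrent.futures` survives from the original block. A sketch of the thread-pool variant of the same assumed download example: threads let the waiting periods of I/O-bound tasks overlap, so total time approaches the single slowest task rather than the sum.

```python
import time
import concurrent.futures

def download_one(site):
    time.sleep(0.01)  # stand-in for a blocking network call
    return f"content of {site}"

def download_all(sites):
    # Each download runs in its own worker thread; waits overlap.
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        return list(executor.map(download_one, sites))

sites = [f"https://example.com/page/{i}" for i in range(5)]
start = time.perf_counter()
results = download_all(sites)
elapsed = time.perf_counter() - start
```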
Multiprocessing
```python
def download_all(sites):
```
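The body of this block is lost; only the `download_all(sites)` signature remains. Since process pools pay off mainly for CPU-bound work (threads cannot parallelize it because of the GIL), here is a sketch with an assumed CPU-bound function instead; the pool runs under a `__main__` guard so it stays safe on spawn-based platforms:

```python
import concurrent.futures

def cpu_heavy(n):
    # CPU-bound: sum of squares. Threads would not speed this up (GIL);
    # separate processes can.
    return sum(i * i for i in range(n))

def compute_all(numbers):
    # Each task runs in its own process with its own interpreter.
    with concurrent.futures.ProcessPoolExecutor() as executor:
        return list(executor.map(cpu_heavy, numbers))

if __name__ == "__main__":
    print(compute_all([10_000, 20_000, 30_000]))
```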
Asyncio
```python
import asyncio
```
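Again only the first line survives. An asyncio version of the same assumed download example runs all tasks concurrently on a single thread, with `await` handing control back to the event loop during each simulated wait:

```python
import asyncio

async def download_one(site):
    # await yields to the event loop while "waiting on the network".
    await asyncio.sleep(0.01)
    return f"content of {site}"

async def download_all(sites):
    # Schedule all coroutines concurrently and collect results in order.
    return await asyncio.gather(*(download_one(site) for site in sites))

sites = [f"https://example.com/page/{i}" for i in range(5)]
results = asyncio.run(download_all(sites))
```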
How to Choose
```python
if io_bound:
```
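The surviving first line `if io_bound:` points at the usual rule of thumb for choosing a concurrency model. Expressed as a small helper (the function name and flag names are assumptions):

```python
def choose_concurrency_model(io_bound, io_slow):
    # Rule of thumb:
    #   I/O-bound with slow I/O or many connections -> asyncio
    #   I/O-bound with fast I/O or few connections  -> multithreading
    #   CPU-bound                                   -> multiprocessing
    if io_bound:
        return "asyncio" if io_slow else "multithreading"
    return "multiprocessing"
```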
Plain-Language Math for Machine Learning: Evaluation
Cross-Validation
Split all of the available training data into two parts, one for testing and one for training, and then use the test portion to evaluate the model.
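The split described above can be sketched as follows; the helper name `train_test_split`, the 30% test ratio, and the toy data are assumptions for illustration:

```python
import numpy as np

def train_test_split(X, y, test_ratio=0.3, seed=0):
    # Shuffle indices, then hold out the first test_ratio fraction for evaluation.
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    n_test = int(len(X) * test_ratio)
    test_idx, train_idx = indices[:n_test], indices[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
X_train, X_test, y_train, y_test = train_test_split(X, y)
```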
Validation for Regression Problems
\[ \frac{1}{n}\sum_{i=1}^{n}{(y^{(i)}-f_\theta(x^{(i)}))^2} \]
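The mean squared error above translates directly into NumPy; the toy vectors below are assumed:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: (1/n) * sum((y_i - f_theta(x_i))^2)
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.0, 2.0, 5.0])
error = mse(y_true, y_pred)  # (0 + 0 + 4) / 3
```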
Validation for Classification Problems
Accuracy
\[ Accuracy = \frac{TP + TN}{TP + FP + FN + TN} \]
Precision
\[ Precision = \frac{TP}{TP + FP} \]
Recall
\[ Recall = \frac{TP}{TP + FN} \]
F-measure
\[ \text{F-measure} = \frac{2}{\frac{1}{Precision} + \frac{1}{Recall}} \]
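The four formulas above translate directly into code; the confusion-matrix counts below are assumed for illustration:

```python
def classification_metrics(tp, fp, fn, tn):
    # Direct translations of the accuracy, precision, recall, and F-measure formulas.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 / (1 / precision + 1 / recall)
    return accuracy, precision, recall, f_measure

# Assumed counts: 40 true positives, 10 false positives, 20 false negatives, 30 true negatives.
acc, prec, rec, f = classification_metrics(tp=40, fp=10, fn=20, tn=30)
```

Because F-measure is a harmonic mean, it penalizes imbalance: here precision 0.8 and recall ≈ 0.67 give F ≈ 0.73, below their arithmetic mean.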
Regularization
\[ \theta_0 := \theta_0 - \eta\sum_{i=1}^{n}{(f_\theta(x^{(i)})-y^{(i)})x_0^{(i)}} \]
\[ \theta_j := \theta_j - \eta(\sum_{i=1}^{n}{(f_\theta(x^{(i)})-y^{(i)})x_j^{(i)}}+\lambda\theta_j) \quad (j > 0) \]
Regularization for Regression
\[ C(\theta) = \frac{1}{2}\sum_{i=1}^{n}{(y^{(i)}-f_\theta(x^{(i)}))^2} \]
\[ R(\theta) = \frac{\lambda}{2}\sum_{j=1}^{m}{\theta_j^2} \]
\[ E(\theta) = C(\theta) + R(\theta) \]
Regularization for Classification
\[ C(\theta) = -\sum_{i=1}^{n}{(y^{(i)}\log f_\theta(x^{(i)})+(1-y^{(i)})\log (1-f_\theta(x^{(i)})))} \]
\[ R(\theta) = \frac{\lambda}{2}\sum_{j=1}^{m}{\theta_j^2} \]
\[ E(\theta) = C(\theta) + R(\theta) \]
Code Example
```python
import numpy as np
```
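The code example here also survives only as its first line. Below is a minimal sketch of minimizing E(θ) = C(θ) + R(θ) by gradient descent on a toy linear fit; the data, η, and λ are assumptions. Note that θ0 is excluded from the penalty, matching the update rules above:

```python
import numpy as np

# Assumed toy data: y = 2x + 1 exactly, so the unregularized slope would be 2.
train_x = np.arange(-4.0, 5.0)  # 9 points, mean 0
train_y = 2.0 * train_x + 1.0

def to_matrix(x):
    # Design matrix with a bias column: f_theta(x) = theta0 + theta1 * x
    return np.vstack([np.ones(len(x)), x]).T

X = to_matrix(train_x)
theta = np.zeros(2)
eta, lam = 0.01, 1.0

for _ in range(2000):
    error = X @ theta - train_y
    # Penalty gradient lambda * theta_j applies to every theta_j except theta0.
    reg = lam * np.hstack([0.0, theta[1:]])
    theta = theta - eta * (X.T @ error + reg)
```

With λ = 0 the slope converges to 2; the penalty shrinks it to 120/61 ≈ 1.97 while the unpenalized intercept stays at 1.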
Plain-Language Math for Machine Learning: Classification with Logistic Regression
Prediction Function: the Sigmoid Function
\[ f_\theta(x) = \frac{1}{1 + \exp(-\theta^Tx)} \]
Decision Boundary
\[ y = \begin{cases} 1 & (\theta^Tx \geq 0)\\ 0 & (\theta^Tx < 0) \end{cases} \]
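The two definitions above fit in a few lines of NumPy; θ and the sample points below are assumed toy values:

```python
import numpy as np

def f(X, theta):
    # Sigmoid prediction: 1 / (1 + exp(-theta^T x))
    return 1.0 / (1.0 + np.exp(-X @ theta))

def classify(X, theta):
    # Decision boundary: predict 1 exactly when theta^T x >= 0, i.e. f(x) >= 0.5.
    return (f(X, theta) >= 0.5).astype(int)

theta = np.array([-1.0, 1.0])           # assumed parameters
X = np.array([[1.0, 0.0], [1.0, 2.0]])  # rows are [x0 = 1, x1]
probs = f(X, theta)
labels = classify(X, theta)
```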
Objective Function: the Likelihood Function
\[ L(\theta) = \prod_{i=1}^{n}{P(y^{(i)}=1|x^{(i)})^{y^{(i)}}P(y^{(i)}=0|x^{(i)})^{1-y^{(i)}}} \]
The Log-Likelihood Function
\[ \log L(\theta) = \sum_{i=1}^{n}{(y^{(i)}\log f_\theta(x^{(i)})+(1-y^{(i)})\log (1-f_\theta(x^{(i)})))} \]
Parameter Update Rule
\[ \theta_j := \theta_j - \eta\sum_{i=1}^{n}{(f_\theta(x^{(i)}) - y^{(i)})x_j^{(i)}} \]
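A vectorized sketch of this update rule, iterated to convergence on an assumed two-sample dataset:

```python
import numpy as np

def update(theta, X, y, eta):
    # One batch step: theta_j := theta_j - eta * sum_i (f_theta(x^(i)) - y^(i)) x_j^(i)
    preds = 1.0 / (1.0 + np.exp(-X @ theta))
    return theta - eta * (X.T @ (preds - y))

# Assumed toy data: rows are [x0 = 1, x1]
X = np.array([[1.0, 0.5], [1.0, -0.5]])
y = np.array([1.0, 0.0])
theta = np.zeros(2)
for _ in range(1000):
    theta = update(theta, X, y, eta=0.1)
final_preds = 1.0 / (1.0 + np.exp(-X @ theta))
```

After training, the predicted probability approaches 1 for the positive sample and 0 for the negative one.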
Linearly Separable Problems
\[ \theta^Tx = \theta_0x_0 + \theta_1x_1 + \theta_2x_2 = \theta_0 + \theta_1x_1 + \theta_2x_2 = 0 \]
\[ x_2 = -\frac{\theta_0 + \theta_1x_1}{\theta_2} \]
```python
import numpy as np
```
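The original example survives only as its first line. A minimal sketch (data, learning rate, and iteration count are assumptions) that fits θ by the batch update rule and recovers the linear boundary x2 = -(θ0 + θ1 x1)/θ2:

```python
import numpy as np

# Assumed toy data: label 1 when x1 > x2, so a straight line separates the classes.
train_x = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 2.0], [1.5, 3.0]])
train_y = np.array([1.0, 1.0, 0.0, 0.0])

def to_matrix(x):
    # Prepend the constant feature x0 = 1.
    return np.hstack([np.ones((len(x), 1)), x])

X = to_matrix(train_x)
theta = np.zeros(3)
for _ in range(5000):
    preds = 1.0 / (1.0 + np.exp(-X @ theta))
    theta = theta - 0.1 * (X.T @ (preds - train_y))

labels = (X @ theta >= 0).astype(int)
# The boundary can then be plotted as x2 = -(theta[0] + theta[1] * x1) / theta[2].
```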
Linearly Inseparable Problems
\[ \theta^Tx = \theta_0x_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_1^2 = \theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_1^2 = 0 \]
\[ x_2 = -\frac{\theta_0 + \theta_1x_1 + \theta_3x_1^2}{\theta_2} \]
```python
import numpy as np
```
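Only the first line of this example remains. A sketch with assumed data that no straight line can separate: adding x1² as a feature, exactly as in the boundary equation above, lets logistic regression fit a curved boundary.

```python
import numpy as np

# Assumed toy data: inner points (small |x1|) are class 1, outer points class 0,
# regardless of x2 — impossible for a straight line, easy once x1^2 is a feature.
train_x = np.array([[0.0, 0.0], [0.5, 0.5], [-0.5, 0.5],
                    [1.0, 0.0], [-1.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
train_y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])

def to_matrix(x):
    # Feature vector [1, x1, x2, x1^2]; the squared term lets the boundary curve.
    return np.hstack([np.ones((len(x), 1)), x, x[:, :1] ** 2])

X = to_matrix(train_x)
theta = np.zeros(4)
for _ in range(20000):
    preds = 1.0 / (1.0 + np.exp(-X @ theta))
    theta = theta - 0.1 * (X.T @ (preds - train_y))

labels = (X @ theta >= 0).astype(int)
# Boundary: x2 = -(theta[0] + theta[1] * x1 + theta[3] * x1^2) / theta[2]
```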
Implementing Stochastic Gradient Descent
```python
import numpy as np
```
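Only the first line of the implementation survives. A sketch of stochastic gradient descent on the same kind of assumed toy data: unlike the batch rule, each update uses a single randomly chosen sample rather than the sum over all of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: label 1 when x1 > x2.
train_x = np.array([[2.0, 1.0], [3.0, 1.5], [1.0, 2.0], [1.5, 3.0]])
train_y = np.array([1.0, 1.0, 0.0, 0.0])
X = np.hstack([np.ones((len(train_x), 1)), train_x])
theta = np.zeros(3)
eta = 0.1

for _ in range(1000):                    # epochs
    for i in rng.permutation(len(X)):    # visit samples in a fresh random order
        pred = 1.0 / (1.0 + np.exp(-X[i] @ theta))
        # Update from this single sample only.
        theta = theta - eta * (pred - train_y[i]) * X[i]

labels = (X @ theta >= 0).astype(int)
```

Reshuffling every epoch keeps the updates from following a fixed cycle, which also helps the method escape shallow local minima on harder objectives.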