Industrial Steam Prediction (Latest Edition, Part 2)
5. Model Validation
5.1 Model Evaluation Concepts and Regularization
5.1.1 Overfitting and Underfitting
### Generate and plot the dataset
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(666)
x = np.random.uniform(-3.0, 3.0, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, size=100)
plt.scatter(x, y)
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.score(X, y)
# Output: 0.4953707811865009
The R² score is only 0.495, which is low: a straight line fits this quadratic data poorly (underfitting).
### Judging fit quality with mean squared error
from sklearn.metrics import mean_squared_error
y_predict = lin_reg.predict(X)
mean_squared_error(y, y_predict)
# Output: 3.0750025765636577
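MSE is only one of several regression metrics available in sklearn.metrics; mean absolute error and R² are often reported alongside it. A minimal sketch with hypothetical true/predicted values (not taken from this dataset):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Toy regression values, for illustration only
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

mse = mean_squared_error(y_true, y_pred)   # mean of squared residuals
mae = mean_absolute_error(y_true, y_pred)  # mean of absolute residuals
r2 = r2_score(y_true, y_pred)              # 1 - SS_res / SS_tot

print(mse, mae, r2)  # 0.375 0.5 0.948...
```

Lower MSE/MAE is better, while R² is better the closer it is to 1.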
### Plot the fitted line
y_predict = lin_reg.predict(X)
plt.scatter(x, y)
plt.plot(np.sort(x), y_predict[np.argsort(x)], color='r')
plt.show()
5.1.2 Regression Evaluation Metrics and How to Use Them
### Fitting with polynomial regression
# * Wrap the steps in a Pipeline
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialRegression(degree):
    return Pipeline([
        ('poly', PolynomialFeatures(degree=degree)),
        ('std_scaler', StandardScaler()),
        ('lin_reg', LinearRegression())
    ])
- Fit the data with the Pipeline: degree = 2
poly2_reg = PolynomialRegression(degree=2)
poly2_reg.fit(X, y)
y2_predict = poly2_reg.predict(X)
# Compare ground truth and predictions via mean squared error
mean_squared_error(y, y2_predict)
# Output: 1.0987392142417856
- Plot the fitted curve
plt.scatter(x, y)
plt.plot(np.sort(x), y2_predict[np.argsort(x)], color='r')
plt.show()
- Increase degree to 10
poly10_reg = PolynomialRegression(degree=10)
poly10_reg.fit(X, y)
y10_predict = poly10_reg.predict(X)
mean_squared_error(y, y10_predict)
# Output: 1.0508466763764164
plt.scatter(x, y)
plt.plot(np.sort(x), y10_predict[np.argsort(x)], color='r')
plt.show()
- Increase degree to 100
poly100_reg = PolynomialRegression(degree=100)
poly100_reg.fit(X, y)
y100_predict = poly100_reg.predict(X)
mean_squared_error(y, y100_predict)
# Output: 0.6874357783433694
plt.scatter(x, y)
plt.plot(np.sort(x), y100_predict[np.argsort(x)], color='r')
plt.show()
- Analysis
  - degree=2: MSE = 1.0987392142417856;
  - degree=10: MSE = 1.0508466763764164;
  - degree=100: MSE = 0.6874357783433694;
  - The larger the degree, the lower the training error: since the sample points are fixed, we can always find a curve that passes through every point, driving the overall MSE to 0. This is overfitting, however, not a better model;
  - The red curve is not the fitted polynomial itself; it merely connects the predicted y values at the existing data points, so in regions with no data points the connected segments deviate from the true fitted curve.
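The analysis above can be checked by holding out a test set: training error keeps falling as degree grows, while test error eventually explodes. A minimal sketch reusing the same data-generating code as above (the split's random_state is an arbitrary choice for this illustration):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Same synthetic quadratic data as above
np.random.seed(666)
x = np.random.uniform(-3.0, 3.0, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, size=100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)

def poly_reg(degree):
    return Pipeline([
        ('poly', PolynomialFeatures(degree=degree)),
        ('std_scaler', StandardScaler()),
        ('lin_reg', LinearRegression()),
    ])

# Training MSE shrinks with degree, but test MSE blows up for degree=100
for degree in (2, 10, 100):
    model = poly_reg(degree).fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(degree, train_mse, test_mse)
```

The degree-2 model has similar train and test error, while the degree-100 model scores much better on the training set than on the test set, which is exactly the overfitting described above.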
5.1.3 Cross-Validation
- Cross-validation iterators
K-fold: KFold divides all the samples into k groups, called folds, of equal size if possible (if k = n, this is equivalent to the Leave One Out strategy). The prediction function is learned using k - 1 folds, and the remaining fold is used for testing.
Repeated K-fold: RepeatedKFold repeats K-Fold n times. It can be used when one needs to run KFold n times, producing different splits in each repetition.
Leave-one-out: LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples we have n different training sets and n different test sets. This procedure does not waste much data, since only one sample is removed from each training set.
Leave-P-out: LeavePOut is very similar to LeaveOneOut in that it creates all possible training/test sets by removing p samples from the complete set. For n samples this produces $\binom{n}{p}$ train-test pairs. Unlike LeaveOneOut and KFold, the test sets overlap when p > 1.
User-defined splits: the ShuffleSplit iterator generates a user-defined number of independent train/test splits. Samples are first shuffled and then split into a pair of train and test sets.
Reproducible splits: setting an explicit random_state makes the pseudo-random generator, and therefore the splits, reproducible.
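The random_state point can be verified directly: two ShuffleSplit iterators built with the same seed produce identical splits. A small sketch on dummy data:

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit

X = np.arange(20).reshape(10, 2)  # 10 dummy samples

def first_split(random_state):
    ss = ShuffleSplit(n_splits=3, test_size=0.25, random_state=random_state)
    return next(ss.split(X))  # (train indices, test indices) of the first split

a = first_split(0)
b = first_split(0)
# With the same random_state the pseudo-random splits are identical
print(np.array_equal(a[0], b[0]) and np.array_equal(a[1], b[1]))  # True
```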
- Stratified cross-validation iterators based on class labels
StratifiedKFold is a variation of k-fold that returns stratified folds: each fold contains approximately the same proportion of each class as the complete dataset.
StratifiedShuffleSplit is a variation of ShuffleSplit that returns stratified splits, i.e. splits in which the proportion of each class matches the complete dataset.
- Cross-validation iterators for grouped data
GroupKFold is a variation of k-fold that ensures the same group is not represented in both the test and training sets. For example, if the data are obtained from different subjects with several samples per subject, and the model is flexible enough to learn from highly subject-specific features, it may fail to generalize to new subjects. GroupKFold makes it possible to detect this kind of overfitting.
LeaveOneGroupOut is a cross-validation scheme that holds out samples according to a third-party provided array of integer groups. This group information can be used to encode arbitrary domain-specific predefined cross-validation folds.
LeavePGroupsOut is similar to LeaveOneGroupOut, but removes the samples related to P groups for each training/test set.
The GroupShuffleSplit iterator combines ShuffleSplit and LeavePGroupsOut, generating a sequence of randomized partitions in which a subset of groups is held out for each split.
- Time-series splitting
TimeSeriesSplit is a variation of k-fold that returns the first k folds as the training set and the (k+1)-th fold as the test set. Note that, unlike standard cross-validation methods, successive training sets are supersets of those that come before them. It also adds all surplus data to the first training partition, which is always used to train the model.
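Both TimeSeriesSplit properties can be illustrated on 12 dummy samples: every test fold follows its training indices, and successive training sets are supersets of the earlier ones:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(12, 1)  # 12 ordered dummy samples
tscv = TimeSeriesSplit(n_splits=3)

prev_train = None
for train, test in tscv.split(X):
    # test indices always come after the training indices
    assert train.max() < test.min()
    # each training set is a superset of the previous one
    if prev_train is not None:
        assert set(prev_train) <= set(train)
    prev_train = train
    print(train, test)
```

With 12 samples and 3 splits the training sets have sizes 3, 6 and 9, each absorbing the previous test fold.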
from sklearn.model_selection import train_test_split, cross_val_score, cross_validate  # functions needed for cross-validation
from sklearn.model_selection import KFold, LeaveOneOut, LeavePOut, ShuffleSplit  # subset-splitting methods for cross-validation
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit  # stratified splits
from sklearn.model_selection import GroupKFold, LeaveOneGroupOut, LeavePGroupsOut, GroupShuffleSplit  # grouped splits
from sklearn.model_selection import TimeSeriesSplit  # time-series split
from sklearn import datasets  # built-in datasets
from sklearn import svm  # SVM algorithm
from sklearn import preprocessing  # preprocessing module
from sklearn.metrics import recall_score  # model metrics
iris = datasets.load_iris()  # load the dataset
print('Dataset size:', iris.data.shape, iris.target.shape)
# =========================== Split the dataset and train a model ===========================
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=0)  # split into train/test sets; test_size is the fraction held out for testing
print('Training set size:', X_train.shape, y_train.shape)
print('Test set size:', X_test.shape, y_test.shape)
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)  # train the model on the training set
print('Accuracy:', clf.score(X_test, y_test))  # compute the test-set metric (accuracy)
# If normalization is involved, the test set must use the scaler fitted on the training set.
scaler = preprocessing.StandardScaler().fit(X_train)  # fit the scaler (i.e. the subtract-then-divide transform) on the training set, then apply it to both sets
X_train_transformed = scaler.transform(X_train)
clf = svm.SVC(kernel='linear', C=1).fit(X_train_transformed, y_train)  # train on the scaled training set
X_test_transformed = scaler.transform(X_test)
print(clf.score(X_test_transformed, y_test))  # compute the test-set metric (accuracy)
# =========================== Evaluate the model directly with cross-validation ===========================
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, iris.data, iris.target, cv=5)  # cv is the number of folds
print(scores)  # metric (accuracy) for each fold
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))  # confidence interval (mean and two standard deviations)
# =========================== Multiple metrics at once ===========================
scoring = ['precision_macro', 'recall_macro']  # macro-averaged precision and recall
scores = cross_validate(clf, iris.data, iris.target, scoring=scoring, cv=5, return_train_score=True)
sorted(scores.keys())
print('Results:', scores)  # scores is a dict containing train scores, fit times and score times
# =========================== K-fold, leave-one-out, leave-p-out, shuffle-split ===========================
# K-fold split
kf = KFold(n_splits=2)
for train, test in kf.split(iris.data):
    print("K-fold split: %s %s" % (train.shape, test.shape))
    break
# Leave-one-out split
loo = LeaveOneOut()
for train, test in loo.split(iris.data):
    print("Leave-one-out split: %s %s" % (train.shape, test.shape))
    break
# Leave-p-out split
lpo = LeavePOut(p=2)
for train, test in lpo.split(iris.data):  # note: iterate lpo, not loo
    print("Leave-p-out split: %s %s" % (train.shape, test.shape))
    break
# Shuffle-split (random permutation)
ss = ShuffleSplit(n_splits=3, test_size=0.25, random_state=0)
for train_index, test_index in ss.split(iris.data):
    print("Shuffle split: %s %s" % (train_index.shape, test_index.shape))
    break
# =========================== Stratified K-fold and stratified shuffle-split ===========================
skf = StratifiedKFold(n_splits=3)  # each fold keeps roughly the same class proportions as the full dataset
for train, test in skf.split(iris.data, iris.target):
    print("Stratified K-fold split: %s %s" % (train.shape, test.shape))
    break
sss = StratifiedShuffleSplit(n_splits=3)  # each split keeps the same class proportions as the full dataset
for train, test in sss.split(iris.data, iris.target):
    print("Stratified shuffle split: %s %s" % (train.shape, test.shape))
    break
# =========================== Group K-fold, leave-one-group-out, leave-P-groups-out, GroupShuffleSplit ===========================
X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10]
y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"]
groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
# Group K-fold
gkf = GroupKFold(n_splits=3)  # training and test sets contain different groups
for train, test in gkf.split(X, y, groups=groups):
    print("Group K-fold split: %s %s" % (train, test))
# Leave one group out
logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=groups):
    print("Leave-one-group-out split: %s %s" % (train, test))
# Leave P groups out
lpgo = LeavePGroupsOut(n_groups=2)
for train, test in lpgo.split(X, y, groups=groups):
    print("Leave-P-groups-out split: %s %s" % (train, test))
# Group shuffle split
gss = GroupShuffleSplit(n_splits=4, test_size=0.5, random_state=0)
for train, test in gss.split(X, y, groups=groups):
    print("Group shuffle split: %s %s" % (train, test))
# =========================== Time-series split ===========================
tscv = TimeSeriesSplit(n_splits=3, max_train_size=None)
for train, test in tscv.split(iris.data):
    print("Time-series split: %s %s" % (train, test))
Dataset size: (150, 4) (150,)
Training set size: (90, 4) (90,)
Test set size: (60, 4) (60,)
Accuracy: 0.9666666666666667
0.9333333333333333
[0.96666667 1.         0.96666667 0.96666667 1.        ]
Accuracy: 0.98 (+/- 0.03)
Results: {'fit_time': array([0.000494  , 0.0005343 , 0.00048256, 0.00053048, 0.00047898]), 'score_time': array([0.00132895, 0.00126219, 0.00118518, 0.00140405, 0.00118995]), 'test_precision_macro': array([0.96969697, 1.        , 0.96969697, 0.96969697, 1.        ]), 'train_precision_macro': array([0.97674419, 0.97674419, 0.99186992, 0.98412698, 0.98333333]), 'test_recall_macro': array([0.96666667, 1.        , 0.96666667, 0.96666667, 1.        ]), 'train_recall_macro': array([0.975     , 0.975     , 0.99166667, 0.98333333, 0.98333333])}
K-fold split: (75,) (75,)
Leave-one-out split: (149,) (1,)
Leave-p-out split: (148,) (2,)
Shuffle split: (112,) (38,)
Stratified K-fold split: (100,) (50,)
Stratified shuffle split: (135,) (15,)
Group K-fold split: [0 1 2 3 4 5] [6 7 8 9]
Group K-fold split: [0 1 2 6 7 8 9] [3 4 5]
Group K-fold split: [3 4 5 6 7 8 9] [0 1 2]
Leave-one-group-out split: [3 4 5 6 7 8 9] [0 1 2]
Leave-one-group-out split: [0 1 2 6 7 8 9] [3 4 5]
Leave-one-group-out split: [0 1 2 3 4 5] [6 7 8 9]
Leave-P-groups-out split: [6 7 8 9] [0 1 2 3 4 5]
Leave-P-groups-out split: [3 4 5] [0 1 2 6 7 8 9]
Leave-P-groups-out split: [0 1 2] [3 4 5 6 7 8 9]
Group shuffle split: [0 1 2] [3 4 5 6 7 8 9]
Group shuffle split: [3 4 5] [0 1 2 6 7 8 9]
Group shuffle split: [3 4 5] [0 1 2 6 7 8 9]
Group shuffle split: [3 4 5] [0 1 2 6 7 8 9]
Time-series split: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38] [39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62
63 64 65 66 67 68 69 70 71 72 73 74 75]
Time-series split: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75] [ 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93
94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111
112]
Time-series split: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112] [113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130
131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148
149]
5.2 Grid Search
Grid Search is a hyperparameter-tuning technique based on exhaustive search: loop over every candidate parameter combination, try each one, and keep the combination that performs best. The principle is the same as finding the maximum value in an array.
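That analogy can be made literal. With a toy stand-in for the scoring function (hypothetical; in practice score() would train and evaluate a model), grid search is just an argmax over the Cartesian product of candidate values:

```python
from itertools import product

# Candidate values, matching the grids used below
gammas = [0.001, 0.01, 0.1, 1, 10, 100]
Cs = [0.001, 0.01, 0.1, 1, 10, 100]

def score(gamma, C):
    # Toy surrogate for model accuracy, peaked at gamma=0.1, C=10
    return -((gamma - 0.1)**2 + (C - 10)**2)

# Grid search = argmax over all (gamma, C) combinations
best = max(product(gammas, Cs), key=lambda p: score(*p))
print(best)  # (0.1, 10)
```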
5.2.1 A Simple Grid Search
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
print("Size of training set:{} size of testing set:{}".format(X_train.shape[0], X_test.shape[0]))
#### grid search start
best_score = 0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)  # train once for each parameter combination
        svm.fit(X_train, y_train)
        score = svm.score(X_test, y_test)
        if score > best_score:  # keep the best-performing parameters
            best_score = score
            best_parameters = {'gamma': gamma, 'C': C}
#### grid search end
print("Best score:{:.2f}".format(best_score))
print("Best parameters:{}".format(best_parameters))
Size of training set:112 size of testing set:38
Best score:0.97
Best parameters:{'gamma': 0.001, 'C': 100}
5.2.2 Grid Search with Cross-Validation
X_trainval, X_test, y_trainval, y_test = train_test_split(iris.data, iris.target, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_trainval, y_trainval, random_state=1)
print("Size of training set:{} size of validation set:{} size of testing set:{}".format(X_train.shape[0], X_val.shape[0], X_test.shape[0]))
best_score = 0.0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)
        svm.fit(X_train, y_train)
        score = svm.score(X_val, y_val)
        if score > best_score:
            best_score = score
            best_parameters = {'gamma': gamma, 'C': C}
svm = SVC(**best_parameters)  # build a new model with the best parameters
svm.fit(X_trainval, y_trainval)  # retrain on training + validation sets; more data usually gives better performance
test_score = svm.score(X_test, y_test)  # model evaluation
print("Best score on validation set:{:.2f}".format(best_score))
print("Best parameters:{}".format(best_parameters))
print("Best score on test set:{:.2f}".format(test_score))
Size of training set:84 size of validation set:28 size of testing set:38
Best score on validation set:0.96
Best parameters:{'gamma': 0.001, 'C': 10}
Best score on test set:0.92
from sklearn.model_selection import cross_val_score
best_score = 0.0
for gamma in [0.001, 0.01, 0.1, 1, 10, 100]:
    for C in [0.001, 0.01, 0.1, 1, 10, 100]:
        svm = SVC(gamma=gamma, C=C)
        scores = cross_val_score(svm, X_trainval, y_trainval, cv=5)  # 5-fold cross-validation
        score = scores.mean()  # take the mean score
        if score > best_score:
            best_score = score
            best_parameters = {"gamma": gamma, "C": C}
svm = SVC(**best_parameters)
svm.fit(X_trainval, y_trainval)
test_score = svm.score(X_test, y_test)
print("Best score on validation set:{:.2f}".format(best_score))
print("Best parameters:{}".format(best_parameters))
print("Score on testing set:{:.2f}".format(test_score))
Best score on validation set:0.97
Best parameters:{'gamma': 0.1, 'C': 10}
Score on testing set:0.97
Cross-validation is often combined with grid search as a way of evaluating parameters; this approach is called grid search with cross validation. sklearn provides the class GridSearchCV for exactly this purpose. It implements fit, predict, score and other methods, so it can be used like any estimator; calling fit (1) searches for the best parameters and (2) fits a new estimator with those best parameters.
from sklearn.model_selection import GridSearchCV
# List the parameters to tune and their candidate values
param_grid = {"gamma": [0.001, 0.01, 0.1, 1, 10, 100],
              "C": [0.001, 0.01, 0.1, 1, 10, 100]}
print("Parameters:{}".format(param_grid))
grid_search = GridSearchCV(SVC(), param_grid, cv=5)  # instantiate a GridSearchCV object
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=10)
grid_search.fit(X_train, y_train)  # search for the best parameters, then refit a new SVC estimator with them
print("Test set score:{:.2f}".format(grid_search.score(X_test, y_test)))
print("Best parameters:{}".format(grid_search.best_params_))
print("Best score on train set:{:.2f}".format(grid_search.best_score_))
Parameters:{'gamma': [0.001, 0.01, 0.1, 1, 10, 100], 'C': [0.001, 0.01, 0.1, 1, 10, 100]}
Test set score:0.97
Best parameters:{'C': 10, 'gamma': 0.1}
Best score on train set:0.98
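Beyond best_params_ and best_score_, the fitted GridSearchCV object exposes the refitted model as best_estimator_ and per-combination details in cv_results_. A sketch with a smaller grid (chosen here only to keep it fast):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=10)

param_grid = {"gamma": [0.01, 0.1, 1], "C": [1, 10]}  # 3 x 2 = 6 combinations
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)

# The refitted best model is available directly
best_model = grid_search.best_estimator_
print(grid_search.best_params_)
# cv_results_ holds one entry per parameter combination
print(len(grid_search.cv_results_["mean_test_score"]))  # 6
```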
5.2.3 Learning Curves
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
digits = load_digits()
X, y = digits.data, digits.target
title = "Learning Curves (Naive Bayes)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = GaussianNB()
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)
title = r"Learning Curves (SVM, RBF kernel, $\gamma=0.001$)"
# SVC is more expensive so we do a lower number of CV iterations:
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
estimator = SVC(gamma=0.001)
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
5.2.4 Validation Curves
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve
digits = load_digits()
X, y = digits.data, digits.target
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
    SVC(), X, y, param_name="gamma", param_range=param_range,
    cv=10, scoring="accuracy", n_jobs=1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with SVM")
plt.xlabel(r"$\gamma$")
plt.ylabel("Score")
plt.ylim(0.0, 1.1)
plt.semilogx(param_range, train_scores_mean, label="Training score", color="r")
plt.fill_between(param_range, train_scores_mean - train_scores_std,
                 train_scores_mean + train_scores_std, alpha=0.2, color="r")
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
             color="g")
plt.fill_between(param_range, test_scores_mean - test_scores_std,
                 test_scores_mean + test_scores_std, alpha=0.2, color="g")
plt.legend(loc="best")
plt.show()
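Besides plotting, the same validation_curve output can be reduced to a single recommended value by taking the argmax of the mean cross-validation score. A sketch (cv=3 is an arbitrary choice here, just to keep it quick):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve

digits = load_digits()
X, y = digits.data, digits.target
param_range = np.logspace(-6, -1, 5)
train_scores, test_scores = validation_curve(
    SVC(), X, y, param_name="gamma", param_range=param_range,
    cv=3, scoring="accuracy")

# Choose the gamma with the highest mean cross-validation score
mean_test = test_scores.mean(axis=1)
best_gamma = param_range[np.argmax(mean_test)]
print(best_gamma, mean_test.max())
```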
5.3 Model Validation for the Industrial Steam Competition
5.3.1 Model Overfitting and Underfitting
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
from sklearn.linear_model import LinearRegression  # linear regression
from sklearn.neighbors import KNeighborsRegressor  # K-nearest-neighbors regression
from sklearn.tree import DecisionTreeRegressor  # decision-tree regression
from sklearn.ensemble import RandomForestRegressor  # random-forest regression
from sklearn.svm import SVR  # support vector regression
import lightgbm as lgb  # LightGBM model
from sklearn.model_selection import train_test_split  # data splitting
from sklearn.metrics import mean_squared_error  # evaluation metric
from sklearn.linear_model import SGDRegressor
# Download the datasets used below
!wget http://tianchi-media.oss-cn-beijing.aliyuncs.com/DSW/Industrial_Steam_Forecast/zhengqi_test.txt
!wget http://tianchi-media.oss-cn-beijing.aliyuncs.com/DSW/Industrial_Steam_Forecast/zhengqi_train.txt