Data Mining for Beginners, Financial Risk Control (Part 4): Logistic Regression in Practice

1. Preface

While revisiting my project I realized that my understanding of the underlying concepts was not thorough, so I want to connect theory with practice and deepen my grasp of the basics. The project comes from the Alibaba Tianchi learning competition "零基础入门金融风控-贷款违约预测" (introductory financial risk control: loan default prediction); interested readers can look up the original competition for details.

Pros and cons of logistic regression:

  • Pros

    • Training is fast, and at prediction time the amount of computation depends only on the number of features;
    • The model is simple, easy to understand and highly interpretable: the feature weights show how each feature influences the final result (see the sketch after this list);
    • It is well suited to binary classification and does not require the input features to be scaled;
    • It has a small memory footprint, since only the feature values of each dimension need to be stored;
  • Cons

    • Missing values and outliers must be handled before fitting a logistic regression;
    • Logistic regression cannot solve non-linear problems, because its decision boundary is linear;
    • It is sensitive to multicollinearity and has difficulty with imbalanced data;
    • Its accuracy is often not very high, because the model form is very simple and struggles to fit the true distribution of the data;
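
A minimal sketch of the interpretability point above, on a tiny made-up dataset (the two columns mimic features from this dataset, but the values and labels are invented):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: two hypothetical features and a binary default label
X = pd.DataFrame({'interestRate': [7.5, 15.2, 10.1, 22.3, 6.8, 18.9],
                  'dti': [10.2, 25.1, 14.0, 30.5, 8.7, 27.9]})
y = np.array([0, 1, 0, 1, 0, 1])

lr = LogisticRegression().fit(X, y)
# Each coefficient is the change in the log-odds of the positive class
# per unit increase of the corresponding feature
for name, w in zip(X.columns, lr.coef_[0]):
    print(name, w)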

2. Feature Engineering

① Handling the time feature ['issueDate']:

  • Convert ['issueDate'] into ['issueDateDT'], the time difference from 2007-06-01 measured in days.
  • Convert ['issueDate'] into ['issueDateM'], the month in which the loan was issued, which is then one-hot encoded.
  • Drop ['issueDate'] itself.

② Convert ['employmentLength'] into a plain number representing the years of employment.

③ Take the last 4 characters of ['earliesCreditLine'] as the year.

④ Encode ['grade'] and ['subGrade'] with custom mappings.

⑤ Drop some features:

  • ['postCode']: the first three digits of the postal code; essentially a categorical variable representing the region, which overlaps with regionCode.
  • ['employmentTitle']: the job title; clearly a categorical variable, but it has 248683 distinct values and would really need binning, so here it is simply dropped.
  • ['title']: the loan title provided by the borrower; it should be an important variable, but its numeric codes do not look linear, so it is treated the same way as the previous item.
  • ['policyCode']: takes only a single value, so it carries no information.
  • ['id']: carries no information.

⑥ One-hot encoding of the following features:

  • ['verificationStatus']
  • ['issueDateM']
  • ['purpose']
  • ['regionCode']

import numpy as np
import pandas as pd
import datetime
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

# Load the data
data_train = pd.read_csv('D:/myP/financial_risk/train.csv')
data_testA = pd.read_csv('D:/myP/financial_risk/testA.csv')

# Convert ['issueDate'] into the number of days since '2007-06-01'
for data in [data_train, data_testA]:
    data['issueDate'] = pd.to_datetime(data['issueDate'],format='%Y-%m-%d')
    startdate = datetime.datetime.strptime('2007-06-01', '%Y-%m-%d')
    data['issueDateDT'] = data['issueDate'].apply(lambda x: x-startdate).dt.days

# Convert ['issueDate'] into the month of the loan application
for data in [data_train, data_testA]:
    data['issueDate'] = pd.to_datetime(data['issueDate'],format='%Y-%m-%d')
    data['issueDateM'] = data['issueDate'].dt.month

# Convert employmentLength into a plain number of years of employment
def employmentLength_to_int(s):
    if pd.isnull(s):
        return s
    else:
        return np.int8(s.split()[0])
for data in [data_train, data_testA]:
    data['employmentLength'].replace(to_replace='10+ years', value='10 years', inplace=True)
    data['employmentLength'].replace('< 1 year', '0 years', inplace=True)
    data['employmentLength'] = data['employmentLength'].apply(employmentLength_to_int)

# Take the last 4 characters of ['earliesCreditLine'] as the year
for data in [data_train, data_testA]:
    data['earliesCreditLine'] = data['earliesCreditLine'].apply(lambda s: int(s[-4:]))
# data['earliesCreditLine'].value_counts(dropna=False).sort_index()  # optional check of the extracted years

# Encode ['grade'] and ['subGrade'] with custom mappings
for data in [data_train, data_testA]:
    data['grade'] = data['grade'].map({'A':1,'B':2,'C':3,'D':4,'E':5,'F':6,'G':7})
    data['subGrade'] = data['subGrade'].map({'A1':1,'A2':2,'A3':3,'A4':4,'A5':5,'B1':6,'B2':7,'B3':8,'B4':9,'B5':10,'C1':11,'C2':12,'C3':13,'C4':14,'C5':15,'D1':16,'D2':17,'D3':18,'D4':19,'D5':20, 'E1':21,'E2':22,'E3':23,'E4':24,'E5':25, 'F1':26,'F2':27,'F3':28,'F4':29,'F5':30, 'G1':31,'G2':32,'G3':33,'G4':34,'G5':35})

# Drop features that are no longer needed
delFea = ['postCode', 'employmentTitle','policyCode','id','issueDate']
for i in delFea:
    data_train.drop(i,axis = 1,inplace = True)
    data_testA.drop(i,axis = 1,inplace = True)

# One-hot encode 'term', 'verificationStatus', 'purpose', 'regionCode' and 'issueDateM'
hot_features = ['term','verificationStatus','purpose','regionCode','issueDateM']
data_train = pd.get_dummies(data_train, columns=hot_features)
data_testA = pd.get_dummies(data_testA, columns=hot_features)
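
One caveat with the step above: calling pd.get_dummies on the training and test sets separately can leave them with different dummy columns (for example when some regionCode value occurs in only one of the two sets). A minimal sketch of one way to align them, not part of the original pipeline, assuming 'isDefault' is the label column of data_train:

# Align the test set's columns with the training features;
# dummy columns that are missing from the test set are filled with 0.
feature_cols = data_train.columns.drop('isDefault')
data_testA = data_testA.reindex(columns=feature_cols, fill_value=0)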

3. Building the Logistic Regression Model

3.1 Dropping Missing Values Directly

def LR():
    train = data_train.dropna()
    x_train, x_vali, y_train, y_vali = train_test_split(train.drop('isDefault', axis = 1), train['isDefault'], test_size=0.25)
    
    mm = MinMaxScaler(feature_range=(0,1))
    x_train = mm.fit_transform(x_train)
    x_vali = mm.transform(x_vali)
    
    lr = LogisticRegression(penalty='l2', C = 0.5, solver='liblinear')
    lr.fit(x_train, y_train)
    y_pred = lr.predict(x_vali)
    return y_pred, y_vali


y_pred, y_vali = LR()
print('Accuracy:', accuracy_score(y_pred, y_vali))
print('Precision:', precision_score(y_pred, y_vali))
print('Recall:', recall_score(y_pred, y_vali))
print('AUC score:', roc_auc_score(y_pred, y_vali))

Accuracy: 0.8053850503354727
Precision: 0.08817331948360291
Recall: 0.5274276584413279
AUC score: 0.6711248082703036
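
A side note on the AUC above: it is computed from the hard 0/1 predictions, while AUC is normally computed from the predicted probability of the positive class. A minimal sketch, assuming LR() were changed to also return the fitted lr and the scaled x_vali:

# Score AUC with predicted probabilities instead of hard labels
y_score = lr.predict_proba(x_vali)[:, 1]   # probability of the positive class (isDefault = 1)
print('AUC score:', roc_auc_score(y_vali, y_score))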

3.2 Handling Missing Values with Interpolation

import numpy as np
import pandas as pd
import datetime
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score

data_train = pd.read_csv('D:/myP/financial_risk/trainforclass.csv')

features = ['n10','n4','n12','n9','n7','n6','n3','n13','n2','n1','n0','n5','n14','n8','employmentLength','n11']
for f in features: 
    data_train[f] = data_train[f].interpolate()
    
def LR():
    train = data_train.dropna()
    x_train, x_vali, y_train, y_vali = train_test_split(train.drop('isDefault', axis = 1), train['isDefault'], test_size=0.25)
    
    std = StandardScaler()
    x_train = std.fit_transform(x_train)
    x_vali = std.transform(x_vali)
    
    lr = LogisticRegression(penalty='l2', C = 0.5, solver='saga')
    lr.fit(x_train, y_train)
    y_pred = lr.predict(x_vali)
    return y_pred, y_vali

y_pred, y_vali = LR()
print('Accuracy:', accuracy_score(y_pred, y_vali))
print('Precision:', precision_score(y_pred, y_vali))
print('Recall:', recall_score(y_pred, y_vali))
print('AUC score:', roc_auc_score(y_pred, y_vali))

Accuracy: 0.8010720221458613
Precision: 0.08990942654519377
Recall: 0.5334909377462569
AUC score: 0.6719773001800352

3.3 Removing Outliers

3.3.1 The 3-Sigma Method

# 1. Flag outliers
def find_outliers_by_3sigma(data, fea):  # adds a column fea+'_outliers' marking each value as 'outlier' or 'normal'
    data_std = np.std(data[fea])
    data_mean = np.mean(data[fea])
    outliers_cut_off = data_std * 3
    lower_rule = data_mean - outliers_cut_off
    upper_rule = data_mean + outliers_cut_off
    data[fea+'_outliers'] = data[fea].apply(lambda x: 'outlier' if x > upper_rule or x < lower_rule else 'normal')
    return data
# 2. Examine the relationship between the outlier flags and the target variable
numerical_fea = ['interestRate','installment','dti','delinquency_2years','ficoRangeLow','ficoRangeHigh','openAcc','pubRec','pubRecBankruptcies','revolBal','revolUtil','totalAcc']
for fea in numerical_fea:
    data_train = find_outliers_by_3sigma(data_train, fea)
#     print(data_train[fea+'_outliers'].value_counts())
#     print(data_train.groupby(fea+'_outliers')['isDefault'].sum())
# print('*'*10)
# 3. Drop the rows flagged as outliers
for fea in numerical_fea:
    data_train = data_train[data_train[fea+'_outliers']=='normal']
data_train = data_train.reset_index(drop=True)  # reset the index
# 4. Drop the helper columns that were used to flag outliers
out_features = [f for f in data_train.columns if '_outliers' in f]
data_train = data_train.drop(out_features, axis = 1)

Accuracy: 0.8093603379689417
Precision: 0.08324640802180423
Recall: 0.5371254884932697
AUC score: 0.6774368570694803

3.3.2 The Boxplot (IQR) Method

def find_outliers_by_boxplot(data, fea):
    data_q1 = data[fea].quantile(0.25)
    data_q3 = data[fea].quantile(0.75)
    iqr = data_q3 - data_q1
    lower_rule = data_q1 - 1.5*iqr
    upper_rule = data_q3 + 1.5*iqr
    data[fea+'_outliers'] = data[fea].apply(lambda x: 'outlier' if x > upper_rule or x < lower_rule else 'normal')
    return data
# 2. Examine the relationship between the outlier flags and the target variable
numerical_fea = ['interestRate','installment','dti','delinquency_2years','ficoRangeLow','ficoRangeHigh','openAcc','pubRec','pubRecBankruptcies','revolBal','revolUtil','totalAcc']
for fea in numerical_fea:
    data_train = find_outliers_by_boxplot(data_train, fea)
#     print(data_train[fea+'_outliers'].value_counts())
#     print(data_train.groupby(fea+'_outliers')['isDefault'].sum())
# print('*'*10)
# 3. Drop the rows flagged as outliers
for fea in numerical_fea:
    data_train = data_train[data_train[fea+'_outliers']=='normal']
data_train = data_train.reset_index(drop=True)  # reset the index
# 4. Drop the helper columns that were used to flag outliers
out_features = [f for f in data_train.columns if '_outliers' in f]
data_train = data_train.drop(out_features, axis = 1)

Accuracy: 0.8139947622448759
Precision: 0.06993249091223819
Recall: 0.5313459009206488
AUC score: 0.676247152956857

The solver parameter of the LR model deserves attention (for this and the other parameters see knowledgedict):

solver is an argument of the LogisticRegression constructor that specifies the optimization algorithm used to minimize the logistic regression loss. The options are:

  • newton-cg: a member of the Newton family of methods; it uses the matrix of second derivatives of the loss function, i.e. the Hessian, to iteratively optimize the loss.
  • lbfgs: a quasi-Newton method; it iteratively optimizes the loss using an approximation of the Hessian (the matrix of second derivatives of the loss function).
  • liblinear: implemented with the open-source liblinear library, which internally uses coordinate descent to optimize the loss.
  • sag: stochastic average gradient descent, a variant of gradient descent; unlike plain gradient descent, each iteration uses only a subset of the samples to compute the gradient, which makes it well suited to large numbers of samples. It was proposed in the 2013 INRIA technical report by Mark Schmidt, Nicolas Le Roux and Francis Bach, "Minimizing Finite Sums with the Stochastic Average Gradient".
  • saga: an improved, unbiased variant of sag, introduced in the 2014 paper by Aaron Defazio, Francis Bach and Simon Lacoste-Julien, "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives".

For small datasets, 'liblinear' is a good choice, while 'sag' and 'saga' train faster on large datasets.

For multi-class problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' can handle the multinomial loss; 'liblinear' handles multi-class classification in a one-vs-rest fashion: it treats one class as positive and all remaining classes as negative, and repeats this for every class.

'newton-cg', 'lbfgs', 'sag' and 'saga' support the penalty options l2 and none.

'liblinear' and 'saga' support the l1 penalty.

'saga' also supports the elasticnet penalty.
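
A minimal sketch of these solver/penalty combinations (the parameter values are arbitrary and only meant to illustrate which pairs are accepted):

from sklearn.linear_model import LogisticRegression

# l1 penalty requires 'liblinear' or 'saga'
lr_l1 = LogisticRegression(penalty='l1', C=0.5, solver='saga')
# elasticnet is only supported by 'saga' and additionally needs l1_ratio
lr_en = LogisticRegression(penalty='elasticnet', C=0.5, l1_ratio=0.5, solver='saga')
# 'newton-cg', 'lbfgs', 'sag' and 'saga' accept l2 (or no penalty)
lr_l2 = LogisticRegression(penalty='l2', C=0.5, solver='lbfgs')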

The model performs quite poorly: accuracy is about 80% but precision is only around 9%, which means that a large number of loans that should have been rejected were approved. The main reasons are probably: ① I did not handle ['employmentTitle'] (the job title) and ['title'] (the loan title provided by the borrower) well; they are categorical variables that ended up being treated as plain numbers. With a better encoding they might be decomposed more usefully (a large number of categories contain only a single sample; one option would be to keep the 10 most frequent categories and group the rest into an 'other' bucket, as sketched below, which could also improve the classification performance considerably); ② logistic regression itself may simply not be suited to a non-linear problem like this one.
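
A minimal sketch of that grouping idea, keeping the 10 most frequent values of a high-cardinality categorical column and mapping everything else to 'other' before one-hot encoding (the column 'title' and the cutoff of 10 are only an example):

# Keep the 10 most frequent categories; lump the rest into 'other'
top10 = data_train['title'].value_counts().nlargest(10).index
for data in [data_train, data_testA]:
    data['title'] = data['title'].where(data['title'].isin(top10), other='other')
# The reduced column can then be one-hot encoded like the other categorical features
data_train = pd.get_dummies(data_train, columns=['title'])
data_testA = pd.get_dummies(data_testA, columns=['title'])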
