This example shows how to use locally interpretable model-agnostic explanations (LIME) to understand the predictions of a deep neural network classifying tabular data. You can use the LIME technique to determine which predictors are most important to the classification decision of the network.
In this example, you interpret a feature-data classification network using LIME. For a specified query observation, LIME generates a synthetic data set whose per-feature statistics match those of the real data set. The synthetic data set is passed through the deep neural network to obtain classifications, and a simple, interpretable model is fitted to the results. This simple model can be used to understand the importance of the top few features to the classification decision of the network. When the interpretable model is trained, the synthetic observations are weighted by their distance from the query observation, so the explanation is "local" to that observation.
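The procedure above can be sketched end to end. This is a minimal illustration in Python rather than the toolbox implementation; the Gaussian sampling scheme, the squared-exponential kernel, and all function and variable names here are assumptions made for illustration:

```python
import numpy as np

def lime_explain(blackbox, X, query, kernel_width=0.1, num_samples=1000, seed=None):
    """Fit a local weighted linear model around `query`.

    blackbox: function mapping an (n, d) array to scores of shape (n,).
    X: real data set, used only for its per-feature statistics.
    Returns the linear coefficients, interpreted as feature importances.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    # Synthetic data set whose per-feature statistics match the real data.
    synth = rng.normal(mu, sigma, size=(num_samples, X.shape[1]))
    # Label the synthetic points with the black-box model.
    y = blackbox(synth)
    # Weight samples by distance to the query: nearby points dominate,
    # so the fitted explanation is local to the query observation.
    dist = np.linalg.norm((synth - query) / sigma, axis=1)
    w = np.exp(-(dist / kernel_width) ** 2)
    # Weighted least squares for an interpretable linear surrogate model.
    sw = np.sqrt(w)[:, None]
    A = np.hstack([np.ones((num_samples, 1)), synth]) * sw
    coef, *_ = np.linalg.lstsq(A, y * sw.ravel(), rcond=None)
    return coef[1:]  # drop the intercept
```

If the black box is itself linear, the surrogate recovers its coefficients exactly, which is a useful sanity check for this kind of sketch.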
This example uses the lime (Statistics and Machine Learning Toolbox) and fit (Statistics and Machine Learning Toolbox) functions to generate a synthetic data set and fit a simple interpretable model to it. To understand the predictions of a trained image classification neural network, use imageLIME instead. For more information, see Understand Network Predictions Using LIME.
Load the Fisher iris data set. This data set contains 150 observations, each with four input features representing measurements of the plant and one categorical response representing the plant species. Each observation belongs to one of three species: setosa, versicolor, or virginica. The four measurements are sepal length, sepal width, petal length, and petal width.
filename = fullfile(toolboxdir("stats"),"statsdemos","fisheriris.mat");
load(filename)
Convert the numeric data to a table.
features = ["Sepal length","Sepal width","Petal length","Petal width"];
predictors = array2table(meas,"VariableNames",features);
trueLabels = array2table(categorical(species),"VariableNames","Response");
Create a table of training data whose final column is the response.
data = [predictors trueLabels];
Calculate the number of observations, features, and classes.
numObservations = size(predictors,1);
numFeatures = size(predictors,2);
numClasses = length(categories(data{:,5}));
Partition the data set into training, validation, and test sets. Set aside 15% of the data for validation and 15% for testing.

Determine the number of observations for each partition. Set the random seed so that the data split and the CPU training are reproducible.
rng("default");
numObservationsTrain = floor(0.7*numObservations);
numObservationsValidation = floor(0.15*numObservations);
Create an array of random indices corresponding to the observations and partition it using the partition sizes.
idx = randperm(numObservations);
idxTrain = idx(1:numObservationsTrain);
idxValidation = idx(numObservationsTrain+1:numObservationsTrain+numObservationsValidation);
idxTest = idx(numObservationsTrain+numObservationsValidation+1:end);
Partition the table of data into training, validation, and testing partitions using the indices.
dataTrain = data(idxTrain,:);
dataVal = data(idxValidation,:);
dataTest = data(idxTest,:);
Create a simple multilayer perceptron with a single hidden layer of five neurons and ReLU activations. The feature input layer accepts data containing numeric scalars representing features, such as the Fisher iris data set.
numHiddenUnits = 5;

layers = [
    featureInputLayer(numFeatures)
    fullyConnectedLayer(numHiddenUnits)
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
Train the network using stochastic gradient descent with momentum (SGDM). Set the maximum number of epochs to 30 and use a mini-batch size of 15, because the training data does not contain many observations.
options = trainingOptions("sgdm", ...
    "MaxEpochs",30, ...
    "MiniBatchSize",15, ...
    "Shuffle","every-epoch", ...
    "ValidationData",dataVal, ...
    "ExecutionEnvironment","cpu");
Train the network.
net = trainNetwork(dataTrain,layers,options);
|=================================================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Validation  |  Mini-batch  |  Validation  |  Base Learning  |
|         |             |   (hh:mm:ss)   |   Accuracy   |   Accuracy   |     Loss     |     Loss     |      Rate       |
|=================================================================================================================|
|       1 |           1 |       00:00:00 |       40.00% |       31.82% |       1.3060 |       1.2897 |          0.0100 |
|       8 |          50 |       00:00:00 |       86.67% |       90.91% |       0.4223 |       0.3656 |          0.0100 |
|      15 |         100 |       00:00:00 |       93.33% |       86.36% |       0.2947 |       0.2927 |          0.0100 |
|      22 |         150 |       00:00:00 |       86.67% |       81.82% |       0.2804 |       0.3707 |          0.0100 |
|      29 |         200 |       00:00:01 |       86.67% |       90.91% |       0.2268 |       0.2129 |          0.0100 |
|      30 |         210 |       00:00:01 |       93.33% |       95.45% |       0.2782 |       0.1666 |          0.0100 |
|=================================================================================================================|
Classify the observations of the test set using the trained network.
predictedLabels = net.classify(dataTest);
trueLabels = dataTest{:,end};
Visualize the results using a confusion matrix.
figure
confusionchart(trueLabels,predictedLabels)
The network successfully uses the four plant features to predict the species of the test observations.
Use LIME to understand the importance of each predictor to the classification decisions of the network.
Investigate the two most important predictors for each observation.
numImportantPredictors = 2;
Use lime to create a synthetic data set whose per-feature statistics match those of the real data set. Create a lime object using the deep learning model blackbox and the predictors contained in predictors. Use a low "KernelWidth" value so that lime uses weights focused on the samples nearest the query point.
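A small kernel width concentrates the weights on synthetic samples close to the query point. As a rough illustration in Python (the squared-exponential kernel form here is an assumption made for illustration, not necessarily the exact expression lime uses):

```python
import numpy as np

def kernel_weight(dist, kernel_width):
    # Squared-exponential weight: decays with squared distance,
    # and decays faster when the kernel width is smaller.
    return np.exp(-(dist / kernel_width) ** 2)

dists = np.array([0.05, 0.5, 2.0])   # distances from the query point
near = kernel_weight(dists, 0.1)     # narrow kernel: only the closest point matters
wide = kernel_weight(dists, 1.0)     # wide kernel: farther samples still contribute
```

With a kernel width of 0.1, the weight at distance 0.5 is already exp(-25), roughly 1e-11, so the linear surrogate is fitted essentially only to samples very near the query observation.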
blackbox = @(x) classify(net,x);
explainer = lime(blackbox,predictors,"Type","classification","KernelWidth",0.1);
You can use the lime explainer to understand which features are most important to the deep neural network. The fit function estimates feature importance by fitting a simple linear model that approximates the neural network in the vicinity of a query observation.
Find the indices of the first two observations in the test data that correspond to the setosa class.
trueLabelsTest = dataTest{:,end};
label = "setosa";
idxSetosa = find(trueLabelsTest == label,2);
Use the fit function to fit a simple linear model for the first two observations from the specified class.
explainerObs1 = fit(explainer,dataTest(idxSetosa(1),1:4),numImportantPredictors);
explainerObs2 = fit(explainer,dataTest(idxSetosa(2),1:4),numImportantPredictors);
Plot the results.
figure
subplot(2,1,1)
plot(explainerObs1);
subplot(2,1,2)
plot(explainerObs2);
For the setosa class, the most important predictors are a low petal length value and a high sepal width value.
Perform the same analysis for the versicolor class.
label = "versicolor";
idxVersicolor = find(trueLabelsTest == label,2);

explainerObs1 = fit(explainer,dataTest(idxVersicolor(1),1:4),numImportantPredictors);
explainerObs2 = fit(explainer,dataTest(idxVersicolor(2),1:4),numImportantPredictors);

figure
subplot(2,1,1)
plot(explainerObs1);
subplot(2,1,2)
plot(explainerObs2);
For the versicolor class, a high petal length value is important.
Finally, consider the virginica class.
label = "virginica";
idxVirginica = find(trueLabelsTest == label,2);

explainerObs1 = fit(explainer,dataTest(idxVirginica(1),1:4),numImportantPredictors);
explainerObs2 = fit(explainer,dataTest(idxVirginica(2),1:4),numImportantPredictors);

figure
subplot(2,1,1)
plot(explainerObs1);
subplot(2,1,2)
plot(explainerObs2);
For the virginica class, a high petal length value and a low sepal width value are important.
The LIME results indicate that a high petal length value is associated with the versicolor and virginica classes, and a low petal length value is associated with the setosa class. You can investigate these results further by exploring the data.
Plot the petal length of each observation in the data set.
setosaIdx = ismember(data{:,end},"setosa");
versicolorIdx = ismember(data{:,end},"versicolor");
virginicaIdx = ismember(data{:,end},"virginica");

figure
hold on
plot(data{setosaIdx,"Petal length"},".")
plot(data{versicolorIdx,"Petal length"},".")
plot(data{virginicaIdx,"Petal length"},".")
hold off

xlabel("Observation")
ylabel("Petal length")
legend(["setosa","versicolor","virginica"])
The setosa class has much lower petal length values than the other classes, matching the results of the lime model.
fit (Statistics and Machine Learning Toolbox) | lime (Statistics and Machine Learning Toolbox) | trainNetwork | classify | featureInputLayer | imageLIME