trainAutoencoder
Train an autoencoder
Syntax

autoenc = trainAutoencoder(X)
autoenc = trainAutoencoder(X,hiddenSize)

Description

autoenc = trainAutoencoder(X) returns an autoencoder, autoenc, trained using the training data in X.

autoenc = trainAutoencoder(X,hiddenSize) returns an autoencoder, autoenc, with the hidden representation size of hiddenSize.
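For instance, a minimal sketch of both calling forms, using the abalone_dataset sample data from the examples below:

X = abalone_dataset;              % 8-by-4177 matrix; each column is a sample
autoenc1 = trainAutoencoder(X);   % default hidden representation size (10)
autoenc2 = trainAutoencoder(X,4); % hidden representation of size 4
XRec = predict(autoenc2,X);       % reconstruct the training samples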
Examples
Train Sparse Autoencoder

Load the sample data.

X = abalone_dataset;

X is an 8-by-4177 matrix defining eight attributes for 4177 different abalone shells: sex (M, F, and I (for infant)), length, diameter, height, whole weight, shucked weight, viscera weight, and shell weight. For more information on the dataset, type help abalone_dataset in the command line.

Train a sparse autoencoder with default settings.

autoenc = trainAutoencoder(X);

Reconstruct the abalone shell ring data using the trained autoencoder.

XReconstructed = predict(autoenc,X);

Compute the mean squared reconstruction error.

mseError = mse(X-XReconstructed)

mseError = 0.0167
Train Autoencoder with Specified Options

Load the sample data.

X = abalone_dataset;

X is an 8-by-4177 matrix defining eight attributes for 4177 different abalone shells: sex (M, F, and I (for infant)), length, diameter, height, whole weight, shucked weight, viscera weight, and shell weight. For more information on the dataset, type help abalone_dataset in the command line.

Train a sparse autoencoder with hidden size 4, 400 maximum epochs, and linear transfer function for the decoder.

autoenc = trainAutoencoder(X,4,'MaxEpochs',400,...
    'DecoderTransferFunction','purelin');

Reconstruct the abalone shell ring data using the trained autoencoder.

XReconstructed = predict(autoenc,X);

Compute the mean squared reconstruction error.

mseError = mse(X-XReconstructed)

mseError = 0.0044
Reconstruct Observations Using Sparse Autoencoder

Generate the training data.

rng(0,'twister'); % For reproducibility
n = 1000;
r = linspace(-10,10,n)';
x = 1 + r*5e-2 + sin(r)./r + 0.2*randn(n,1);

Train an autoencoder using the training data.

hiddenSize = 25;
autoenc = trainAutoencoder(x',hiddenSize,...
    'EncoderTransferFunction','satlin',...
    'DecoderTransferFunction','purelin',...
    'L2WeightRegularization',0.01,...
    'SparsityRegularization',4,...
    'SparsityProportion',0.10);

Generate the test data.

n = 1000;
r = sort(-10 + 20*rand(n,1));
xtest = 1 + r*5e-2 + sin(r)./r + 0.4*randn(n,1);

Predict the test data using the trained autoencoder, autoenc.

xReconstructed = predict(autoenc,xtest');

Plot the actual test data and the predictions.

figure;
plot(xtest,'r.');
hold on
plot(xReconstructed,'go');
Reconstruct Handwritten Digit Images Using Sparse Autoencoder

Load the training data.

XTrain = digitTrainCellArrayData;

The training data is a 1-by-5000 cell array, where each cell contains a 28-by-28 matrix representing a synthetic image of a handwritten digit.

Train an autoencoder with a hidden layer containing 25 neurons.

hiddenSize = 25;
autoenc = trainAutoencoder(XTrain,hiddenSize,...
    'L2WeightRegularization',0.004,...
    'SparsityRegularization',4,...
    'SparsityProportion',0.15);

Load the test data.

XTest = digitTestCellArrayData;

The test data is a 1-by-5000 cell array, with each cell containing a 28-by-28 matrix representing a synthetic image of a handwritten digit.

Reconstruct the test image data using the trained autoencoder, autoenc.

xReconstructed = predict(autoenc,XTest);

View the actual test data.

figure;
for i = 1:20
    subplot(4,5,i);
    imshow(XTest{i});
end

View the reconstructed test data.

figure;
for i = 1:20
    subplot(4,5,i);
    imshow(xReconstructed{i});
end
Input Arguments
X - Training data
matrix | cell array of image data

Training data, specified as a matrix of training samples or a cell array of image data. If X is a matrix, then each column contains a single sample. If X is a cell array of image data, then the data in each cell must have the same number of dimensions. The image data can be pixel intensity data for grayscale images, in which case each cell contains an m-by-n matrix. Alternatively, the image data can be RGB data, in which case each cell contains an m-by-n-by-3 matrix.

Data Types: single | double | cell
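As an illustration of the two accepted forms, the following sketch trains on a matrix and on a hypothetical cell array of same-size grayscale images (the random data here only stands in for real samples):

% Matrix input: each of the 100 columns is one 8-dimensional sample.
Xmat = rand(8,100);
autoencMat = trainAutoencoder(Xmat,4);

% Cell-array input: each cell holds one m-by-n grayscale image.
Ximg = cell(1,50);
for k = 1:50
    Ximg{k} = rand(28,28);        % hypothetical 28-by-28 intensity images
end
autoencImg = trainAutoencoder(Ximg,25);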
hiddenSize - Size of hidden representation of the autoencoder
10 (default) | positive integer value

Size of hidden representation of the autoencoder, specified as a positive integer value. This number is the number of neurons in the hidden layer.

Data Types: single | double
Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'EncoderTransferFunction','satlin','L2WeightRegularization',0.05 specifies the transfer function for the encoder as the positive saturating linear transfer function and the L2 weight regularization as 0.05.
EncoderTransferFunction - Transfer function for the encoder
'logsig' (default) | 'satlin'

Transfer function for the encoder, specified as the comma-separated pair consisting of 'EncoderTransferFunction' and one of the following.

Transfer Function Option    Definition
'logsig'                    Logistic sigmoid function
'satlin'                    Positive saturating linear transfer function

Example: 'EncoderTransferFunction','satlin'
DecoderTransferFunction - Transfer function for the decoder
'logsig' (default) | 'satlin' | 'purelin'

Transfer function for the decoder, specified as the comma-separated pair consisting of 'DecoderTransferFunction' and one of the following.

Transfer Function Option    Definition
'logsig'                    Logistic sigmoid function
'satlin'                    Positive saturating linear transfer function
'purelin'                   Linear transfer function

Example: 'DecoderTransferFunction','purelin'
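To compare the output ranges of the three options, you can evaluate the transfer functions directly; logsig, satlin, and purelin are the same functions the toolbox applies internally. A quick sketch:

z = linspace(-3,3,601);
plot(z,logsig(z),z,satlin(z),z,purelin(z));
legend('logsig','satlin','purelin','Location','northwest');
xlabel('z'); ylabel('h(z)');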
MaxEpochs - Maximum number of training epochs
1000 (default) | positive integer value

Maximum number of training epochs or iterations, specified as the comma-separated pair consisting of 'MaxEpochs' and a positive integer value.

Example: 'MaxEpochs',1200
L2WeightRegularization - Coefficient for the L2 weight regularizer
0.001 (default) | a positive scalar value

Coefficient for the L2 weight regularizer in the cost function (LossFunction), specified as the comma-separated pair consisting of 'L2WeightRegularization' and a positive scalar value.

Example: 'L2WeightRegularization',0.05
LossFunction - Loss function to use for training
'msesparse' (default)

Loss function to use for training, specified as the comma-separated pair consisting of 'LossFunction' and 'msesparse'. It corresponds to the mean squared error function adjusted for training a sparse autoencoder as follows:

E = (1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} (x_{kn} − x̂_{kn})² + λ·Ω_weights + β·Ω_sparsity

where λ is the coefficient for the L2 regularization term and β is the coefficient for the sparsity regularization term. You can specify the values of λ and β by using the L2WeightRegularization and SparsityRegularization name-value pair arguments, respectively, while training an autoencoder.
ShowProgressWindow - Indicator to show the training window
true (default) | false

Indicator to show the training window, specified as the comma-separated pair consisting of 'ShowProgressWindow' and either true or false.

Example: 'ShowProgressWindow',false
SparsityProportion - Desired proportion of training examples a neuron reacts to
0.05 (default) | positive scalar value in the range from 0 to 1

Desired proportion of training examples a neuron reacts to, specified as the comma-separated pair consisting of 'SparsityProportion' and a positive scalar value. Sparsity proportion is a parameter of the sparsity regularizer. It controls the sparsity of the output from the hidden layer. A low value for SparsityProportion usually leads to each neuron in the hidden layer "specializing" by only giving a high output for a small number of training examples. Hence, a low sparsity proportion encourages a higher degree of sparsity. See Sparse Autoencoders.

Example: 'SparsityProportion',0.01 is equivalent to saying that each neuron in the hidden layer should have an average output of 0.01 over the training examples.
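The effect is easiest to see by comparing average hidden activations for two settings; a sketch, assuming the abalone_dataset sample data:

rng(0,'twister');   % for reproducibility
X = abalone_dataset;
lowProp  = trainAutoencoder(X,25,'SparsityProportion',0.05,...
    'SparsityRegularization',4);
highProp = trainAutoencoder(X,25,'SparsityProportion',0.5,...
    'SparsityRegularization',4);
mean(encode(lowProp,X),2)    % averages pulled toward 0.05
mean(encode(highProp,X),2)   % averages pulled toward 0.5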
SparsityRegularization - Coefficient that controls the impact of the sparsity regularizer
1 (default) | a positive scalar value

Coefficient that controls the impact of the sparsity regularizer in the cost function, specified as the comma-separated pair consisting of 'SparsityRegularization' and a positive scalar value.

Example: 'SparsityRegularization',1.6
TrainingAlgorithm - The algorithm to use for training the autoencoder
'trainscg' (default)

The algorithm to use for training the autoencoder, specified as the comma-separated pair consisting of 'TrainingAlgorithm' and 'trainscg'. It stands for scaled conjugate gradient descent [1].
ScaleData - Indicator to rescale the input data
true (default) | false

Indicator to rescale the input data, specified as the comma-separated pair consisting of 'ScaleData' and either true or false.

Autoencoders attempt to replicate their input at their output. For it to be possible, the range of the input data must match the range of the transfer function for the decoder. trainAutoencoder automatically scales the training data to this range when training an autoencoder. If the data was scaled while training an autoencoder, the predict, encode, and decode methods also scale the data.

Example: 'ScaleData',false
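If you disable the automatic scaling, the data must already lie in the decoder's output range. A sketch of one way to do this with mapminmax, assuming the default 'logsig' decoder whose outputs lie in [0,1]:

X = abalone_dataset;
[Xs,ps] = mapminmax(X,0,1);               % map each row of X into [0,1]
autoenc = trainAutoencoder(Xs,4,'ScaleData',false);
XsRec = predict(autoenc,Xs);              % reconstructions in scaled units
XRec  = mapminmax('reverse',XsRec,ps);    % undo the scaling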
UseGPU - Indicator to use GPU for training
false (default) | true

Indicator to use GPU for training, specified as the comma-separated pair consisting of 'UseGPU' and either true or false.

Example: 'UseGPU',true
Output Arguments

autoenc - Trained autoencoder
Autoencoder object

Trained autoencoder, returned as an Autoencoder object.
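The returned object works with the other Autoencoder functions, for example encode, decode, and view:

autoenc = trainAutoencoder(abalone_dataset,4);
Z    = encode(autoenc,abalone_dataset);   % 4-by-4177 hidden representation
XRec = decode(autoenc,Z);                 % reconstruct from the code
view(autoenc)                             % diagram of the trained network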
More About
Autoencoders

An autoencoder is a neural network which is trained to replicate its input at its output. Autoencoders can be used as tools to learn deep neural networks. Training an autoencoder is unsupervised in the sense that no labeled data is needed. The training process is still based on the optimization of a cost function. The cost function measures the error between the input x and its reconstruction at the output x̂.

An autoencoder is composed of an encoder and a decoder. The encoder and decoder can have multiple layers, but for simplicity consider that each of them has only one layer.

If the input to an autoencoder is a vector x, then the encoder maps the vector x to another vector z as follows:

z = h^{(1)}(W^{(1)} x + b^{(1)})

where the superscript (1) indicates the first layer, h^{(1)} is a transfer function for the encoder, W^{(1)} is a weight matrix, and b^{(1)} is a bias vector. Then the decoder maps the encoded representation z back into an estimate of the original input vector, x, as follows:

x̂ = h^{(2)}(W^{(2)} z + b^{(2)})

where the superscript (2) represents the second layer, h^{(2)} is the transfer function for the decoder, W^{(2)} is a weight matrix, and b^{(2)} is a bias vector.
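The mapping can be reproduced by hand from a trained network; a sketch, assuming the EncoderWeights, EncoderBiases, DecoderWeights, and DecoderBiases properties of the returned object, and training with 'ScaleData',false so the weights act on the raw inputs:

X = abalone_dataset;
autoenc = trainAutoencoder(X,4,'ScaleData',false);
W1 = autoenc.EncoderWeights;  b1 = autoenc.EncoderBiases;
W2 = autoenc.DecoderWeights;  b2 = autoenc.DecoderBiases;
z    = logsig(W1*X + b1);     % encoder: z = h1(W1*x + b1), default h1 = logsig
xhat = logsig(W2*z + b2);     % decoder: xhat = h2(W2*z + b2), default h2 = logsig
max(abs(xhat - predict(autoenc,X)),[],'all')   % should be close to zero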
Sparse Autoencoders

Encouraging sparsity of an autoencoder is possible by adding a regularizer to the cost function [2]. This regularizer is a function of the average output activation value of a neuron. The average output activation measure of neuron i is defined as:

ρ̂_i = (1/n) Σ_{j=1}^{n} h(w_i^{(1)T} x_j + b_i^{(1)})

where n is the total number of training examples, x_j is the jth training example, w_i^{(1)T} is the ith row of the weight matrix W^{(1)}, and b_i^{(1)} is the ith entry of the bias vector b^{(1)}. A neuron is considered to be "firing" if its output activation value is high. A low output activation value means that the neuron in the hidden layer fires in response to only a small number of the training examples. Adding a term to the cost function that constrains the values of ρ̂_i to be low encourages the autoencoder to learn a representation where each neuron in the hidden layer fires to a small number of training examples. That is, each neuron specializes by responding to some feature that is only present in a small subset of the training examples. A sketch of this computation follows.
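A sketch computing ρ̂_i per this definition, under the same 'ScaleData',false assumption as the previous sketch so that the weights act on the raw examples:

X = abalone_dataset;                        % columns are the examples x_j
autoenc = trainAutoencoder(X,25,'ScaleData',false);
W1 = autoenc.EncoderWeights;  b1 = autoenc.EncoderBiases;
rhoHat = mean(logsig(W1*X + b1),2);         % average activation per neuron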
Sparsity Regularization

The sparsity regularizer attempts to enforce a constraint on the sparsity of the output from the hidden layer. Sparsity can be encouraged by adding a regularization term that takes a large value when the average activation value, ρ̂_i, of a neuron i and its desired value, ρ, are not close in value [2]. One such sparsity regularization term can be the Kullback-Leibler divergence:

Ω_sparsity = Σ_{i=1}^{D^{(1)}} KL(ρ ‖ ρ̂_i) = Σ_{i=1}^{D^{(1)}} [ ρ log(ρ/ρ̂_i) + (1−ρ) log((1−ρ)/(1−ρ̂_i)) ]

where D^{(1)} is the number of neurons in the hidden layer. Kullback-Leibler divergence is a function for measuring how different two distributions are. In this case, it takes the value zero when ρ and ρ̂_i are equal to each other, and becomes larger as they diverge from each other. Minimizing the cost function forces this term to be small, hence ρ and ρ̂_i to be close to each other. You can define the desired value of the average activation value, ρ, using the SparsityProportion name-value pair argument while training an autoencoder.
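A sketch evaluating this term for a trained network, with ρ taken as the SparsityProportion used for training and ρ̂_i estimated with encode:

X = abalone_dataset;
rho = 0.05;                                 % desired average activation
autoenc = trainAutoencoder(X,25,'SparsityProportion',rho);
rhoHat = mean(encode(autoenc,X),2);         % actual average activations
OmegaSparsity = sum(rho*log(rho./rhoHat) + ...
    (1-rho)*log((1-rho)./(1-rhoHat)))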
L2 Regularization

When training a sparse autoencoder, it is possible to make the sparsity regularizer small by increasing the values of the weights w^{(l)} and decreasing the values of z^{(1)} [2]. Adding a regularization term on the weights to the cost function prevents this from happening. This term is called the L2 regularization term and is defined by:

Ω_weights = (1/2) Σ_{l=1}^{L} Σ_{j=1}^{n_l} Σ_{i=1}^{k_l} ( w_{ji}^{(l)} )²

where L is the number of hidden layers, n_l is the output size of layer l, and k_l is the input size of layer l. The L2 regularization term is the sum of the squared elements of the weight matrices for each layer.
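For the single-hidden-layer network trained by this function, the sum runs over the encoder and decoder weight matrices; a sketch:

autoenc = trainAutoencoder(abalone_dataset,4);
W1 = autoenc.EncoderWeights;
W2 = autoenc.DecoderWeights;
OmegaWeights = 0.5*(sum(W1(:).^2) + sum(W2(:).^2))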
Cost Function

The cost function for training a sparse autoencoder is an adjusted mean squared error function as follows:

E = (1/N) Σ_{n=1}^{N} Σ_{k=1}^{K} ( x_{kn} − x̂_{kn} )² + λ·Ω_weights + β·Ω_sparsity

where λ is the coefficient for the L2 regularization term and β is the coefficient for the sparsity regularization term. You can specify the values of λ and β by using the L2WeightRegularization and SparsityRegularization name-value pair arguments, respectively, while training an autoencoder.
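Putting the three terms together, a sketch assembling E for a trained network; it assumes 'ScaleData',false so the mean squared error is measured on the raw data, and uses the default coefficient values:

X = abalone_dataset;
lambda = 0.001; beta = 1; rho = 0.05;       % default coefficient values
autoenc = trainAutoencoder(X,25,'ScaleData',false);
Xhat = predict(autoenc,X);
N = size(X,2);
mseTerm = sum((X - Xhat).^2,'all')/N;       % (1/N) sum of squared errors
rhoHat = mean(encode(autoenc,X),2);
OmegaSparsity = sum(rho*log(rho./rhoHat) + ...
    (1-rho)*log((1-rho)./(1-rhoHat)));
W1 = autoenc.EncoderWeights; W2 = autoenc.DecoderWeights;
OmegaWeights = 0.5*(sum(W1(:).^2) + sum(W2(:).^2));
E = mseTerm + lambda*OmegaWeights + beta*OmegaSparsity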
References

[1] Moller, M. F. "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning." Neural Networks, Vol. 6, Issue 4, 1993, pp. 525–533.

[2] Olshausen, B. A., and D. J. Field. "Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1." Vision Research, Vol. 37, 1997, pp. 3311–3325.
Version History

See Also