
SeriesNetwork

Series network for deep learning

Description

A series network is a neural network for deep learning with layers arranged one after the other. It has a single input layer and a single output layer.

Creation

There are several ways to create a SeriesNetwork object:

- Train a network with a series architecture using trainNetwork.
- Load a pretrained network, for example using alexnet.
- Import a pretrained network from an external framework, for example using importCaffeNetwork.
- Assemble a network from a layer array with pretrained weights using assembleNetwork.

Note

To learn about other pretrained networks, such as googlenet and resnet50, see Pretrained Deep Neural Networks.

Properties


Network layers, specified as a Layer array.

Network input layer names, specified as a cell array of character vectors.

Data Types: cell

Network output layer names, specified as a cell array of character vectors.

Data Types: cell

Object Functions

activations Compute deep learning network layer activations
classify Classify data using a trained deep learning neural network
predict Predict responses using a trained deep learning neural network
predictAndUpdateState Predict responses using a trained recurrent neural network and update the network state
classifyAndUpdateState Classify data using a trained recurrent neural network and update the network state
resetState Reset the state of a recurrent neural network
plot Plot neural network layer graph
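As a minimal sketch of how some of these functions fit together (assumptions: the AlexNet support package is installed, and peppers.png stands in for any RGB image; the layer name 'fc7' is taken from the AlexNet example below):

```matlab
% Load a pretrained series network (requires the AlexNet support package).
net = alexnet;

% Read a sample image and resize it to the network's input size.
im = imread('peppers.png');
im = imresize(im, net.Layers(1).InputSize(1:2));

% Classify the image and return the per-class scores.
[label, scores] = classify(net, im);

% Extract feature activations from an intermediate layer, e.g. 'fc7'.
features = activations(net, im, 'fc7');
```

predictAndUpdateState, classifyAndUpdateState, and resetState apply only to recurrent networks (for example, networks containing an lstmLayer) and are not used with convolutional networks such as AlexNet.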

Examples


Load a pretrained AlexNet convolutional neural network and examine the layers and classes.

Load the pretrained AlexNet network using alexnet. The output net is a SeriesNetwork object.

net = alexnet
net = 
  SeriesNetwork with properties:

    Layers: [25×1 nnet.cnn.layer.Layer]

Using the Layers property, view the network architecture. The network consists of 25 layers. There are 8 layers with learnable weights: 5 convolutional layers and 3 fully connected layers.

net.Layers
ans = 
  25x1 Layer array with layers:

     1   'data'     Image Input                   227x227x3 images with 'zerocenter' normalization
     2   'conv1'    Convolution                   96 11x11x3 convolutions with stride [4 4] and padding [0 0 0 0]
     3   'relu1'    ReLU                          ReLU
     4   'norm1'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     5   'pool1'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0 0 0]
     6   'conv2'    Grouped Convolution           2 groups of 128 5x5x48 convolutions with stride [1 1] and padding [2 2 2 2]
     7   'relu2'    ReLU                          ReLU
     8   'norm2'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     9   'pool2'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0 0 0]
    10   'conv3'    Convolution                   384 3x3x256 convolutions with stride [1 1] and padding [1 1 1 1]
    11   'relu3'    ReLU                          ReLU
    12   'conv4'    Grouped Convolution           2 groups of 192 3x3x192 convolutions with stride [1 1] and padding [1 1 1 1]
    13   'relu4'    ReLU                          ReLU
    14   'conv5'    Grouped Convolution           2 groups of 128 3x3x192 convolutions with stride [1 1] and padding [1 1 1 1]
    15   'relu5'    ReLU                          ReLU
    16   'pool5'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0 0 0]
    17   'fc6'      Fully Connected               4096 fully connected layer
    18   'relu6'    ReLU                          ReLU
    19   'drop6'    Dropout                       50% dropout
    20   'fc7'      Fully Connected               4096 fully connected layer
    21   'relu7'    ReLU                          ReLU
    22   'drop7'    Dropout                       50% dropout
    23   'fc8'      Fully Connected               1000 fully connected layer
    24   'prob'     Softmax                       softmax
    25   'output'   Classification Output         crossentropyex with 'tench' and 999 other classes

You can view the names of the classes learned by the network by viewing the Classes property of the classification output layer (the final layer). View the first 10 classes by selecting the first 10 elements.

net.Layers(end).Classes(1:10)
ans = 10×1 categorical
     tench 
     goldfish 
     great white shark 
     tiger shark 
     hammerhead 
     electric ray 
     stingray 
     cock 
     hen 
     ostrich 

Import layers from a Caffe network. Specify the example file 'digitsnet.prototxt' to import.

protofile = 'digitsnet.prototxt';

Import the network layers.

layers = importCaffeLayers(protofile)
layers = 
  1x7 Layer array with layers:

     1   'testdata'   Image Input             28x28x1 images
     2   'conv1'      Convolution             20 5x5x1 convolutions with stride [1 1] and padding [0 0]
     3   'relu1'      ReLU                    ReLU
     4   'pool1'      Max Pooling             2x2 max pooling with stride [2 2] and padding [0 0]
     5   'ip1'        Fully Connected         10 fully connected layer
     6   'loss'       Softmax                 softmax
     7   'output'     Classification Output   crossentropyex with 'class1', 'class2', and 8 other classes

Load the data as an ImageDatastore object.

digitDatasetPath = fullfile(matlabroot,'toolbox','nnet', ...
    'nndemos','nndatasets','DigitDataset');
imds = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');

The datastore contains 10,000 synthetic images of digits from 0 to 9. The images are generated by applying random transformations to digit images created with different fonts. Each digit image is 28-by-28 pixels. The datastore contains an equal number of images per category.

Display some of the images in the datastore.

figure
numImages = 10000;
perm = randperm(numImages,20);
for i = 1:20
    subplot(4,5,i);
    imshow(imds.Files{perm(i)});
    drawnow;
end

Divide the datastore so that each category in the training set has 750 images and the testing set has the remaining images from each label.

numTrainingFiles = 750;
[imdsTrain,imdsTest] = splitEachLabel(imds,numTrainingFiles,'randomize');

splitEachLabel splits the image files in imds into two new datastores, imdsTrain and imdsTest.

Define the convolutional neural network architecture.

layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(5,20)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

Set the options to the default settings for stochastic gradient descent with momentum. Set the maximum number of epochs to 20, and start the training with an initial learning rate of 0.0001.

options = trainingOptions('sgdm', ...
    'MaxEpochs',20, ...
    'InitialLearnRate',1e-4, ...
    'Verbose',false, ...
    'Plots','training-progress');

Train the network.

net = trainNetwork(imdsTrain,layers,options);

Run the trained network on the test set, which was not used to train the network, and predict the image labels (digits).

YPred = classify(net,imdsTest);
YTest = imdsTest.Labels;

Calculate the accuracy. The accuracy is the fraction of test images for which the label predicted by classify matches the true label.

accuracy = sum(YPred == YTest)/numel(YTest)
accuracy = 0.9420


Introduced in R2016a