
freezeParameters

Convert learnable network parameters in ONNXParameters to nonlearnable

    Description


params = freezeParameters(params,names) freezes the network parameters specified by names in the ONNXParameters object params. The function moves the specified parameters from params.Learnables in the input argument params to params.Nonlearnables in the output argument params.
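For example, a minimal sketch of freezing a single parameter and confirming the move. The parameter name "conv1_W" is hypothetical; use fieldnames(params.Learnables) to list the names in your own model.

fieldnames(params.Learnables)                % List the learnable parameter names.
params = freezeParameters(params,"conv1_W"); % Freeze one (hypothetical) parameter.
fieldnames(params.Nonlearnables)             % "conv1_W" now appears here.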

    Examples


Import the squeezenet convolutional neural network as a function and fine-tune the pretrained network with transfer learning to perform classification on a new collection of images.

This example uses several helper functions. To view the code for these functions, see Helper Functions.

Unzip and load the new images as an image datastore. imageDatastore automatically labels the images based on folder names and stores the data as an ImageDatastore object. An image datastore enables you to store large image data, including data that does not fit in memory, and efficiently read batches of images during training of a convolutional neural network. Specify the mini-batch size.

unzip('MerchData.zip');
miniBatchSize = 8;
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames', ...
    'ReadSize', miniBatchSize);

    This data set is small, containing 75 training images. Display some sample images.

numImages = numel(imds.Labels);
idx = randperm(numImages,16);
figure
for i = 1:16
    subplot(4,4,i)
    I = readimage(imds,idx(i));
    imshow(I)
end

    Extract the training set and one-hot encode the categorical classification labels.

XTrain = readall(imds);
XTrain = single(cat(4,XTrain{:}));
YTrain_categ = categorical(imds.Labels);
YTrain = onehotencode(YTrain_categ,2)';

    Determine the number of classes in the data.

classes = categories(YTrain_categ);
numClasses = numel(classes)
    numClasses = 5

squeezenet is a convolutional neural network that is trained on more than a million images from the ImageNet database. As a result, the network has learned rich feature representations for a wide range of images. The network can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals.

Import the pretrained squeezenet network as a function.

squeezenetONNX()
params = importONNXFunction('squeezenet.onnx','squeezenetFcn')

A function containing the imported ONNX network has been saved to the file squeezenetFcn.m.
To learn how to use this function, type: help squeezenetFcn.

params = 
  ONNXParameters with properties:

             Learnables: [1×1 struct]
          Nonlearnables: [1×1 struct]
                  State: [1×1 struct]
          NumDimensions: [1×1 struct]
    NetworkFunctionName: 'squeezenetFcn'

params is an ONNXParameters object that contains the network parameters. squeezenetFcn is a model function that contains the network architecture. importONNXFunction saves squeezenetFcn in the current folder.
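For example, assuming X is a batch of images sized for the network input (for squeezenet, 227-by-227-by-3-by-N single data), a sketch of calling the model function directly for prediction is:

scores = squeezenetFcn(X,params,'Training',false);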

    Calculate the classification accuracy of the pretrained network on the new training set.

accuracyBeforeTraining = getNetworkAccuracy(XTrain,YTrain,params);
fprintf('%.2f accuracy before transfer learning\n',accuracyBeforeTraining);
    0.01 accuracy before transfer learning

    The accuracy is very low.

Display the learnable parameters of the network by typing params.Learnables. These parameters, such as the weights (W) and bias (B) of convolution and fully connected layers, are updated during training. Nonlearnable parameters remain constant during training.
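For example, enter the following at the command line to display the structure of learnable parameters:

params.Learnables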

    The last two learnable parameters of the pretrained network are configured for 1000 classes.

    conv10_W: [1×1×512×1000 dlarray]

    conv10_B: [1000×1 dlarray]

The parameters conv10_W and conv10_B must be fine-tuned for the new classification problem. Transfer the parameters to classify five classes by initializing the parameters.

params.Learnables.conv10_W = rand(1,1,512,5);
params.Learnables.conv10_B = rand(5,1);

    Freeze all the parameters of the network to convert them to nonlearnable parameters. Because you do not need to compute the gradients of the frozen layers, freezing the weights of many initial layers can significantly speed up network training.

params = freezeParameters(params,'all');

    Unfreeze the last two parameters of the network to convert them to learnable parameters.

params = unfreezeParameters(params,'conv10_W');
params = unfreezeParameters(params,'conv10_B');

    Now the network is ready for training. Initialize the training progress plot.

plots = "training-progress";
if plots == "training-progress"
    figure
    lineLossTrain = animatedline;
    xlabel("Iteration")
    ylabel("Loss")
end

    Specify the training options.

velocity = [];
numEpochs = 5;
miniBatchSize = 16;
numObservations = size(YTrain,2);
numIterationsPerEpoch = floor(numObservations./miniBatchSize);
initialLearnRate = 0.01;
momentum = 0.9;
decay = 0.01;

    Train the network.

iteration = 0;
start = tic;

executionEnvironment = "cpu"; % Change to "gpu" to train on a GPU.

% Loop over epochs.
for epoch = 1:numEpochs

    % Shuffle data.
    idx = randperm(numObservations);
    XTrain = XTrain(:,:,:,idx);
    YTrain = YTrain(:,idx);

    % Loop over mini-batches.
    for i = 1:numIterationsPerEpoch
        iteration = iteration + 1;

        % Read mini-batch of data.
        idx = (i-1)*miniBatchSize+1:i*miniBatchSize;
        X = XTrain(:,:,:,idx);
        Y = YTrain(:,idx);

        % If training on a GPU, then convert data to gpuArray.
        if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
            X = gpuArray(X);
        end

        % Evaluate the model gradients and loss using dlfeval and the
        % modelGradients function.
        [gradients,loss,state] = dlfeval(@modelGradients,X,Y,params);
        params.State = state;

        % Determine the learning rate for the time-based decay learning rate schedule.
        learnRate = initialLearnRate/(1 + decay*iteration);

        % Update the network parameters using the SGDM optimizer.
        [params.Learnables,velocity] = sgdmupdate(params.Learnables,gradients,velocity,learnRate,momentum);

        % Display the training progress.
        if plots == "training-progress"
            D = duration(0,0,toc(start),'Format','hh:mm:ss');
            addpoints(lineLossTrain,iteration,double(gather(extractdata(loss))))
            title("Epoch: " + epoch + ", Elapsed: " + string(D))
            drawnow
        end

    end
end

    Calculate the classification accuracy of the network after fine-tuning.

accuracyAfterTraining = getNetworkAccuracy(XTrain,YTrain,params);
fprintf('%.2f accuracy after transfer learning\n',accuracyAfterTraining);
    1.00 accuracy after transfer learning

    Helper Functions

    This section provides the code of the helper functions used in this example.

The getNetworkAccuracy function evaluates the network performance by calculating the classification accuracy.

function accuracy = getNetworkAccuracy(X,Y,onnxParams)

% Predict class scores with the imported model function (inference mode).
N = size(X,4);
Ypred = squeezenetFcn(X,onnxParams,'Training',false);

% Compare the predicted class indices with the true class indices.
[~,YIdx] = max(Y,[],1);
[~,YpredIdx] = max(Ypred,[],1);
numIncorrect = sum(abs(YIdx-YpredIdx) > 0);
accuracy = 1 - numIncorrect/N;

end

The modelGradients function calculates the loss and gradients.

function [grad, loss, state] = modelGradients(X,Y,onnxParams)

% Forward pass in training mode, then compute the cross-entropy loss and
% the gradients with respect to the learnable parameters.
[y,state] = squeezenetFcn(X,onnxParams,'Training',true);
loss = crossentropy(y,Y,'DataFormat','CB');
grad = dlgradient(loss,onnxParams.Learnables);

end

The squeezenetONNX function generates an ONNX model of the squeezenet network.

function squeezenetONNX()

% Export the pretrained squeezenet network to an ONNX model file.
exportONNXNetwork(squeezenet,'squeezenet.onnx');

end

    Input Arguments


Network parameters, specified as an ONNXParameters object. params contains the network parameters of the imported ONNX™ model.

Names of the parameters to freeze, specified as 'all' or a string array. Freeze all learnable parameters by setting names to 'all'. Freeze k learnable parameters by defining the parameter names in the 1-by-k string array names, as in the sketch below.

Example: 'all'

Example: ["gpu_0_sl_pred_b_0", "gpu_0_sl_pred_w_0"]

Data Types: char | string
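A minimal sketch of freezing two parameters by name. The names conv1_W and conv1_B are hypothetical; list the actual names in your model with fieldnames(params.Learnables).

% Hypothetical parameter names; replace with names from your own model.
params = freezeParameters(params,["conv1_W","conv1_B"]);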

    Output Arguments


Network parameters, returned as an ONNXParameters object. params contains the network parameters updated by freezeParameters.

    Version History

    Introduced in R2020b