Supported Networks, Layers, and Classes
Supported Pretrained Networks
GPU Coder™ supports code generation for series and directed acyclic graph (DAG) convolutional neural networks (CNNs or ConvNets). You can generate code for any trained convolutional neural network whose layers are supported for code generation. See Supported Layers. You can train a convolutional neural network on a CPU, a GPU, or multiple GPUs by using Deep Learning Toolbox™, or use one of the pretrained networks listed in the table and generate CUDA® code.
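The workflow above can be sketched end to end. The following is a minimal sketch, not a definitive recipe: the entry-point file name `resnet_predict.m` and the choice of ResNet-50 are illustrative assumptions, and running it requires the GPU Coder Interface for Deep Learning Libraries support package.

```matlab
% resnet_predict.m -- hypothetical entry-point function.
% coder.loadDeepLearningNetwork loads the pretrained network into a
% persistent object so the generated code constructs it only once.
function out = resnet_predict(in) %#codegen
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('resnet50');
end
out = net.predict(in);
end

% At the MATLAB command line, generate a CUDA MEX function that uses
% the cuDNN library (pass 'tensorrt' instead to target TensorRT):
%   cfg = coder.gpuConfig('mex');
%   cfg.TargetLang = 'C++';
%   cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
%   codegen -config cfg resnet_predict -args {ones(224,224,3,'single')}
```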
Network Name | Description | cuDNN | TensorRT | ARM® Compute Library for Mali GPU |
---|---|---|---|---|
AlexNet | AlexNet convolutional neural network. For the pretrained AlexNet model, see `alexnet`. The syntax `alexnet('Weights','none')` is not supported. | Yes | Yes | Yes |
Caffe Network | Convolutional neural network models from Caffe. For information on importing a pretrained network from Caffe, see `importCaffeNetwork`. | Yes | Yes | Yes |
Darknet-19 | Darknet-19 convolutional neural network. For more information, see `darknet19`. The syntax `darknet19('Weights','none')` is not supported. | Yes | Yes | Yes |
Darknet-53 | Darknet-53 convolutional neural network. For more information, see `darknet53`. The syntax `darknet53('Weights','none')` is not supported. | Yes | Yes | Yes |
DeepLab v3+ | DeepLab v3+ convolutional neural network. For more information, see `deeplabv3plusLayers`. | Yes | Yes | No |
DenseNet-201 | DenseNet-201 convolutional neural network. For the pretrained DenseNet-201 model, see `densenet201`. The syntax `densenet201('Weights','none')` is not supported. | Yes | Yes | Yes |
EfficientNet-b0 | EfficientNet-b0 convolutional neural network. For the pretrained EfficientNet-b0 model, see `efficientnetb0`. The syntax `efficientnetb0('Weights','none')` is not supported. | Yes | Yes | Yes |
GoogLeNet | GoogLeNet convolutional neural network. For the pretrained GoogLeNet model, see `googlenet`. The syntax `googlenet('Weights','none')` is not supported. | Yes | Yes | Yes |
Inception-ResNet-v2 | Inception-ResNet-v2 convolutional neural network. For the pretrained Inception-ResNet-v2 model, see `inceptionresnetv2`. | Yes | Yes | No |
Inception-v3 | Inception-v3 convolutional neural network. For the pretrained Inception-v3 model, see `inceptionv3`. The syntax `inceptionv3('Weights','none')` is not supported. | Yes | Yes | Yes |
MobileNet-v2 | MobileNet-v2 convolutional neural network. For the pretrained MobileNet-v2 model, see `mobilenetv2`. The syntax `mobilenetv2('Weights','none')` is not supported. | Yes | Yes | Yes |
NASNet-Large | NASNet-Large convolutional neural network. For the pretrained NASNet-Large model, see `nasnetlarge`. | Yes | Yes | No |
NASNet-Mobile | NASNet-Mobile convolutional neural network. For the pretrained NASNet-Mobile model, see `nasnetmobile`. | Yes | Yes | No |
ResNet | ResNet-18, ResNet-50, and ResNet-101 convolutional neural networks. For the pretrained ResNet models, see `resnet18`, `resnet50`, and `resnet101`. The `'Weights','none'` syntaxes are not supported. | Yes | Yes | Yes |
SegNet | Multi-class pixelwise segmentation network. For more information, see `segnetLayers`. | Yes | Yes | No |
SqueezeNet | Small deep neural network. For the pretrained SqueezeNet models, see `squeezenet`. The syntax `squeezenet('Weights','none')` is not supported. | Yes | Yes | Yes |
VGG-16 | VGG-16 convolutional neural network. For the pretrained VGG-16 model, see `vgg16`. The syntax `vgg16('Weights','none')` is not supported. | Yes | Yes | Yes |
VGG-19 | VGG-19 convolutional neural network. For the pretrained VGG-19 model, see `vgg19`. The syntax `vgg19('Weights','none')` is not supported. | Yes | Yes | Yes |
Xception | Xception convolutional neural network. For the pretrained Xception model, see `xception`. The syntax `xception('Weights','none')` is not supported. | Yes | Yes | Yes |
YOLO v2 | You only look once version 2 convolutional neural network based object detector. For more information, see `yolov2ObjectDetector`. | Yes | Yes | Yes |
Supported Layers
GPU Coder supports code generation for the following layers for the target deep learning libraries specified in the table.
Input Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`imageInputLayer` | An image input layer inputs 2-D images to a network and applies data normalization. Code generation does not support `'Normalization'` specified using a function handle. | Yes | Yes | Yes |
`sequenceInputLayer` | A sequence input layer inputs sequence data to a network. The cuDNN library supports vector and 2-D image sequences. The TensorRT library supports only vector input sequences. For vector sequence inputs, the number of features must be a constant during code generation. For image sequence inputs, the height, width, and the number of channels must be a constant during code generation. Code generation does not support `'Normalization'` specified using a function handle. | Yes | Yes | No |
`featureInputLayer` | A feature input layer inputs feature data to a network and applies data normalization. | Yes | Yes | Yes |
Convolution and Fully Connected Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`convolution2dLayer` | A 2-D convolutional layer applies sliding convolutional filters to the input. | Yes | Yes | Yes |
`fullyConnectedLayer` | A fully connected layer multiplies the input by a weight matrix and then adds a bias vector. | Yes | Yes | No |
`groupedConvolution2dLayer` | A 2-D grouped convolutional layer separates the input channels into groups and applies sliding convolutional filters. Use grouped convolutional layers for channel-wise separable (also known as depth-wise separable) convolution. Code generation for the ARM Mali GPU is not supported for a 2-D grouped convolution layer that has the `NumGroups` property set as `'channel-wise'` or a value greater than two. | Yes | Yes | Yes |
`transposedConv2dLayer` | A transposed 2-D convolution layer upsamples feature maps. | Yes | Yes | Yes |
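To make the tables concrete, the small layer array below combines several of the layers listed in this and the following sections. It is an illustrative sketch only: the 28-by-28 grayscale input and the ten output classes are arbitrary assumptions, and note that `fullyConnectedLayer` is not supported for the ARM Compute Library target.

```matlab
% A minimal image classification layer array built only from layers
% that this page lists as supported for cuDNN and TensorRT targets.
layers = [
    imageInputLayer([28 28 1])            % no function-handle normalization
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(10)               % not supported for ARM Mali targets
    softmaxLayer
    classificationLayer];
```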
Sequence Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`bilstmLayer` | A bidirectional LSTM (BiLSTM) layer learns bidirectional long-term dependencies between time steps of time series or sequence data. These dependencies can be useful when you want the network to learn from the complete time series at each time step. For code generation, the `StateActivationFunction` property must be set to `'tanh'`. For code generation, the `GateActivationFunction` property must be set to `'sigmoid'`. | Yes | Yes | No |
`flattenLayer` | A flatten layer collapses the spatial dimensions of the input into the channel dimension. | Yes | No | No |
`gruLayer` | A GRU layer learns dependencies between time steps in time series and sequence data. Code generation supports only the `'after-multiplication'` and `'recurrent-bias-after-multiplication'` reset gate modes. | Yes | Yes | No |
`lstmLayer` | An LSTM layer learns long-term dependencies between time steps in time series and sequence data. For code generation, the `StateActivationFunction` property must be set to `'tanh'`. For code generation, the `GateActivationFunction` property must be set to `'sigmoid'`. | Yes | Yes | No |
`sequenceFoldingLayer` | A sequence folding layer converts a batch of image sequences to a batch of images. Use a sequence folding layer to perform convolution operations on time steps of image sequences independently. | Yes | No | No |
`sequenceInputLayer` | A sequence input layer inputs sequence data to a network. The cuDNN library supports vector and 2-D image sequences. The TensorRT library supports only vector input sequences. For vector sequence inputs, the number of features must be a constant during code generation. For image sequence inputs, the height, width, and the number of channels must be a constant during code generation. Code generation does not support `'Normalization'` specified using a function handle. | Yes | Yes | No |
`sequenceUnfoldingLayer` | A sequence unfolding layer restores the sequence structure of the input data after sequence folding. | Yes | No | No |
`wordEmbeddingLayer` | A word embedding layer maps word indices to vectors. | Yes | Yes | No |
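The vector-sequence constraint above (a fixed feature count, but possibly variable sequence length) is expressed through the input type passed to `codegen`. The sketch below is an assumption-laden example: `lstmNet.mat` is a hypothetical file containing a trained sequence network, and the 12-feature input is arbitrary.

```matlab
% lstm_predict.m -- hypothetical entry-point for a saved LSTM network.
function out = lstm_predict(in) %#codegen
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('lstmNet.mat');
end
out = net.predict(in);
end

% Generate a CUDA MEX function. The input type fixes the number of
% features (12) at compile time while leaving the number of time
% steps variable (Inf with a variable-size second dimension):
%   cfg = coder.gpuConfig('mex');
%   cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
%   matrixInput = coder.typeof(single(0), [12 Inf], [false true]);
%   codegen -config cfg lstm_predict -args {matrixInput}
```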
Activation Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`clippedReluLayer` | A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling. | Yes | Yes | Yes |
`eluLayer` | An ELU activation layer performs the identity operation on positive inputs and an exponential nonlinearity on negative inputs. | Yes | Yes | No |
`leakyReluLayer` | A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar. | Yes | Yes | Yes |
`reluLayer` | A ReLU layer performs a threshold operation to each element of the input, where any value less than zero is set to zero. | Yes | Yes | Yes |
`softplusLayer` | A softplus layer applies the softplus activation function, which ensures that the output is always positive. | Yes | Yes | No |
`swishLayer` | A swish activation layer applies the swish function on the layer inputs. | Yes | Yes | No |
`tanhLayer` | A hyperbolic tangent (tanh) activation layer applies the tanh function on the layer inputs. | Yes | Yes | Yes |
Normalization, Dropout, and Cropping Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`batchNormalizationLayer` | A batch normalization layer normalizes each input channel across a mini-batch. | Yes | Yes | Yes |
`crop2dLayer` | A 2-D crop layer applies 2-D cropping to the input. | Yes | Yes | Yes |
`crossChannelNormalizationLayer` | A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization. | Yes | Yes | Yes |
`dropoutLayer` | A dropout layer randomly sets input elements to zero with a given probability. | Yes | Yes | Yes |
`groupNormalizationLayer` | A group normalization layer normalizes a mini-batch of data across grouped subsets of channels for each observation independently. | Yes | Yes | No |
`scalingLayer` | A scaling layer for actor or critic networks. For code generation, values for the `Scale` and `Bias` properties must have the same dimension. | Yes | Yes | Yes |
Pooling and Unpooling Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`averagePooling2dLayer` | An average pooling layer performs down-sampling by dividing the input into rectangular pooling regions and computing the average values of each region. | Yes | Yes | Yes |
`globalAveragePooling2dLayer` | A global average pooling layer performs down-sampling by computing the mean of the height and width dimensions of the input. | Yes | Yes | Yes |
`globalMaxPooling2dLayer` | A global max pooling layer performs down-sampling by computing the maximum of the height and width dimensions of the input. | Yes | Yes | Yes |
`maxPooling2dLayer` | A max pooling layer performs down-sampling by dividing the input into rectangular pooling regions and computing the maximum of each region. If equal max values exist along the off-diagonal in a kernel window, implementation differences for the `maxPooling2dLayer` might cause minor numerical mismatch between MATLAB and the generated code. | Yes | Yes | Yes |
`maxUnpooling2dLayer` | A max unpooling layer unpools the output of a max pooling layer. If equal max values exist along the off-diagonal in a kernel window, implementation differences for the `maxUnpooling2dLayer` might cause minor numerical mismatch between MATLAB and the generated code. | Yes | Yes | No |
Combination Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`additionLayer` | An addition layer adds inputs from multiple neural network layers element-wise. | Yes | Yes | Yes |
`concatenationLayer` | A concatenation layer takes inputs and concatenates them along a specified dimension. | Yes | Yes | No |
`depthConcatenationLayer` | A depth concatenation layer takes inputs that have the same height and width and concatenates them along the third dimension (the channel dimension). | Yes | Yes | Yes |
Object Detection Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`anchorBoxLayer` | An anchor box layer stores anchor boxes for a feature map used in object detection networks. | Yes | Yes | Yes |
`depthToSpace2dLayer` | A 2-D depth to space layer permutes data from the depth dimension into blocks of 2-D spatial data. | Yes | Yes | Yes |
`focalLossLayer` | A focal loss layer predicts object classes using focal loss. | Yes | Yes | Yes |
`spaceToDepthLayer` | A space to depth layer permutes the spatial blocks of the input into the depth dimension. Use this layer when you need to combine feature maps of different size without discarding any feature data. | Yes | Yes | Yes |
`ssdMergeLayer` | An SSD merge layer merges the outputs of feature maps for subsequent regression and classification loss computation. | Yes | Yes | No |
`rcnnBoxRegressionLayer` | A box regression layer refines bounding box locations by using a smooth L1 loss function. Use this layer to create a Fast or Faster R-CNN object detection network. | Yes | Yes | Yes |
`rpnClassificationLayer` | A region proposal network (RPN) classification layer classifies image regions as either object or background by using a cross-entropy loss function. Use this layer to create a Faster R-CNN object detection network. | Yes | Yes | Yes |
`yolov2OutputLayer` | Create output layer for YOLO v2 object detection network. | Yes | Yes | Yes |
`yolov2ReorgLayer` | Create reorganization layer for YOLO v2 object detection network. | Yes | Yes | Yes |
`yolov2TransformLayer` | Create transform layer for YOLO v2 object detection network. | Yes | Yes | Yes |
Output Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`classificationLayer` | A classification layer computes the cross entropy loss for multi-class classification problems with mutually exclusive classes. | Yes | Yes | Yes |
`dicePixelClassificationLayer` | A Dice pixel classification layer provides a categorical label for each image pixel or voxel using generalized Dice loss. | Yes | Yes | Yes |
`focalLossLayer` | A focal loss layer predicts object classes using focal loss. | Yes | Yes | Yes |
Custom output layers | All output layers, including custom classification or regression output layers created by using `nnet.layer.ClassificationLayer` or `nnet.layer.RegressionLayer`. For an example showing how to define a custom classification output layer and specify a loss function, see Define Custom Classification Output Layer (Deep Learning Toolbox). For an example showing how to define a custom regression output layer and specify a loss function, see Define Custom Regression Output Layer (Deep Learning Toolbox). | Yes | Yes | Yes |
`pixelClassificationLayer` | A pixel classification layer provides a categorical label for each image pixel or voxel. | Yes | Yes | Yes |
`rcnnBoxRegressionLayer` | A box regression layer refines bounding box locations by using a smooth L1 loss function. Use this layer to create a Fast or Faster R-CNN object detection network. | Yes | Yes | Yes |
`regressionLayer` | A regression layer computes the half-mean-squared-error loss for regression problems. | Yes | Yes | Yes |
`rpnClassificationLayer` | A region proposal network (RPN) classification layer classifies image regions as either object or background by using a cross-entropy loss function. Use this layer to create a Faster R-CNN object detection network. | Yes | Yes | Yes |
`sigmoidLayer` | A sigmoid layer applies a sigmoid function to the input. | Yes | Yes | Yes |
`softmaxLayer` | A softmax layer applies a softmax function to the input. | Yes | Yes | Yes |
Custom Keras Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`nnet.keras.layer.ClipLayer` | Clips the input between the upper and lower bounds. | Yes | Yes | No |
`nnet.keras.layer.FlattenCStyleLayer` | Flattens activations into 1-D, assuming C-style (row-major) order. | Yes | Yes | Yes |
`nnet.keras.layer.GlobalAveragePooling2dLayer` | Global average pooling layer for spatial data. | Yes | Yes | Yes |
`nnet.keras.layer.PreluLayer` | Parametric rectified linear unit. | Yes | Yes | No |
`nnet.keras.layer.SigmoidLayer` | Sigmoid activation layer. | Yes | Yes | Yes |
`nnet.keras.layer.TanhLayer` | Hyperbolic tangent activation layer. | Yes | Yes | Yes |
`nnet.keras.layer.TimeDistributedFlattenCStyleLayer` | Flattens a sequence of input images into a sequence of vectors, assuming C-style (row-major) storage ordering of the input layer. | Yes | Yes | No |
`nnet.keras.layer.ZeroPadding2dLayer` | Zero padding layer for 2-D input. | Yes | Yes | Yes |
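These classes typically appear when you import a trained Keras model rather than when you build a network by hand. A hedged sketch of that workflow follows; the file name `model.h5` is an assumption, and importing requires the Deep Learning Toolbox Converter for TensorFlow Models support package.

```matlab
% Import a Keras model; Keras layers without a direct Deep Learning
% Toolbox equivalent are mapped to the nnet.keras.layer.* classes
% listed in the table above.
net = importKerasNetwork('model.h5');
analyzeNetwork(net)   % inspect how each Keras layer was mapped
```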
Custom ONNX Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`nnet.onnx.layer.ClipLayer` | Clips the input between the upper and lower bounds. | Yes | Yes | No |
`nnet.onnx.layer.ElementwiseAffineLayer` | Layer that performs element-wise scaling of the input followed by an addition. | Yes | Yes | Yes |
`nnet.onnx.layer.FlattenInto2dLayer` | Flattens a MATLAB 2D image batch in the way ONNX does, producing a 2D output array with `CB` format. | Yes | Yes | No |
`nnet.onnx.layer.FlattenLayer` | Flattens the spatial dimensions of the input tensor to the channel dimensions. | Yes | Yes | Yes |
`nnet.onnx.layer.GlobalAveragePooling2dLayer` | Global average pooling layer for spatial data. | Yes | Yes | Yes |
`nnet.onnx.layer.IdentityLayer` | Layer that implements the ONNX identity operator. | Yes | Yes | Yes |
`nnet.onnx.layer.PreluLayer` | Parametric rectified linear unit. | Yes | Yes | No |
`nnet.onnx.layer.SigmoidLayer` | Sigmoid activation layer. | Yes | Yes | Yes |
`nnet.onnx.layer.TanhLayer` | Hyperbolic tangent activation layer. | Yes | Yes | Yes |
`nnet.onnx.layer.VerifyBatchSizeLayer` | Verify fixed batch size. | Yes | Yes | Yes |
Custom Layers
Layer Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
Custom layers | Custom layers, with or without learnable parameters, that you define for your problem. To learn how to define custom deep learning layers, see Define Custom Deep Learning Layers (Deep Learning Toolbox) and Define Custom Deep Learning Layer for Code Generation (Deep Learning Toolbox). For an example of how to generate code for a network with custom layers, see Code Generation For Object Detection Using YOLO v3 Deep Learning. The outputs of the custom layer must be fixed-size arrays. cuDNN targets support both row-major and column-major code generation for custom layers. TensorRT targets support only column-major code generation. For code generation, custom layers must contain the `%#codegen` pragma. Code generation for a sequence network containing a custom layer and an LSTM or GRU layer is not supported. You can pass `dlarray` to custom layers. | Yes | Yes | No |

For unsupported `dlarray` methods, extract the underlying data from the `dlarray`, perform the computations, and reconstruct the data back into the `dlarray`, for example:

```matlab
function Z = predict(layer, X)
    if coder.target('MATLAB')
        Z = doPredict(X);
    else
        if isdlarray(X)
            X1 = extractdata(X);
            Z1 = doPredict(X1);
            Z = dlarray(Z1);
        else
            Z = doPredict(X);
        end
    end
end
```
Supported Classes
GPU Coder supports code generation for the following classes for the target deep learning libraries specified in the table.
Name | Description | cuDNN | TensorRT | ARM Compute Library for Mali GPU |
---|---|---|---|---|
`DAGNetwork` (Deep Learning Toolbox) | Directed acyclic graph (DAG) network for deep learning | Yes | Yes | Yes |
`dlnetwork` (Deep Learning Toolbox) | Deep learning network for custom training loops | Yes | Yes | No |
`pointPillarsObjectDetector` (Lidar Toolbox) | PointPillars network to detect objects in lidar point clouds | Yes | Yes | No |
`SeriesNetwork` (Deep Learning Toolbox) | Series network for deep learning | Yes | Yes | Yes |
`ssdObjectDetector` (Computer Vision Toolbox) | Detect objects using an SSD-based detector | Yes | Yes | No |
`yolov2ObjectDetector` (Computer Vision Toolbox) | Detect objects using YOLO v2 object detector | Yes | Yes | Yes |
`yolov3ObjectDetector` (Computer Vision Toolbox) | Detect objects using YOLO v3 object detector | Yes | Yes | No |
`yolov4ObjectDetector` (Computer Vision Toolbox) | Detect objects using YOLO v4 object detector | Yes | Yes | No |
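A detector object from the table is used in an entry-point function in the same way as a network object. The sketch below is illustrative only: `yolov2Detector.mat` is an assumed file containing a trained `yolov2ObjectDetector`, and the 0.5 threshold is arbitrary.

```matlab
% detect_yolo.m -- hypothetical entry-point for a saved YOLO v2 detector.
function [bboxes, scores] = detect_yolo(in) %#codegen
persistent detector;
if isempty(detector)
    detector = coder.loadDeepLearningNetwork('yolov2Detector.mat');
end
% Run detection in the generated code; outputs are fixed-type arrays.
[bboxes, scores] = detector.detect(in, 'Threshold', 0.5);
end
```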
See Also
Functions
Objects
coder.gpuConfig
|coder.CodeConfig
|coder.EmbeddedCodeConfig
|coder.gpuEnvConfig
|coder.CuDNNConfig
|coder.TensorRTConfig
Related Topics
- Pretrained Deep Neural Networks(Deep Learning Toolbox)
- Get Started with Transfer Learning(Deep Learning Toolbox)
- Create Simple Deep Learning Network for Classification(Deep Learning Toolbox)
- Load Pretrained Networks for Code Generation
- Code Generation for Deep Learning Networks by Using cuDNN
- Code Generation for Deep Learning Networks by Using TensorRT
- Code Generation for Deep Learning Networks Targeting ARM Mali GPUs