
traincgf

Conjugate gradient backpropagation with Fletcher-Reeves updates

Syntax

net.trainFcn = 'traincgf'
[net,tr] = train(net,...)

Description

traincgf is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Fletcher-Reeves updates.

net.trainFcn = 'traincgf' sets the network trainFcn property.

[net,tr] = train(net,...) trains the network with traincgf.

Training occurs according to traincgf training parameters, shown here with their default values:

net.trainParam.epochs            1000       Maximum number of epochs to train
net.trainParam.show              25         Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine   false      Generate command-line output
net.trainParam.showWindow        true       Show training GUI
net.trainParam.goal              0          Performance goal
net.trainParam.time              inf        Maximum time to train in seconds
net.trainParam.min_grad          1e-10      Minimum performance gradient
net.trainParam.max_fail          6          Maximum validation failures
net.trainParam.searchFcn         'srchcha'  Name of line search routine to use

Parameters related to line search methods (not all used for all methods):

net.trainParam.scal_tol   20       Divide into delta to determine tolerance for linear search.
net.trainParam.alpha      0.001    Scale factor that determines sufficient reduction in perf
net.trainParam.beta       0.1      Scale factor that determines sufficiently large step size
net.trainParam.delta      0.01     Initial step size in interval location step
net.trainParam.gama       0.1      Parameter to avoid small reductions in performance, usually set to 0.1 (see srch_cha)
net.trainParam.low_lim    0.1      Lower limit on change in step size
net.trainParam.up_lim     0.5      Upper limit on change in step size
net.trainParam.maxstep    100      Maximum step length
net.trainParam.minstep    1.0e-6   Minimum step length
net.trainParam.bmax       26       Maximum step size
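
For example, you can override a few of these defaults before training. This is a minimal sketch assuming the Deep Learning Toolbox is installed; the particular values are arbitrary illustrative choices, not recommendations:

net = feedforwardnet(10,'traincgf');   % 10 hidden neurons, CGF training
net.trainParam.epochs = 500;           % lower the epoch limit
net.trainParam.min_grad = 1e-8;        % stop sooner on a flat gradient
net.trainParam.showWindow = false;     % suppress the training GUI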

Network Use

You can create a standard network that uses traincgf with feedforwardnet or cascadeforwardnet.

To prepare a custom network to be trained with traincgf,

  1. Set net.trainFcn to 'traincgf'. This sets net.trainParam to traincgf's default parameters.

  2. Set net.trainParam properties to desired values.

In either case, calling train with the resulting network trains the network with traincgf.
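
As a minimal sketch of those two steps on an existing network object net (srchbac, one of the toolbox line search routines, is used here only as an example of a non-default value):

net.trainFcn = 'traincgf';             % step 1: resets net.trainParam to traincgf defaults
net.trainParam.searchFcn = 'srchbac';  % step 2: override a parameter, e.g. the line search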

Examples


This example shows how to train a neural network using the traincgf training function.

Here a neural network is trained to predict body fat percentages.

[x,t] = bodyfat_dataset;
net = feedforwardnet(10,'traincgf');
net = train(net,x,t);

Figure Neural Network Training (26-Feb-2022 11:03:19) contains an object of type uigridlayout.

y = net(x);
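
As an optional follow-up not shown in the original example, the trained network's error on the targets can be measured with the perform function:

perf = perform(net,t,y)   % mean squared error of the predictions against the targets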

More About


Conjugate Gradient Algorithms

All of the conjugate gradient algorithms start out by searching in the steepest descent direction (negative of the gradient) on the first iteration.

$p_0 = -g_0$

A line search is then performed to determine the optimal distance to move along the current search direction:

$x_{k+1} = x_k + \alpha_k p_k$

The next search direction is then determined so that it is conjugate to previous search directions. The general procedure for determining the new search direction is to combine the new steepest descent direction with the previous search direction:

$p_k = -g_k + \beta_k p_{k-1}$

The various versions of the conjugate gradient algorithm are distinguished by the manner in which the constant $\beta_k$ is computed. For the Fletcher-Reeves update the procedure is

$\beta_k = \dfrac{g_k^T g_k}{g_{k-1}^T g_{k-1}}$

This is the ratio of the norm squared of the current gradient to the norm squared of the previous gradient.
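
To make the recurrence concrete, here is a self-contained MATLAB sketch of the three formulas above, minimizing a small quadratic with an exact line search. It illustrates only the Fletcher-Reeves updates and is not the traincgf implementation:

A = [3 1; 1 2]; b = [1; 1];        % f(x) = 0.5*x'*A*x - b'*x
x = [0; 0];
g = A*x - b;                       % gradient of f at x
p = -g;                            % p_0 = -g_0
for k = 1:20
    alpha = -(g'*p)/(p'*A*p);      % exact line search step for a quadratic
    x = x + alpha*p;               % x_{k+1} = x_k + alpha_k*p_k
    gNew = A*x - b;
    beta = (gNew'*gNew)/(g'*g);    % ratio of norm squares (Fletcher-Reeves)
    p = -gNew + beta*p;            % p_k = -g_k + beta_k*p_{k-1}
    g = gNew;
    if norm(g) < 1e-10, break, end
end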

See [FlRe64] or [HDB96] for a discussion of the Fletcher-Reeves conjugate gradient algorithm.

The conjugate gradient algorithms are usually much faster than variable learning rate backpropagation, and are sometimes faster than trainrp, although the results vary from one problem to another. The conjugate gradient algorithms require only a little more storage than the simpler algorithms. Therefore, these algorithms are good for networks with a large number of weights.

Try the Neural Network Design demonstration nnd12cg [HDB96] for an illustration of the performance of a conjugate gradient algorithm.

Algorithms

traincgf can train any network as long as its weight, net input, and transfer functions have derivative functions.

Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:

X = X + a*dX;

where dX is the search direction. The parameter a is selected to minimize the performance along the search direction. The line search function searchFcn is used to locate the minimum point. The first search direction is the negative of the gradient of performance. In succeeding iterations the search direction is computed from the new gradient and the previous search direction, according to the formula

dX = -gX + dX_old*Z;

where gX is the gradient. The parameter Z can be computed in several different ways. For the Fletcher-Reeves variation of conjugate gradient it is computed according to

Z = normnew_sqr/norm_sqr;

where norm_sqr is the norm square of the previous gradient and normnew_sqr is the norm square of the current gradient. See page 78 of Scales (Introduction to Non-Linear Optimization) for a more detailed discussion of the algorithm.

Training stops when any of these conditions occurs (the sketch after this list shows how to check which condition ended a run):

  • The maximum number of epochs (repetitions) is reached.

  • The maximum amount of time is exceeded.

  • Performance is minimized to the goal.

  • The performance gradient falls below min_grad.

  • Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).
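
A minimal sketch of checking the stop condition, assuming the x and t data from the example above; tr is the training record returned by train, and tr.stop is its stop-reason field:

[net,tr] = train(net,x,t);
tr.stop          % e.g. 'Maximum epoch reached.'
tr.num_epochs    % number of epochs actually run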

References

Scales, L.E., Introduction to Non-Linear Optimization, New York, Springer-Verlag, 1985

Version History

Introduced before R2006a

See Also
