traincgf
Conjugate gradient backpropagation with Fletcher-Reeves updates
Syntax
net.trainFcn = 'traincgf'
[net,tr] = train(net,...)
Description
traincgf is a network training function that updates weight and bias values according to conjugate gradient backpropagation with Fletcher-Reeves updates.
net.trainFcn = 'traincgf' sets the network trainFcn property.
[net,tr] = train(net,...) trains the network with traincgf.
Training occurs according to traincgf training parameters, shown here with their default values:
net.trainParam.epochs | 1000 | Maximum number of epochs to train
net.trainParam.show | 25 | Epochs between displays (NaN for no displays)
net.trainParam.showCommandLine | false | Generate command-line output
net.trainParam.showWindow | true | Show training GUI
net.trainParam.goal | 0 | Performance goal
net.trainParam.time | inf | Maximum time to train in seconds
net.trainParam.min_grad | 1e-10 | Minimum performance gradient
net.trainParam.max_fail | 6 | Maximum validation failures
net.trainParam.searchFcn | 'srchcha' | Name of line search routine to use
Parameters related to line search methods (not all used for all methods):
net.trainParam.scal_tol | 20 | Divide into delta to determine tolerance for linear search
net.trainParam.alpha | 0.001 | Scale factor that determines sufficient reduction in performance
net.trainParam.beta | 0.1 | Scale factor that determines sufficiently large step size
net.trainParam.delta | 0.01 | Initial step size in interval location step
net.trainParam.gama | 0.1 | Parameter to avoid small reductions in performance, usually set to 0.1
net.trainParam.low_lim | 0.1 | Lower limit on change in step size
net.trainParam.up_lim | 0.5 | Upper limit on change in step size
net.trainParam.maxstep | 100 | Maximum step length
net.trainParam.minstep | 1.0e-6 | Minimum step length
net.trainParam.bmax | 26 | Maximum step size
Network Use
You can create a standard network that uses traincgf with feedforwardnet or cascadeforwardnet.
To prepare a custom network to be trained with traincgf,
1. Set net.trainFcn to 'traincgf'. This sets net.trainParam to traincgf's default parameters.
2. Set net.trainParam properties to desired values.
In either case, calling train with the resulting network trains the network with traincgf.
Algorithms
traincgf can train any network as long as its weight, net input, and transfer functions have derivative functions.
Backpropagation is used to calculate derivatives of performance perf with respect to the weight and bias variables X. Each variable is adjusted according to the following:
X = X + a*dX;
where dX is the search direction. The parameter a is selected to minimize the performance along the search direction. The line search function searchFcn is used to locate the minimum point. The first search direction is the negative of the gradient of performance. In succeeding iterations the search direction is computed from the new gradient and the previous search direction, according to the formula
dX = -gX + dX_old*Z;
where gX is the gradient. The parameter Z can be computed in several different ways. For the Fletcher-Reeves variation of conjugate gradient, it is computed according to

Z = normnew_sqr/norm_sqr;

where norm_sqr is the norm square of the previous gradient and normnew_sqr is the norm square of the current gradient. See page 78 of Scales (Introduction to Non-Linear Optimization) for a more detailed discussion of the algorithm.
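The update rules above can be sketched in a short, self-contained Python example. This is an illustrative reimplementation, not the toolbox code: grad_fn stands in for the gradient obtained by backpropagation, and line_search stands in for searchFcn. The quadratic test problem and its exact line-search step are assumptions added for demonstration.

```python
import numpy as np

def fletcher_reeves_cg(grad_fn, line_search, x, epochs=1000, min_grad=1e-10):
    """Conjugate gradient minimization with Fletcher-Reeves updates.

    Illustrative sketch: grad_fn plays the role of backpropagation (gX)
    and line_search the role of searchFcn (choosing the step a).
    """
    gX = grad_fn(x)
    dX = -gX                           # first direction: negative gradient
    norm_sqr = gX @ gX                 # norm square of the current gradient
    for _ in range(epochs):
        if np.sqrt(norm_sqr) < min_grad:   # min_grad stopping test
            break
        a = line_search(x, dX)         # step that minimizes perf along dX
        x = x + a * dX                 # X = X + a*dX
        gX = grad_fn(x)
        normnew_sqr = gX @ gX
        Z = normnew_sqr / norm_sqr     # Fletcher-Reeves: Z = normnew_sqr/norm_sqr
        dX = -gX + dX * Z              # dX = -gX + dX_old*Z
        norm_sqr = normnew_sqr
    return x

# Usage on a toy quadratic perf(x) = 0.5*x'Ax - b'x, whose minimum solves Ax = b;
# for a quadratic the exact line-search step is -(gX'dX)/(dX'A dX).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
exact_step = lambda x, dX: -(grad(x) @ dX) / (dX @ A @ dX)
x_min = fletcher_reeves_cg(grad, exact_step, np.zeros(2))
```

With an exact line search on an n-dimensional quadratic, conjugate gradient reaches the minimum in at most n iterations, so x_min here equals the solution of Ax = b after two steps.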
Training stops when any of these conditions occurs:
- The maximum number of epochs (repetitions) is reached.
- The maximum amount of time is exceeded.
- Performance is minimized to the goal.
- The performance gradient falls below min_grad.
- Validation performance (validation error) has increased more than max_fail times since the last time it decreased (when using validation).
References
Scales, L.E., Introduction to Non-Linear Optimization, New York: Springer-Verlag, 1985.
See Also
traingdm | traingda | traingdx | trainlm | traincgb | trainscg | traincgp | trainoss | trainbfg