MIMO Neural Network Performance

Ji Hyeon Cho on 23 Jun 2021
Edited: Ji Hyeon Cho on 23 Jun 2021
Hi, I am developing a MIMO neural network for integrated model-predictive control, and I have some problems with the model.
First, I cannot optimize all of the outputs of the MIMO neural network. Even though I normalized all inputs and targets, the performance on each output does not improve. I even made the initialization functions of all layerWeights, inputWeights, and biases the same, and I set the performance normalization parameter to 'standard'.
Second, I therefore tried copying the weights and biases from MISO neural networks into the MIMO neural network, but the performance then differs for each target. (After copying, I did not retrain the MIMO network.)
Can anyone solve these problems?
Here is example code 1:
rng(0,'twister')
x1 = [linspace(1,10,100); linspace(11,20,100)];
x2 = [linspace(11,20,100); linspace(21,30,100)];
t1 = linspace(101,200,100);
t2 = linspace(201,300,100)*100;
% Min-max normalize each input row to [0, 1]
for i = 1:2
    x1(i,:) = (x1(i,:) - min(x1(i,:)))/(max(x1(i,:)) - min(x1(i,:)));
    x2(i,:) = (x2(i,:) - min(x2(i,:)))/(max(x2(i,:)) - min(x2(i,:)));
end
q = linspace(0.001,0.01,10);  % learning rates to sweep
r = linspace(0.001,0.1,10);   % momentum constants to sweep
for i = 1:10
    for s = 1:10
        net = fitnet(10,'trainscg');
        net.performParam.normalization = 'standard';
        net.numLayers = 4;
        net.layerConnect(4,3) = 1;
        net.layers{3}.transferFcn = 'tansig';
        net.layers{3}.initFcn = 'initnw';
        net.layers{4}.initFcn = 'initnw';
        net.layers{3}.size = 10;
        net.biasConnect(3) = 1;
        net.biasConnect(4) = 1;
        net.numInputs = 2;
        net.outputConnect(4) = 1;
        net.inputConnect(3,2) = 1;
        net.inputConnect(3,1) = 1;
        net.divideFcn = '';
        net.inputs{1}.processFcns = {};
        net.inputs{2}.processFcns = {};
        net.outputs{2}.processFcns = {};
        net.outputs{4}.processFcns = {};
        net.layerWeights{4,3}.learnFcn = 'learngdm';
        net.inputWeights{3,2}.learnFcn = 'learngdm';
        net.inputWeights{3,1}.learnFcn = 'learngdm';
        net.inputWeights{1,1}.learnParam.lr = q(i);
        net.inputWeights{1,1}.learnParam.mc = r(s);
        net.inputWeights{3,2}.learnParam.lr = q(i);
        net.inputWeights{3,2}.learnParam.mc = r(s);
        net.layerWeights{4,3}.learnParam.lr = q(i);
        net.layerWeights{4,3}.learnParam.mc = r(s);
        net.layerWeights{2,1}.learnParam.lr = q(i);
        net.layerWeights{2,1}.learnParam.mc = r(s);
        net.biases{3}.learnFcn = 'learngdm';
        net.biases{4}.learnFcn = 'learngdm';
        net.biases{1}.learnParam.lr = q(i);
        net.biases{1}.learnParam.mc = r(s);
        net.biases{2}.learnParam.lr = q(i);
        net.biases{2}.learnParam.mc = r(s);
        net.biases{3}.learnParam.lr = q(i);
        net.biases{4}.learnParam.lr = q(i);
        net.biases{3}.learnParam.mc = r(s);
        net.biases{4}.learnParam.mc = r(s);
        net = train(net,{x1; x2},{t1; t2});
        p = net({x1; x2});
        % NRMSE (%) per output: RMSE normalized by the mean target
        first_error1(i,s) = sqrt(mean((t1 - p{1,:}).^2))/mean(t1)*100;
        first_error2(i,s) = sqrt(mean((t2 - p{2,:}).^2))/mean(t2)*100;
        disp(first_error1(i,s))
        disp(first_error2(i,s))
    end
end
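For reference, the row-wise min-max normalization and the percent-NRMSE metric used in the loop above can be sketched language-independently. This is a minimal NumPy sketch (function names `minmax_normalize` and `nrmse_percent` are my own, not from the original code):

```python
import numpy as np

def minmax_normalize(x):
    """Scale each row to [0, 1], mirroring the MATLAB for-loop above."""
    x = np.asarray(x, dtype=float)
    mn = x.min(axis=1, keepdims=True)
    mx = x.max(axis=1, keepdims=True)
    return (x - mn) / (mx - mn)

def nrmse_percent(t, p):
    """Root-mean-square error normalized by the mean target, in percent."""
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    return np.sqrt(np.mean((t - p) ** 2)) / np.mean(t) * 100.0

# Same inputs as the MATLAB example
x1 = np.vstack([np.linspace(1, 10, 100), np.linspace(11, 20, 100)])
x1n = minmax_normalize(x1)
print(x1n.min(), x1n.max())  # each row now spans exactly [0, 1]
```

Note that this metric divides by the mean of the target, so targets with different scales (here t2 is 100x larger than t1) still yield comparable percentages.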
Example code 2:
rng(0,'twister')
x1 = [linspace(1,10,100); linspace(11,20,100)];
x2 = [linspace(11,20,100); linspace(21,30,100)];
t1 = linspace(101,200,100);
t2 = linspace(201,300,100)*100;
% Min-max normalize each input row to [0, 1]
for i = 1:2
    x1(i,:) = (x1(i,:) - min(x1(i,:)))/(max(x1(i,:)) - min(x1(i,:)));
    x2(i,:) = (x2(i,:) - min(x2(i,:)))/(max(x2(i,:)) - min(x2(i,:)));
end
% Train two single-output (MISO) networks separately
net1 = fitnet(10,'trainscg');
net1.divideFcn = '';
net1.inputs{1}.processFcns = {};
net1.outputs{2}.processFcns = {};
net2 = fitnet(10,'trainscg');
net2.numInputs = 2;
net2.inputConnect(1,1) = 1;
net2.inputConnect(1,2) = 1;
net2.divideFcn = '';
net2.inputs{1}.processFcns = {};
net2.inputs{2}.processFcns = {};
net2.outputs{2}.processFcns = {};
net1 = train(net1,x1,t1);
net2 = train(net2,{x1; x2},t2);
p = net1(x1);
net1_error1 = sqrt(mean((t1 - p).^2))/mean(t1)*100;
p = net2({x1; x2});
net2_error2 = sqrt(mean((t2 - cell2mat(p)).^2))/mean(t2)*100;
% Build the MIMO network and copy in the MISO weights
net = fitnet(10,'trainscg');
net.trainParam.showWindow = false;
net.performParam.normalization = 'standard';
net.numLayers = 4;
net.layerConnect(4,3) = 1;
net.layers{3}.transferFcn = 'tansig';
net.layers{3}.initFcn = 'initnw';
net.layers{4}.initFcn = 'initnw';
net.layers{3}.size = 10;
net.biasConnect(3) = 1;
net.biasConnect(4) = 1;
net.numInputs = 2;
net.outputConnect(4) = 1;
net.inputConnect(3,1) = 1;
net.inputConnect(3,2) = 1;
net.divideFcn = '';
net.inputs{1}.processFcns = {};
net.inputs{2}.processFcns = {};
net.outputs{2}.processFcns = {};
net.outputs{4}.processFcns = {};
net.layerWeights{4,3}.learnFcn = 'learngdm';
net.inputWeights{3,2}.learnFcn = 'learngdm';
net.inputWeights{3,2}.learnParam.lr = 0.01;
net.inputWeights{3,2}.learnParam.mc = 0.09;
net.layerWeights{4,3}.learnParam.lr = 0.01;
net.layerWeights{4,3}.learnParam.mc = 0.09;
net.biases{3}.learnFcn = 'learngdm';
net.biases{4}.learnFcn = 'learngdm';
net.biases{1}.learnParam.lr = 0.01;
net.biases{1}.learnParam.mc = 0.09;
net.biases{2}.learnParam.lr = 0.01;
net.biases{2}.learnParam.mc = 0.09;
net.biases{3}.learnParam.lr = 0.01;
net.biases{4}.learnParam.lr = 0.01;
net.biases{3}.learnParam.mc = 0.09;
net.biases{4}.learnParam.mc = 0.09;
net = configure(net,{x1; x2},{t2; t1}); % after connecting the outputs, their order is reversed
net.IW{1,1} = net1.IW{1,1};
net.IW{3,1} = net2.IW{1,1};
net.IW{3,2} = net2.IW{1,2};
net.LW{2,1} = net1.LW{2,1};
net.LW{4,3} = net2.LW{2,1};
net.b(1) = net1.b(1);
net.b(2) = net1.b(2);
net.b(3) = net2.b(1);
net.b(4) = net2.b(2);
p = net({x1; x2});
% Outputs are in reversed order, so p{2,:} corresponds to t1 and p{1,:} to t2
net_error1 = sqrt(mean((t1 - p{2,:}).^2))/mean(t1)*100;
net_error2 = sqrt(mean((t2 - p{1,:}).^2))/mean(t2)*100;
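As a sanity check on the weight-copying idea in example 2: if the combined network keeps the two hidden layers fully disjoint (block-diagonal weights, no cross-connections), its outputs should reproduce each single-output network exactly, even without retraining. The following NumPy sketch (my own illustration, not the Deep Learning Toolbox internals) demonstrates this for two one-hidden-layer tanh networks, so any per-target discrepancy after copying would point to the wiring or normalization settings rather than the copying itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def miso_forward(x, W1, b1, W2, b2):
    """Single-output net: y = W2 @ tanh(W1 @ x + b1) + b2."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

n_in, n_hid = 2, 10
# Random weights standing in for two trained MISO nets
params1 = [rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, 1)),
           rng.standard_normal((1, n_hid)), rng.standard_normal((1, 1))]
params2 = [rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, 1)),
           rng.standard_normal((1, n_hid)), rng.standard_normal((1, 1))]

x = rng.standard_normal((n_in, 5))
y1 = miso_forward(x, *params1)
y2 = miso_forward(x, *params2)

# Combined "MIMO" net: stack the hidden layers, keep them disjoint
# via a block-diagonal output weight matrix.
W1 = np.vstack([params1[0], params2[0]])
b1 = np.vstack([params1[1], params2[1]])
W2 = np.block([[params1[2], np.zeros((1, n_hid))],
               [np.zeros((1, n_hid)), params2[2]]])
b2 = np.vstack([params1[3], params2[3]])
y = W2 @ np.tanh(W1 @ x + b1) + b2

print(np.allclose(y[0:1], y1), np.allclose(y[1:2], y2))  # True True
```

If the MATLAB MIMO network in example 2 gives different per-target errors after copying, it is worth checking whether `configure` reordered the outputs (as the comment in the code notes) and whether both hidden layers really receive the same inputs as their MISO counterparts.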

Answers (0)
