English translation
Learning in such a network proceeds the same way as for perceptrons: example inputs are presented to the network, and if the network computes an output vector that matches the target, nothing is done. If there is an error (a difference between the output and the target), then the weights are adjusted to reduce this error. The trick is to assess the blame for an error and divide it among the contributing weights. In perceptrons, this is easy because there is only one weight between each input and the output. But in multilayer networks, there are many weights connecting each input to an output, and each of these weights contributes to more than one output.
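To make the update loop concrete, here is a minimal sketch of the perceptron-style, error-driven learning the passage describes. It is an illustration only: the names (train_perceptron, learning_rate), the step activation, and the bias input are assumptions, not from the text.

```python
import numpy as np

def step(x):
    # Threshold activation: the perceptron outputs 0 or 1.
    return 1.0 if x >= 0 else 0.0

def train_perceptron(examples, targets, epochs=10, learning_rate=0.1):
    """Error-driven learning as the passage describes: if the computed
    output matches the target, do nothing; on an error, adjust the
    weights to reduce it."""
    weights = np.zeros(examples.shape[1])
    for _ in range(epochs):
        for x, t in zip(examples, targets):
            output = step(weights @ x)
            error = t - output          # Err = T - O
            if error != 0:
                # One weight per input, so assigning blame is easy:
                # each weight is nudged in proportion to its input.
                weights += learning_rate * error * x
    return weights

# Example: learn logical AND; the last input column is a constant bias.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w = train_perceptron(X, y)
```

In a multilayer network this simple credit assignment breaks down, because each weight contributes to many outputs; that is the problem back-propagation addresses next.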
The back-propagation algorithm is a sensible approach to dividing the contribution of each weight. As in the perceptron learning algorithm, we try to minimize the error between each target output and the output actually computed by the network. At the output layer, the weight update rule is very similar to the rule for the perceptron. There are two differences: the activation of the hidden unit a_j is used instead of the input value, and the rule contains a term for the gradient of the activation function. If Err_i is the error (T_i − O_i) at the output node, then the weight update rule for the link from unit j to unit i is
W_{j,i} ← W_{j,i} + α · a_j · Err_i · g′(in_i)
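As a hedged illustration, the sketch below applies this output-layer rule once, assuming a sigmoid activation g (so g′(in) = g(in)(1 − g(in))) and a learning rate alpha; the array names are illustrative, not from the passage.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def output_layer_update(W, a_hidden, target, alpha=0.5):
    """One application of W[j, i] <- W[j, i] + alpha * a_j * Err_i * g'(in_i).

    W        : weights from hidden unit j to output unit i, shape (n_hidden, n_out)
    a_hidden : hidden-layer activations a_j, shape (n_hidden,)
    target   : target output vector T, shape (n_out,)
    """
    in_i = a_hidden @ W              # weighted input in_i to each output unit
    out = sigmoid(in_i)              # O_i = g(in_i)
    err = target - out               # Err_i = T_i - O_i
    delta = err * out * (1 - out)    # Err_i * g'(in_i), since g' = g(1 - g) for sigmoid
    W += alpha * np.outer(a_hidden, delta)  # a_j * delta_i for every link (j, i)
    return W
```

The gradient term g′(in_i) is what distinguishes this from the plain perceptron rule: when an output unit is saturated, g′(in_i) is small and the update shrinks accordingly.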
Answer
I see "BP" here, so as expected this is neural-network material. Don't trust translation software; I retranslated it for you. Terminology: weight → 权重; hidden unit → 隐层. Learning in such a network proceeds the same way as for perceptrons: example inputs are prese...