In this paper, the interconnection weights of an artificial neural network are regarded as generalized spin variables, and the network's learning problem is viewed as an optimization problem in the interconnection-weight space. The continuous-time dynamical equations usually formulated in the configuration space of an artificial neural network are then generalized to its interconnection-weight space, and a Metropolis-like Monte Carlo mechanism is introduced into the system of equations to improve its optimization ability, yielding an evolution-equation learning algorithm for artificial neural networks. The algorithm largely escapes the trap of local extrema and obtains optimal or near-optimal interconnection weights. The paper focuses on single-layer feedback artificial neural networks, takes the energy function in the interconnection-weight configuration space to be a quadratic function, and studies the maximum storage capacity α_c of the networks obtained by this learning algorithm. The results show that when the number of neurons is small, the capacity is close to that of the pseudo-inverse model, and that the storage capacity decreases slowly as the number of neurons increases.
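The scheme described above can be sketched in code. This is a minimal, hypothetical illustration only: it evolves the weight matrix J of a small Hopfield-type network by random perturbations in weight space, scored by an illustrative quadratic energy (penalizing patterns that are not well-aligned fixed points) and filtered by a Metropolis-like acceptance rule with slow annealing. The specific energy function, step size, and temperature schedule are assumptions, not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 20, 5                                 # neurons, stored patterns
xi = rng.choice([-1.0, 1.0], size=(P, N))    # random +/-1 patterns to store

def energy(J):
    # Illustrative quadratic cost in weight space: each pattern should be a
    # stable fixed point, i.e. the local field (J xi^mu)_i should align with
    # xi^mu_i; deviations from unit margin are penalized quadratically.
    h = xi @ J.T          # local fields for each pattern, shape (P, N)
    margins = xi * h      # alignment of field with the stored pattern
    return np.sum((1.0 - margins) ** 2)

J = rng.normal(scale=0.1, size=(N, N))
np.fill_diagonal(J, 0.0)                     # no self-connections

T, step = 1.0, 0.05
E0 = energy(J)                               # energy of the initial weights
E = E0
for t in range(2000):
    # small random perturbation of the weights (a discretized stand-in for
    # the continuous-time evolution in weight space)
    dJ = step * rng.normal(size=(N, N))
    np.fill_diagonal(dJ, 0.0)
    E_new = energy(J + dJ)
    # Metropolis-like acceptance: always go downhill, occasionally uphill,
    # which is what lets the search escape local extrema
    if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
        J, E = J + dJ, E_new
    T *= 0.999                               # slow annealing of the temperature
```

The uphill-acceptance step is the key difference from plain gradient descent in weight space: without it the evolution equations would stop at the first local extremum of the energy.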