Using tools from multivariate function approximation, the essential approximation order of three-layer feedforward artificial neural networks approximating continuous and integrable functions is studied quantitatively. It is proved that, when the activation function satisfies certain conditions, for any continuous or integrable target function a three-layer network with an explicit lower bound on the number of hidden units can be constructed that approximates the target function to arbitrary accuracy. Upper and lower bound estimates and essential approximation order estimates are established for this class of neural network approximation, characterizing the relationship between the approximation performance of the constructed networks and the topology of the hidden layer. In particular, when the target function is a second-order Lipschitz function, the approximation rate of the constructed neural network is determined entirely by the smoothness of the target function. These results provide important theoretical guidance for the concrete construction of feedforward neural networks approximating classes of continuous or integrable functions and for the characterization of their approximation capability.
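To fix the object of study, the following is a minimal illustrative sketch of a three-layer (single hidden layer) feedforward approximant and a Jackson-type error estimate of the kind the abstract refers to; the notation (the constant $C$, the domain $[0,1]^d$, and the second-order modulus of smoothness $\omega_2$) is assumed here for illustration and is not taken from the paper's exact construction or constants:
\[
  N_n(f)(x) \;=\; \sum_{i=1}^{n} c_i\,\sigma\!\big(\langle w_i, x\rangle + b_i\big),
  \qquad x \in [0,1]^d,
\]
\[
  \big\| f - N_n(f) \big\|_{p} \;\le\; C\,\omega_2\!\big(f, n^{-1/d}\big)_{p},
\]
where $\sigma$ is the activation function, $n$ is the number of hidden units, and $\omega_2(f,\cdot)_p$ is the second-order modulus of smoothness in $L^p$. An estimate of this shape makes precise the statement that, for second-order Lipschitz targets, the approximation rate is governed entirely by the smoothness of $f$ and the size $n$ of the hidden layer.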