Convergence properties of trust-region methods for unconstrained nonconvex optimization are considered in the case where information on the objective function's local curvature is incomplete, in the sense that it may be restricted to a fixed set of test directions and may not be available at every iteration. It is shown that convergence to weak local minimizers can still be obtained under some additional but algorithmically realistic conditions. These theoretical results are then applied to recursive multigrid trust-region methods, suggesting a new class of algorithms with guaranteed second-order convergence properties.
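To illustrate the kind of algorithmic setting the abstract describes, the following is a minimal, hypothetical sketch (not the paper's algorithm) of a trust-region iteration in which negative curvature is probed only along a fixed set of test directions via Hessian-vector products, rather than through a full Hessian factorization. All names (`trust_region_incomplete_curvature`, `hess_vec`, `test_dirs`) and parameter choices are assumptions made for this example.

```python
import numpy as np

def trust_region_incomplete_curvature(f, grad, hess_vec, x0, test_dirs,
                                      delta0=1.0, eta=0.1, tol=1e-8, max_iter=200):
    """Illustrative trust-region loop where curvature is tested only along the
    fixed directions in `test_dirs` (hypothetical sketch, not the paper's method).
    `hess_vec(x, v)` is assumed to return the Hessian-vector product at x."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)

        # Probe curvature only along the fixed test directions.
        curvs = [d @ hess_vec(x, d) for d in test_dirs]
        k = int(np.argmin(curvs))
        d_neg = test_dirs[k] if curvs[k] < 0 else None

        # Stop when first-order stationary and no negative curvature is detected
        # along any test direction (a "weak" second-order condition).
        if np.linalg.norm(g) < tol and d_neg is None:
            break

        # Candidate steps: a Cauchy step along -g and, if available,
        # a step along the detected negative-curvature direction.
        steps = []
        if np.linalg.norm(g) > 0:
            gBg = g @ hess_vec(x, g)
            t = delta / np.linalg.norm(g)
            if gBg > 0:
                t = min(t, (g @ g) / gBg)
            steps.append(-t * g)
        if d_neg is not None:
            d = d_neg if g @ d_neg <= 0 else -d_neg
            steps.append(delta * d / np.linalg.norm(d))

        # Choose the candidate with the largest predicted decrease of the
        # quadratic model m(s) = f(x) + g's + 0.5 s'Bs.
        def model_decrease(s):
            return -(g @ s + 0.5 * (s @ hess_vec(x, s)))
        s = max(steps, key=model_decrease)

        # Standard trust-region acceptance test and radius update.
        rho = (f(x) - f(x + s)) / max(model_decrease(s), 1e-16)
        if rho >= eta:
            x = x + s
            if rho > 0.75:
                delta *= 2.0
        else:
            delta *= 0.5
    return x
```

As a usage sketch, running this on a simple saddle such as f(x) = x_1^2 - x_2^2 with the coordinate vectors as test directions lets the iteration detect the negative curvature along the second coordinate and move away from the saddle point, which is the behavior the incomplete-curvature analysis is meant to guarantee.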