This article concentrates on ground-vision-guided autonomous landing of a fixed-wing Unmanned Aerial Vehicle (UAV) in Global Navigation Satellite System (GNSS)-denied environments. Cascaded deep learning models are developed and employed for image detection and for improving its accuracy during UAV auto-landing, respectively. Firstly, we design a target bounding box detection network, BboxLocate-Net, to extract the image coordinates of the flying vehicle. Secondly, the detected coordinates are fused into spatial localization with an extended Kalman filter estimator. Thirdly, a point regression network, PointRefine-Net, is developed to improve detection accuracy whenever the flying vehicle's motion continuity check fails. The proposed approach accomplishes a closed-loop mutual inspection of spatial positioning and image detection, and automatically corrects inaccurate coordinates within a certain range. Experimental results demonstrate that our method outperforms previous works in terms of accuracy, robustness, and real-time performance. Specifically, the newly developed BboxLocate-Net reaches over 500 fps, almost five times the published state-of-the-art in this field, with comparable localization accuracy.
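To make the closed-loop structure of the pipeline concrete, the following is a minimal sketch, not the authors' implementation. The functions `detect_bbox` and `refine_point` are hypothetical stand-ins for BboxLocate-Net and PointRefine-Net, and the filter here is simplified to a constant-velocity model over image coordinates rather than the paper's full spatial-localization EKF. Only the predict/update equations and the innovation-based motion-continuity gate are spelled out.

```python
"""Sketch of a detect-fuse-refine loop: raw detections are gated against a
constant-velocity filter prediction; detections that break motion continuity
are re-refined before being fused. All network interfaces are placeholders."""
import numpy as np


class ConstantVelocityFilter:
    """Kalman filter with state [u, v, u_dot, v_dot] over image coordinates."""

    def __init__(self, dt=1.0 / 30.0, q=1.0, r=4.0):
        self.x = np.zeros(4)                                  # state estimate
        self.P = np.eye(4) * 100.0                            # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                      # constant-velocity motion model
        self.H = np.eye(2, 4)                                 # observe (u, v) only
        self.Q = np.eye(4) * q                                # process noise
        self.R = np.eye(2) * r                                # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def gate_distance(self, z):
        """Squared Mahalanobis distance of the innovation; large values
        indicate the detection violates motion continuity."""
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        return float(y @ np.linalg.solve(S, y))

    def update(self, z):
        y = z - self.H @ self.x                               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)              # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P


def track_frame(kf, frame, detect_bbox, refine_point, gate=9.21):
    """One closed-loop step: detect, check continuity, refine if needed, fuse."""
    kf.predict()
    u, v = detect_bbox(frame)                                 # BboxLocate-Net stand-in
    z = np.array([u, v], dtype=float)
    if kf.gate_distance(z) > gate:                            # 99% chi-square gate, 2 dof
        u, v = refine_point(frame, (u, v))                    # PointRefine-Net stand-in
        z = np.array([u, v], dtype=float)
    kf.update(z)
    return kf.x[:2]
```

The gating threshold, frame rate, and noise parameters above are illustrative assumptions; in the paper the continuity check and refinement operate within the full spatial-positioning loop rather than purely in the image plane.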