With the rapid advancement of unmanned aerial vehicle (UAV) and unmanned surface vessel (USV) technologies, their collaborative application is gradually becoming an indispensable component of future maritime military missions and smart transportation. However, given the highly dynamic motion of USVs under varying sea conditions, traditional sensing methods such as radar meet the accuracy and real-time requirements but are costly, while conventional vision-based approaches are inexpensive and easy to implement but suffer from lower stability. Based on the pinhole camera model, this paper proposes a monocular distance-measurement method for a UAV-USV system. Multiple landmarks in the landing area are identified and ranged, and a plane is then fitted to reconstruct the USV's landing surface, from which the distance between the UAV and the USV is obtained; the more landmarks identified, the lower the probability of algorithm failure. A numerical simulator was built to test the algorithm and to study its accuracy and stability under large-angle rolling of the USV. The results show that the algorithm obtains reasonably accurate distances even while the vessel is rolling, with average relative errors over all test cases of 7.47%, 6.48%, and 8.04% in the X, Y, and Z directions, respectively.
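The landmark-ranging and plane-fitting pipeline summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the camera intrinsics, and the use of an SVD-based least-squares plane fit are assumptions introduced here for clarity.

```python
import numpy as np

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project a pixel to a unit ray in the camera frame (pinhole model)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def landmark_point(u, v, depth, fx, fy, cx, cy):
    """3D position of one landmark, given its pixel and an estimated range."""
    return pixel_to_ray(u, v, fx, fy, cx, cy) * depth

def fit_plane(points):
    """Least-squares plane fit to N>=3 landmark points.

    Returns a unit normal n and offset d such that n . x = d for points x
    on the fitted landing-area plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centred point cloud is the least-squares plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, float(n @ centroid)

def camera_to_plane_distance(n, d):
    """Perpendicular distance from the camera origin (0, 0, 0) to the plane."""
    return abs(d) / np.linalg.norm(n)
```

Because the plane is fitted over all detected landmarks, an error in any single landmark's range is averaged out, which is consistent with the abstract's observation that more landmarks lower the probability of failure.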
2024, 46(16): 81-89. Received: 2023-09-25
DOI:10.3404/j.issn.1672-7649.2024.16.014
CLC number: U664.82
Funding: Hainan Provincial Natural Science Foundation (521QN275); National Natural Science Foundation of China (42206192, 52031006)
Author: DENG Tao (b. 1997), male, master's degree candidate; research interest: intelligent ships