To address the visual positioning problem of underwater vehicles performing autonomous docking and recovery at a deep-sea workstation, a cascaded guidance and positioning strategy based on multiple types of markers is proposed, and an underwater docking guidance and positioning system based on monocular vision is designed. An improved k-means clustering algorithm is adopted for adaptive multi-threshold image segmentation, enabling recognition and positioning of the guide lights and AprilTags preinstalled on the docking station. Pool tests of underwater guidance, positioning, and docking were carried out both with and without natural illumination. Good recognition and positioning performance was obtained despite large numbers of suspended particles in the water and uneven illumination, verifying the feasibility and stability of the proposed method.
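To make the segmentation step concrete, the following is a minimal Python sketch of how k-means clustering of pixel intensities can yield adaptive multi-level thresholds for isolating bright guide lights in an underwater frame. It uses plain OpenCV k-means rather than the paper's improved variant, and the file name, cluster count, and function names are illustrative assumptions, not the authors' implementation.

# Minimal sketch: adaptive multi-threshold segmentation via k-means on pixel
# intensities (assumed setup; not the paper's exact improved algorithm).
import cv2
import numpy as np

def kmeans_thresholds(gray, k=3):
    """Cluster pixel intensities with k-means and return thresholds placed
    midway between adjacent cluster centers."""
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(samples, k, None, criteria, 5,
                               cv2.KMEANS_PP_CENTERS)
    centers = np.sort(centers.flatten())
    # Midpoints between adjacent cluster centers act as adaptive thresholds.
    return [(centers[i] + centers[i + 1]) / 2.0 for i in range(k - 1)]

if __name__ == "__main__":
    # "frame.png" is a placeholder for one camera frame.
    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    thresholds = kmeans_thresholds(gray, k=3)
    # The highest threshold separates the brightest intensity cluster, where
    # the preinstalled guide lights are expected to fall in a dark scene.
    _, light_mask = cv2.threshold(gray, thresholds[-1], 255, cv2.THRESH_BINARY)
    cv2.imwrite("light_mask.png", light_mask)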
2024, 46(12): 77-83    Received: 2023-06-16
DOI:10.3404/j.issn.1672-7649.2024.12.014
CLC number: U674.941
Author biography: NI Tian (b. 1987), male, master's degree, senior engineer; research interests include overall design and control technology for deep-sea operations.