To address the difficulty of monocular-image ranging in the sea-surface navigation scenario of unmanned surface vehicles (USVs), this paper uses the YOLOv8 model to detect surface targets and designs a dual-model ranging algorithm based on a category-prior partition. For small and medium-sized surface targets, an image yaw-angle correction and a sea-sky-line correction are introduced; for cargo-ship targets, a regression strategy based on bounding-box height is adopted. A multi-scene surface-target ranging dataset is constructed to evaluate the ranging algorithm quantitatively. The experimental results show that, for small and medium-sized vessels, introducing the image yaw-angle correction reduces the mean relative ranging error by 3.53%, and introducing the sea-sky-line optimization reduces it by 30.89%; for cargo-ship targets, the ranging error of the fitted regression converges, and in real-vessel tests the method basically meets the requirements of multi-sensor perception fusion for USVs.
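The abstract describes a category-prior split between two ranging strategies: pinhole-geometry ranging with a sea-sky-line correction for small and medium targets, and a bounding-box-height regression for cargo ships. The following is a minimal sketch of such a dispatch, not the authors' implementation; the class names, camera parameters, and regression coefficients are illustrative placeholders assumed for the example.

```python
# Sketch of a category-prior dual-model monocular ranging scheme.
# Assumptions (not from the paper): pinhole camera with known mounting
# height, horizon row from a sea-sky-line detector, and an offline-fitted
# regression d = a / h_box + b for cargo ships.

from dataclasses import dataclass


@dataclass
class Detection:
    cls_name: str   # detector class label, e.g. "buoy", "speedboat", "cargo_ship"
    x1: float       # bounding box in pixels
    y1: float
    x2: float
    y2: float


# --- assumed camera parameters (placeholders) ---
FOCAL_PX = 1200.0     # focal length in pixels
CAM_HEIGHT_M = 2.5    # camera height above the water surface, metres
CY = 540.0            # principal point (vertical), pixels

# --- hypothetical regression coefficients for cargo ships ---
REG_A = 45000.0
REG_B = 5.0

CARGO_CLASSES = {"cargo_ship"}


def range_small_target(det: Detection, horizon_y: float) -> float:
    """Pinhole ranging from the box bottom edge.

    Using the detected sea-sky line (horizon_y, pixels) in place of the
    principal point implicitly compensates camera pitch, which is one way
    to realise a sea-sky-line correction.
    """
    dy = det.y2 - horizon_y          # pixels below the horizon
    if dy <= 0:
        return float("inf")          # box bottom at or above the horizon
    return FOCAL_PX * CAM_HEIGHT_M / dy


def range_cargo_ship(det: Detection) -> float:
    """Regression on bounding-box height: d = a / h + b (fitted offline)."""
    h = max(det.y2 - det.y1, 1.0)
    return REG_A / h + REG_B


def estimate_distance(det: Detection, horizon_y: float = CY) -> float:
    """Dispatch to one of the two ranging models using the class prior."""
    if det.cls_name in CARGO_CLASSES:
        return range_cargo_ship(det)
    return range_small_target(det, horizon_y)


if __name__ == "__main__":
    buoy = Detection("buoy", 600, 500, 660, 560)
    ship = Detection("cargo_ship", 300, 400, 700, 620)
    print(f"buoy       ~ {estimate_distance(buoy, horizon_y=480.0):.1f} m")
    print(f"cargo ship ~ {estimate_distance(ship):.1f} m")
```

Under this split, yaw-angle and sea-sky-line corrections only affect the geometric branch, while the cargo-ship branch depends solely on the fitted box-height regression.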
2024, 46(13): 126-131    Received: 2023-09-05
DOI: 10.3404/j.issn.1672-7649.2024.13.022
CLC number: TP391.41
Funding: Pre-research project (JCKY2021206B015)
About the author: ZHANG Fazhi (1998-), male, master's degree candidate; research interests: pattern recognition and image processing, visual ranging