As a mobile and flexible autonomous surface platform, the unmanned surface vehicle (USV) can replace humans in performing hydrographic surveys, patrols, search and rescue, and other tasks in hazardous or extreme environments, and has attracted a surge of research interest at home and abroad in recent years. Path planning and obstacle avoidance, as prerequisites for autonomous USV operation, have received particular attention. This paper uses the scientometric software CiteSpace to analyze USV path planning and obstacle avoidance technology, systematically summarizes USV perception modules together with the classic and emerging algorithms for path planning and obstacle avoidance, and offers an outlook on future developments in this field.
2023, 45(16): 59–63. Received: 2022-09-18
DOI: 10.3404/j.issn.1672-7649.2023.16.012
CLC number: TP242
Funding: Equipment Pre-research Field Foundation (61403120109)
Author: ZHOU Zhiguo (1977–), male, PhD, associate professor; research interests: intelligent information perception and navigation