Journal of Engineering Research. 2025; 4(2). DOI: 10.12208/j.jer.20250046
School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, Liaoning
*Corresponding author: Liao Mingyu, School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, Liaoning
The chain-formation method is a novel and efficient strategy for cooperative hunting by swarm robots. It is highly flexible and innovative: it requires no advance task assignment, and its active encircling motion significantly reduces the chance of the target escaping. However, the method still has drawbacks; for example, the robots form multiple chains that close on the target from two directions, which complicates the decision-making process. To address these problems, a unidirectional chain-formation hunting strategy based on decreasing recruitment is proposed. First, the multiple chains are reduced to a single chain, and the motion direction of the chain-head robot is unified, which lowers the computational complexity. Next, the number of pursuers required is predicted from the number of targets, and recruitment messages are broadcast, optimizing the allocation of resources. Finally, the robot that most recently joined the chain broadcasts a decreasing recruitment message in real time, and recruited robots can join the chain at any point according to their own positions, which preserves the completeness and flexibility of the hunting method and thus improves hunting efficiency. Simulation results show that the unidirectional chain-formation method not only inherits the advantages of the original chain-formation method but also hunts more efficiently and is simpler to use.
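The decreasing-recruitment idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: the pursuers-per-target constant, the greedy nearest-neighbor recruitment rule, and all coordinates below are assumptions introduced for the example.

```python
import math

# Hypothetical parameter: assumed number of pursuers needed to close a
# chain around one target (the paper's actual value is not given here).
PURSUERS_PER_TARGET = 5

def predict_recruit_count(num_targets):
    """Predict how many pursuers to recruit from the number of targets."""
    return PURSUERS_PER_TARGET * num_targets

def decreasing_recruitment(free_robots, chain_head, needed):
    """Greedy sketch: the newest chain member recruits the free robot
    nearest to it; the advertised demand decreases by one each time a
    robot joins, until it reaches zero or no free robots remain."""
    chain = [chain_head]
    free = list(free_robots)
    while needed > 0 and free:
        tail = chain[-1]  # most recently joined robot issues recruitment
        nearest = min(free, key=lambda r: math.dist(r, tail))
        free.remove(nearest)
        chain.append(nearest)
        needed -= 1  # decreasing recruitment: demand shrinks per join
    return chain

# Example: one target, two pursuers already in place, so 3 more needed.
robots = [(2.0, 1.0), (5.0, 5.0), (1.0, 2.0), (8.0, 0.0)]
chain = decreasing_recruitment(robots, chain_head=(0.0, 0.0),
                               needed=predict_recruit_count(1) - 2)
print(len(chain))  # head robot plus 3 recruits
```

The point of the sketch is that recruitment is local and incremental: each newly joined robot, not a central planner, issues the next (decremented) recruitment message, which is what lets robots join the chain at any time based on their own positions.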