Q-learning intelligent jamming decision algorithm based on the efficient-variance upper confidence bound
Authors: RAO Ning, XU Hua, SONG Bailin

(Information and Navigation College, Air Force Engineering University, Xi’an 710077, China)

Author biographies:

RAO Ning (b. 1997), male, master's degree candidate; XU Hua (b. 1976), male, professor and doctoral supervisor

Corresponding author:

XU Hua, 13720720010@139.com

CLC number:

TN975


Abstract:

To further improve the convergence speed of value-function-based reinforcement learning algorithms for intelligent jamming decision-making, and to enhance the effectiveness of battlefield decisions, an improved Q-learning algorithm for intelligent communication jamming decisions was designed that incorporates the efficient-variance upper confidence bound. On top of the Q-learning framework, the algorithm uses the value variance of effective jamming actions to construct confidence intervals and eliminates low-confidence jamming actions from the jamming action space. This reduces the jammer's unnecessary exploration cost in an unknown environment and speeds up its search over the jamming action space; the values of all jamming actions are updated synchronously, which further accelerates learning of the optimal jamming strategy. For simulation, the jamming decision scenario was modeled as a Markov decision process. The results show that when the communicating party switches communication channels using an evasion strategy unknown to the jammer, the proposed algorithm, with no prior information about the communicating party, converges faster, achieves a higher jamming success rate, and obtains a larger total jamming reward than existing reinforcement-learning-based jamming decision algorithms. The algorithm also applies to "many-to-many" cooperative countermeasure environments, where the action elimination method reduces the dimensionality of the joint jamming action space; under the same experimental conditions, its jamming success rate is more than 50% higher than that of the traditional Q-learning decision algorithm.
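The abstract describes the mechanism only in prose. As a rough illustration, the following is a minimal Python sketch of the core idea, assuming a UCB-V-style variance-based confidence radius and a toy stateless channel-hopping environment; the environment, all constants, and every identifier are illustrative assumptions, not the authors' implementation, and the paper's synchronous update of all action values is omitted for brevity.

```python
import numpy as np

# Minimal sketch (assumptions, not the authors' code): value learning over a
# jamming action space with a variance-aware confidence bound in the spirit
# of UCB-V, plus elimination of low-confidence actions.

rng = np.random.default_rng(0)

N_CHANNELS = 8      # jamming actions: which channel to jam
EPISODES = 5000
C = 1.0             # exploration constant in the confidence radius

# Toy stateless environment: the communicating party picks channels from a
# preference distribution unknown to the jammer; reward 1 on a successful jam.
target_probs = rng.dirichlet(np.ones(N_CHANNELS))

q = np.zeros(N_CHANNELS)                   # value estimate per jamming action
counts = np.zeros(N_CHANNELS, dtype=int)   # pulls per action
m2 = np.zeros(N_CHANNELS)                  # sum of squared deviations (Welford)
active = np.ones(N_CHANNELS, dtype=bool)   # actions not yet eliminated

def radius(a: int, t: int) -> float:
    """Variance-aware confidence radius, UCB-V style:
    sqrt(2 * var * ln t / n) + 3 * C * ln t / n."""
    n = counts[a]
    var = m2[a] / n if n > 1 else 1.0      # optimistic variance until sampled
    return np.sqrt(2.0 * var * np.log(t) / n) + 3.0 * C * np.log(t) / n

for t in range(1, EPISODES + 1):
    untried = np.flatnonzero(active & (counts == 0))
    if untried.size:                       # sample each action once first
        a = int(untried[0])
    else:                                  # optimistic choice among survivors
        rad = np.array([radius(i, t) for i in range(N_CHANNELS)])
        a = int(np.argmax(np.where(active, q + rad, -np.inf)))

    # Environment step: success iff we jammed the channel the target chose.
    reward = float(rng.random() < target_probs[a])

    # Running-average value update; with states this becomes the usual
    # Q-learning temporal-difference update. m2 tracks variance (Welford).
    counts[a] += 1
    delta = reward - q[a]
    q[a] += delta / counts[a]
    m2[a] += delta * (reward - q[a])

    # Action elimination: discard any action whose upper confidence bound
    # falls below the best lower bound, shrinking the search space.
    if counts[active].min() > 0:
        rad = np.array([radius(i, t) for i in range(N_CHANNELS)])
        active &= (q + rad) >= (q - rad)[active].max()

print("surviving actions:", np.flatnonzero(active))
print("estimated best jamming channel:", int(np.argmax(q)))
```

In the "many-to-many" cooperative setting, the same elimination step would presumably be applied to the joint jamming action space, which is where the dimensionality reduction mentioned in the abstract would come from.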

Cite this article:

RAO Ning, XU Hua, SONG Bailin. Q-learning intelligent jamming decision algorithm based on the efficient-variance upper confidence bound[J]. Journal of Harbin Institute of Technology, 2022, 54(5): 162. DOI: 10.11918/202010082

Article history:
  • Received: 2020-10-26
  • Published online: 2022-04-25