Abstract: In research on autonomous combat maneuvering of fighter aircraft based on deep reinforcement learning (DRL), a fighter's autonomous maneuver to the attack area is the precondition for attacking the target effectively. Because the active airspace is large and the fighter's exploration ability is uneven across directions, directly applying DRL to acquire a maneuvering strategy faces a large training interaction space, difficulty in setting the sample distribution in the attack area, and difficulty in achieving training convergence. To address these problems, a dual-network intelligent decision method based on the deep Q-network (DQN) is proposed. In this method, a conical space is set up in front of the fighter to make full use of its forward exploration capability. An angle capture network is established, in which DRL fits the strategy of adjusting the deviation angle to keep the attack area within the conical space, and a distance capture network is established, in which DRL fits the maneuvering strategy that drives the fighter to the attack area. Simulation results show that the traditional DRL method, which uses the fighter's entire active airspace as the interaction space, cannot effectively solve the decision-making problem of maneuvering to the attack area, whereas the dual-network decision method achieved a success rate of 83.2% in 1,000 tests of autonomous maneuvering to the attack area. Therefore, the proposed method effectively solves the decision problem of autonomous fighter maneuvering to the attack area.
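The dual-network routing described in the abstract can be illustrated with a minimal sketch: the state is dispatched to the angle capture network when the attack area lies outside the forward cone, and to the distance capture network once it is inside. The function names, the 30° cone half-angle, and the geometric test below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Assumed half-angle of the conical space in front of the fighter (illustrative value).
CONE_HALF_ANGLE = np.deg2rad(30.0)

def deviation_angle(position, heading, target):
    """Angle between the fighter's heading vector and the line of sight to the attack area."""
    los = np.asarray(target, float) - np.asarray(position, float)
    los /= np.linalg.norm(los)
    h = np.asarray(heading, float)
    h /= np.linalg.norm(h)
    return np.arccos(np.clip(np.dot(h, los), -1.0, 1.0))

def select_network(position, heading, target):
    """Route the current state to one of the two DQN policies (hypothetical dispatcher)."""
    if deviation_angle(position, heading, target) > CONE_HALF_ANGLE:
        # Attack area is outside the cone: adjust the deviation angle first.
        return "angle_capture"
    # Attack area is inside the cone: maneuver to close the distance.
    return "distance_capture"
```

For example, a target directly ahead is handled by the distance capture network, while a target abeam of the fighter (deviation angle 90°) is handled by the angle capture network until it re-enters the cone.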