Cite this article: WU Xiaodong, JIN Zhigang, CHEN Xuyang, LIU Kai. Adversarial learning-augmented incremental intrusion detection system[J]. Journal of Harbin Institute of Technology, 2024, 56(9): 31. DOI: 10.11918/202403066
DOI: 10.11918/202403066
CLC number: TP393.08
Document code: A
Fund: National Natural Science Foundation of China (52171337)
|
Adversarial learning-augmented incremental intrusion detection system |
WU Xiaodong, JIN Zhigang, CHEN Xuyang, LIU Kai
(School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China)
Abstract:
To address the issues of overfitting on new classes, limited generalization ability on old classes, and catastrophic forgetting that arise when incremental intrusion detection systems (IDS) learn new attack classes, an adversarial learning-augmented incremental IDS is proposed. During incremental training, the regularizing effect of adversarial samples is leveraged to constrain the model's overfitting on new classes. A dual-distribution simulation buffer that stores both the original and the adversarial samples of old attack classes is designed to strengthen the model's generalization ability on old classes. In addition, a weighted cross-entropy loss is introduced into the training process to alleviate catastrophic forgetting. Experimental results on the CSE-CIC-IDS2018 and UNSW-NB15 datasets show that directly including adversarial samples in training degrades recognition performance, whereas including them as separated data distributions enhances it. Storing adversarial samples in the buffer effectively suppresses the loss of the model's generalization ability on old classes, and the learning-weight adjustment performed by the weighted cross-entropy loss alleviates the catastrophic forgetting caused by the imbalance between the new classes and the data held in the buffer. The proposed method offers a viable strategy for detecting real attacks in dynamic and complex networks and has substantial practical applicability.
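The abstract does not give the exact form of the weighted cross-entropy loss. A minimal NumPy sketch of the underlying idea, assuming inverse-frequency class weights computed over the mixed training batch (many new-class samples plus a few buffered old-class samples), so that the scarce buffered classes receive larger learning weights:

```python
import numpy as np

def class_weights(labels, n_classes):
    # Inverse-frequency weighting (an assumption, not the paper's formula):
    # rarer classes (e.g. buffered old-class samples) get larger weights,
    # counteracting the imbalance between new-class data and the buffer.
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0  # guard against division by zero
    return counts.sum() / (n_classes * counts)

def weighted_cross_entropy(logits, labels, weights):
    # Numerically stable log-softmax, then per-sample cross-entropy
    # scaled by the weight of each sample's class.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return float((weights[labels] * per_sample).mean())

# Usage: a batch dominated by a new class (0) with few buffered
# old-class samples (1); the old class is weighted 4x more heavily.
labels = np.array([0] * 8 + [1] * 2)
w = class_weights(labels, n_classes=2)
loss = weighted_cross_entropy(np.zeros((10, 2)), labels, w)
```

With this normalization the weights average to 1 over the batch, so the loss scale stays comparable to an unweighted cross-entropy while the gradient contribution of the rare old classes is amplified.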
Key words: intrusion detection; deep learning; incremental learning; adversarial learning; catastrophic forgetting