Supervising authority: Ministry of Industry and Information Technology of the People's Republic of China
Sponsor: Harbin Institute of Technology    Editor-in-Chief: LI Longqiu    ISSN 0367-6234    CN 23-1235/T

Cite this article: HU Xunyin, GUAN Yepeng. 3D-LCRN based Video Abnormal Behavior Recognition[J]. Journal of Harbin Institute of Technology, 2019, 51(11): 183. DOI: 10.11918/j.issn.0367-6234.201812005
Video Abnormal Behavior Recognition Method Based on 3D-LCRN
HU Xunyin1, GUAN Yepeng1,2
(1. School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; 2. Key Laboratory of Advanced Display and System Application (Shanghai University), Ministry of Education, Shanghai 200072, China)
Abstract:
Automatic and accurate recognition of abnormal behavior in surveillance video has broad application prospects in the security field. This paper proposes a video abnormal behavior recognition method based on a 3D-LCRN (3D Long-short-term Convolutional Recurrent Network) visual temporal model. First, background modeling is performed on the basis of the structural similarity between video frames, combined with illumination sensing and illumination compensation mechanisms, to obtain a corrected optical flow field and a corrected motion history image that are insensitive to abrupt illumination changes and background motion. Meanwhile, to address the imbalance between abnormal and normal behavior video data, a three-channel corrected optical flow motion history image (COFMHI) is computed, and visual word patches are randomly extracted and clustered, expanding the samples in both quantity and dimensionality and fully capturing their differential and integral motion information. On this basis, a 3D-CNN deep learning network is applied to the COFMHI to learn local short-term spatio-temporal features, combined with an LSTM network weighted by learnable contribution factors to suppress irrelevant, redundant, and confusing video clips, further extracting multi-level spatio-temporal features from short-term to long-term and from local to global for abnormal behavior recognition. Objective quantitative comparisons with similar methods show that the proposed method achieves excellent abnormal behavior recognition performance in complex scenes with abrupt illumination changes and background motion, further demonstrating that the method is effective and feasible.
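A minimal, hypothetical sketch of assembling such a three-channel flow/motion-history image with OpenCV and NumPy is given below. Plain Farneback optical flow and frame differencing stand in for the paper's corrected optical flow field and corrected motion history image; the structural-similarity background modeling and illumination sensing/compensation steps are not reproduced, and all names and parameter values are illustrative.

```python
# Hypothetical sketch of a three-channel flow / motion-history image (COFMHI-like).
# The paper's corrected flow and corrected MHI are replaced by plain Farneback flow
# and a frame-difference motion history; parameters below are assumed, not published.
import cv2
import numpy as np

MHI_DURATION = 0.5  # seconds of motion history to keep (assumed value)

def _scale(channel):
    """Min-max scale one channel to the 0..255 range."""
    lo, hi = float(channel.min()), float(channel.max())
    return (channel - lo) * (255.0 / (hi - lo)) if hi > lo else np.zeros_like(channel)

def flow_mhi_image(prev_gray, cur_gray, mhi, timestamp):
    """Return an H x W x 3 uint8 image: [flow_x, flow_y, motion history].

    `mhi` is a float32 array of the frame size kept by the caller across frames;
    `timestamp` is the current time in seconds.
    """
    # Dense optical flow between consecutive grayscale frames (two channels).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Crude motion mask from frame differencing; in the paper this role is played
    # by the structural-similarity, illumination-compensated background model.
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 1, cv2.THRESH_BINARY)
    # Update the motion history image: refresh moving pixels, drop stale ones.
    mhi[mask == 1] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0
    # Stack flow x, flow y, and motion history into a three-channel image.
    return np.dstack([_scale(flow[..., 0]),
                      _scale(flow[..., 1]),
                      _scale(mhi)]).astype(np.uint8)
```

The three channels are min-max scaled so they can be stacked like an RGB image and fed to a 3D-CNN as described in the abstract.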
Keywords: corrected optical flow motion history image; sample expansion; 3D-LCRN; 3D-CNN; LSTM; abnormal behavior recognition
DOI:10.11918/j.issn.0367-6234.201812005
CLC number: TP391.7
Document code: A
Fund project:
3D-LCRN based Video Abnormal Behavior Recognition
HU Xunyin1,GUAN Yepeng1,2
(1.School of Communication & Information Engineering, Shanghai University, Shanghai 200444, China; 2.Key Laboratory of Advanced Display and System Application (Shanghai University), Ministry of Education, Shanghai 200072, China)
Abstract:
Automatic anomaly recognition in surveillance videos is a crucial issue for public security. A 3D-LCRN visual time-series model was proposed for abnormal behavior recognition in surveillance video. Firstly, a structural-similarity background modeling method was proposed to obtain a corrected optical flow field and a corrected motion history image, which are insensitive to illumination variation and background motion in complex scenes. Secondly, a new sample expansion method was proposed to solve the imbalance between normal and abnormal training samples, enriching the spatial and temporal information of the samples in both dimensionality and quantity. In dimensionality, the method stacked the corrected optical flow and the corrected motion history image to generate the corrected optical flow motion history image (COFMHI). In quantity, the COFMHI was randomly cropped and clustered into central visual words by K-means. Finally, the COFMHI was used as the input of a 3D-CNN to extract local short-term spatial-temporal features of behavior. To suppress irrelevant, redundant, and confusing video clips, an LSTM weighted by learnable contribution factors was used to further extract global long-term spatial-temporal features for abnormal behavior recognition. Through the 3D-LCRN, abundant spatial-temporal features were extracted from local to global and from short-term to long-term levels. Experimental results show that the proposed method achieves excellent abnormal behavior recognition performance in complex scenes with illumination variation and background motion, in comparison with state-of-the-art methods.
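As a rough, non-authoritative sketch of the 3D-LCRN pipeline summarized above, the PyTorch snippet below runs a small 3D-CNN over each COFMHI clip, feeds the per-clip features to an LSTM, and weights the LSTM outputs with learnable contribution factors before classification. The layer sizes, clip layout, and the softmax form of the contribution factors are assumptions rather than the authors' implementation.

```python
# Hypothetical PyTorch sketch of a 3D-CNN + contribution-factor-weighted LSTM.
# Architecture details (channel counts, hidden size, weighting scheme) are assumed.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Tiny 3D-CNN mapping a COFMHI clip (3 x T x H x W) to a feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, clip):                 # clip: (B, 3, T, H, W)
        return self.fc(self.features(clip).flatten(1))

class LCRN(nn.Module):
    """3D-CNN per clip, LSTM over clips, learnable contribution weights over time."""
    def __init__(self, feat_dim=256, hidden=128, num_classes=2):
        super().__init__()
        self.cnn = Simple3DCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)    # learnable contribution factor per clip
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clips):                # clips: (B, N, 3, T, H, W)
        b, n = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, n, -1)
        out, _ = self.lstm(feats)            # (B, N, hidden)
        w = torch.softmax(self.score(out), dim=1)   # down-weights irrelevant clips
        return self.classifier((w * out).sum(dim=1))
```

A softmax over per-clip scores is one simple way to let the network suppress irrelevant, redundant, or confusing clips, which is the role the abstract assigns to the learnable contribution factors.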
Key words: corrected optical flow motion history image; sample expansion; 3D-LCRN; 3D-CNN; LSTM; abnormal behavior recognition
