Cite this article: LI Jiao, YANG Yanchun, DANG Jianwu, WANG Yangping. NSST and guided filtering for multi-focus image fusion algorithm[J]. Journal of Harbin Institute of Technology, 2018, 50(11): 145. DOI: 10.11918/j.issn.0367-6234.201805006
NSST and guided filtering for multi-focus image fusion algorithm

LI Jiao1,2, YANG Yanchun1,2, DANG Jianwu1,2, WANG Yangping1,2

(1. School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China; 2. Gansu Provincial Engineering Research Center for Artificial Intelligence and Graphics & Image Processing, Lanzhou Jiaotong University, Lanzhou 730070, China)
Abstract:
To further improve the contrast and sharpness of the fused image, a multi-focus image fusion algorithm combining the non-subsampled shearlet transform (NSST) with guided filtering is proposed. First, the multi-focus source images are decomposed into multiple scales and directions by the NSST. For the low-frequency sub-band coefficients, initial fusion weights are constructed by a weighted mapping of the local-region sum-modified-Laplacian (SML), and these weights are then refined with a guided filter, yielding a guided-filtering weighted fusion rule based on the local-region SML. For the high-frequency sub-band coefficients, in line with the characteristics of the human visual system, initial fusion weights are constructed from saliency information, the local-region average gradient, edge information, and the local-region SML, and the guided filter is again used to refine them, yielding a guided-filtering weighted fusion rule based on human visual characteristics. Finally, the fused image is obtained by the inverse NSST. Simulation results on four groups of multi-focus source images show that, in both subjective and objective evaluation, the proposed algorithm preserves the edge contours, details, and textures of the source images better than the four fusion algorithms used for comparison, loses no detail information, and improves the contrast and sharpness of the fused image.
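To make the low-frequency rule concrete, the following Python sketch illustrates the general construction only; it is not the authors' implementation, and the function names, window size, guided-filter radius r, regularization eps, and the binary initial map are illustrative assumptions. It computes a local-region sum-modified-Laplacian (SML) focus measure for each low-frequency band, derives an initial weight map from the pointwise comparison, refines that map with a guided filter so the weights stay spatially consistent and edge-aligned, and blends the two bands with the normalized weights.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(band, win=3):
    """Local-region sum-modified-Laplacian (SML) focus measure of a 2-D band."""
    b = band.astype(np.float64)
    # Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|
    ml = (np.abs(2 * b - np.roll(b, 1, axis=0) - np.roll(b, -1, axis=0)) +
          np.abs(2 * b - np.roll(b, 1, axis=1) - np.roll(b, -1, axis=1)))
    # Aggregate over a local window (mean differs from the sum only by a constant factor).
    return uniform_filter(ml, size=win)

def guided_filter(guide, src, r=8, eps=1e-3):
    """Plain single-channel guided filter (He et al.) built from box means."""
    I, p = guide.astype(np.float64), src.astype(np.float64)
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def fuse_low_frequency(low_a, low_b, r=8, eps=1e-3):
    """Fuse two low-frequency bands with SML-based weights refined by guided filtering."""
    # Initial (binary) weight map: 1 where band A is locally better focused.
    w_init = (sum_modified_laplacian(low_a) >= sum_modified_laplacian(low_b)).astype(np.float64)
    # Refine the weights with the guided filter, guided by the bands themselves,
    # so the fusion weights follow image structure instead of blocky decisions.
    w_a = np.clip(guided_filter(low_a, w_init, r, eps), 0, None)
    w_b = np.clip(guided_filter(low_b, 1.0 - w_init, r, eps), 0, None)
    s = w_a + w_b + 1e-12
    return (w_a * low_a + w_b * low_b) / s
```

Guiding the filter with the low-frequency band itself is one common choice in guided-filter fusion schemes; the paper may instead use the original source images as guidance.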
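For the high-frequency sub-bands, the abstract builds the initial weights from saliency information, the local-region average gradient, edge information, and the local-region SML, and again refines them with a guided filter. The sketch below mirrors that construction with simple stand-in definitions for each cue; the paper's exact feature formulas, normalizations, combination weights, and filter parameters are not reproduced here, and using the source images as guidance is an assumption. It relies on OpenCV's guided filter from the contrib ximgproc module rather than the hand-rolled version above.

```python
import cv2  # requires opencv-contrib-python for cv2.ximgproc
import numpy as np
from scipy.ndimage import uniform_filter

def visual_feature_energy(band, win=3):
    """Composite activity measure for a high-frequency sub-band.

    Simple stand-ins for the four cues named in the abstract: saliency
    (local contrast of |band|), local-region average gradient, edge
    information (Sobel magnitude), and the sum-modified-Laplacian.
    """
    b = band.astype(np.float32)
    absb = np.abs(b)
    saliency = np.abs(absb - uniform_filter(absb, size=2 * win + 1))  # local contrast
    gx = cv2.Sobel(b, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(b, cv2.CV_32F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)                                 # edge information
    avg_grad = uniform_filter(grad, size=win)                         # local average gradient
    ml = (np.abs(2 * b - np.roll(b, 1, 0) - np.roll(b, -1, 0)) +
          np.abs(2 * b - np.roll(b, 1, 1) - np.roll(b, -1, 1)))
    sml = uniform_filter(ml, size=win)                                # sum-modified-Laplacian
    return saliency + avg_grad + grad + sml

def fuse_high_frequency(high_a, high_b, src_a, src_b, r=4, eps=1e-4):
    """Fuse one pair of high-frequency sub-bands (applied per direction and scale)
    with feature-based weights refined by guided filtering, guided by the sources."""
    w_init = (visual_feature_energy(high_a) >= visual_feature_energy(high_b)).astype(np.float32)
    w_a = cv2.ximgproc.guidedFilter(src_a.astype(np.float32), w_init, r, eps)
    w_b = cv2.ximgproc.guidedFilter(src_b.astype(np.float32), 1.0 - w_init, r, eps)
    w_a, w_b = np.clip(w_a, 0, None), np.clip(w_b, 0, None)
    s = w_a + w_b + 1e-12
    return (w_a * high_a + w_b * high_b) / s
```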
Keywords: multi-focus image fusion; non-subsampled shearlet transform; human visual characteristics; guided filtering; spatial consistency
DOI: 10.11918/j.issn.0367-6234.201805006
CLC number: TP391
Document code: A
Funding: Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R36); National Natural Science Foundation of China (7, 6, 61462059); Youth Science Foundation of Lanzhou Jiaotong University (2014006)