Bridge-type attention forensics network for image inpainting

(School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China)

CLC Number: TN911.73

    Abstract:

    To enhance the reliability of multimedia information and mitigate the negative social impact of image forgery, there is an urgent need for image inpainting forensics that can detect and locate tampered regions in images. This paper proposes a bridge-type attention forensics network (BAFNet) for image inpainting. The network takes a tampered image as input and outputs the tampered regions end-to-end, adopting an encoder-decoder architecture as its basic framework. First, the encoder employs two backbones, Swin Transformer and RepVGG, to extract multi-domain inpainting features. Then, a bridge-type attention module connects the same-level stages of the two backbones, enhancing the encoder’s modeling capability in both local and global dimensions. Finally, a semantic alignment fusion module is built between the encoder and the decoder to eliminate semantic inconsistencies between the features extracted by the two backbones, thereby improving the network’s forensic performance. Experimental results on different inpainting forensics datasets demonstrate that the proposed model locates inpainted regions more accurately than other mainstream forensic models. In particular, on the challenging DeepFillV2 and Diffusion datasets, BAFNet achieves IoU scores of 91.37% and 82.34%, respectively, improving on the mainstream forensic network MVSS-Net by 8.77% and 10.46% in IoU. In addition, across several experiments, BAFNet achieves a good balance between forensic performance and model complexity.
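    The abstract describes a bridge-type attention module that connects same-level encoder stages of the two backbones. The paper's exact module design is not given here, so the following is only a minimal sketch, assuming a standard cross-attention formulation: tokens from the local (RepVGG-like) branch act as queries against keys/values from the global (Swin-like) branch, with a residual connection. All names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bridge_attention(local_feat, global_feat):
    """Hypothetical bridge between two same-level encoder stages.

    local_feat, global_feat: (N, C) token matrices from the CNN and
    transformer branches at one stage (N tokens, C channels).
    Queries come from the local branch; keys/values from the global
    branch, so global context is injected into local features.
    """
    q, k, v = local_feat, global_feat, global_feat
    scale = np.sqrt(q.shape[-1])
    attn = softmax(q @ k.T / scale, axis=-1)  # (N, N) cross-attention map
    return local_feat + attn @ v              # residual fusion

rng = np.random.default_rng(0)
loc = rng.standard_normal((16, 32))  # 16 tokens, 32 channels
glb = rng.standard_normal((16, 32))
fused = bridge_attention(loc, glb)
print(fused.shape)  # (16, 32)
```

In this sketch the fused output keeps the local branch's shape, so it can replace the local feature map in the next encoder stage; the actual BAFNet module may differ in projection layers, head count, and how the two directions of the bridge are combined.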

History
  • Received: April 01, 2024
  • Online: April 07, 2025