Supervised by: Ministry of Industry and Information Technology of the People's Republic of China. Sponsored by: Harbin Institute of Technology. Editor-in-chief: Yu Zhou. ISSN 1005-9113. CN 23-1378/T.

Related citation: Rawad Melhem, Assef Jafar, Riad Hamadeh. Improving Deep Attractor Network by BGRU and GMM for Speech Separation[J]. Journal of Harbin Institute of Technology (New Series), 2021, 28(3): 90-96. DOI: 10.11916/j.issn.1005-9113.2019044.
Improving Deep Attractor Network by BGRU and GMM for Speech Separation
Authors and affiliations:
  • Rawad Melhem, Higher Institute for Applied Sciences and Technology, Damascus, Syria
  • Assef Jafar, Higher Institute for Applied Sciences and Technology, Damascus, Syria
  • Riad Hamadeh, Higher Institute for Applied Sciences and Technology, Damascus, Syria
Abstract:
Deep Attractor Network (DANet) is a state-of-the-art technique in the speech separation field. It uses Bidirectional Long Short-Term Memory (BLSTM), but the complexity of the DANet model is very high. In this paper, a simplified and powerful DANet model is proposed, using a Bidirectional Gated Recurrent Unit (BGRU) network instead of BLSTM. A Gaussian Mixture Model (GMM) was applied in DANet as the clustering algorithm, instead of k-means, to reduce complexity and increase learning speed and accuracy. The metrics used in this paper are Signal to Distortion Ratio (SDR), Signal to Interference Ratio (SIR), Signal to Artifact Ratio (SAR), and the Perceptual Evaluation of Speech Quality (PESQ) score. Two-speaker mixture datasets were prepared from the TIMIT corpus to evaluate the proposed model; the system achieved 12.3 dB SDR and a PESQ score of 2.94, both better than the original DANet model. Further improvements of 20.7% and 17.9% were obtained in the number of parameters and the training time, respectively. The model was also applied to mixed Arabic speech signals, and the results were better than those for English.
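To make the clustering idea concrete: in a DANet-style pipeline, each time-frequency bin of the mixture spectrogram is mapped by the recurrent network to an embedding vector, and clustering those embeddings yields one "attractor" per speaker, from which soft separation masks are derived. The sketch below is a hypothetical, minimal NumPy illustration of the GMM-based clustering step (spherical covariances, a few EM iterations) that the abstract proposes in place of k-means; the BGRU embedding network itself is elided, and random toy embeddings stand in for its output. The function name `gmm_attractors` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def gmm_attractors(embeddings, n_sources=2, n_iter=20, seed=0):
    """Cluster T-F embedding vectors with a spherical GMM fitted by EM.

    embeddings: (n_bins, emb_dim) array, one embedding per T-F bin.
    Returns (masks, means): masks is (n_bins, n_sources) of posterior
    responsibilities (rows sum to 1, usable as soft separation masks);
    means are the per-source "attractors" in embedding space.
    """
    rng = np.random.default_rng(seed)
    n, d = embeddings.shape
    # Farthest-point initialization of the component means.
    idx = [int(rng.integers(n))]
    for _ in range(n_sources - 1):
        d2 = ((embeddings[:, None, :] - embeddings[idx][None]) ** 2).sum(-1).min(axis=1)
        idx.append(int(d2.argmax()))
    means = embeddings[idx].astype(float).copy()
    var = np.ones(n_sources)                      # spherical variance per component
    weights = np.full(n_sources, 1.0 / n_sources)
    for _ in range(n_iter):
        # E-step: responsibilities under spherical Gaussians (log domain).
        sq = ((embeddings[:, None, :] - means[None]) ** 2).sum(-1)  # (n, k)
        log_p = np.log(weights) - 0.5 * d * np.log(2 * np.pi * var) - sq / (2 * var)
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and per-component variances.
        nk = resp.sum(axis=0) + 1e-10
        means = (resp.T @ embeddings) / nk[:, None]
        sq = ((embeddings[:, None, :] - means[None]) ** 2).sum(-1)
        var = (resp * sq).sum(axis=0) / (d * nk) + 1e-6
        weights = nk / n
    return resp, means

# Toy demo: two well-separated embedding clusters standing in for two speakers.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(-2, 0.3, (100, 20)), rng.normal(2, 0.3, (100, 20))])
masks, attractors = gmm_attractors(emb)
```

Unlike hard k-means assignments, the GMM posteriors give soft masks directly, which is one plausible reading of why the abstract reports faster, more accurate learning with GMM clustering.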
Key words: attractor network; speech separation; gated recurrent units
DOI: 10.11916/j.issn.1005-9113.2019044
CLC number: TN912.3