
Research Article

30 September 2021, pp. 503-514
References
1. P. Smaragdis, "Blind separation of convolved mixtures in the frequency domain," Neurocomput. 22, 21-34 (1998). 10.1016/S0925-2312(98)00047-2
2. T. Kim, H. T. Attias, S.-Y. Lee, and T.-W. Lee, "Blind source separation exploiting higher order frequency dependencies," IEEE Trans. ASLP. 15, 70-79 (2007). 10.1109/TASL.2006.872618
3. N. Ono, "Stable and fast update rules for independent vector analysis based on auxiliary function technique," Proc. IEEE Workshop Appl. Signal Process. Audio Acoust. 189-192 (2011). 10.1109/ASPAA.2011.6082320
4. N. Ono and S. Miyabe, "Auxiliary-function-based independent component analysis for super-Gaussian sources," Proc. Int. Conf. Latent Variable Anal. Signal Separation, 165-172 (2010). 10.1007/978-3-642-15995-4_21
5. D. Kitamura, N. Ono, H. Sawada, H. Kameoka, and H. Saruwatari, "Determined blind source separation unifying independent vector analysis and nonnegative matrix factorization," IEEE/ACM Trans. ASLP. 24, 1626-1641 (2016). 10.1109/TASLP.2016.2577880
6. T. Nakatani, T. Yoshioka, K. Kinoshita, M. Miyoshi, and B. H. Juang, "Blind speech dereverberation with multi-channel linear prediction based on short time Fourier transform representation," Proc. ICASSP. 85-88 (2008). 10.1109/ICASSP.2008.4517552
7. T. Yoshioka and T. Nakatani, "Generalization of multi-channel linear prediction methods for blind MIMO impulse response shortening," IEEE Trans. Audio, Speech Lang. Process. 20, 2707-2720 (2012). 10.1109/TASL.2012.2210879
8. T. Nakatani, C. Boeddeker, K. Kinoshita, R. Ikeshita, M. Delcroix, and R. Haeb-Umbach, "Jointly optimal denoising, dereverberation, and source separation," IEEE/ACM Trans. ASLP. 28, 2276-2282 (2020). 10.1109/TASLP.2020.3013118
9. R. Ikeshita, N. Ito, T. Nakatani, and H. Sawada, "A unifying framework for blind source separation based on a joint diagonalizability constraint," Proc. Eur. Signal Process. Conf. 1-5 (2019). 10.23919/EUSIPCO.2019.8903087
10. R. Ikeshita, N. Ito, T. Nakatani, and H. Sawada, "Independent low-rank matrix analysis with decorrelation learning," Proc. IEEE WASPAA. 288-292 (2019). 10.1109/WASPAA.2019.8937171
11. K. Sekiguchi, Y. Bando, A. Nugraha, K. Yoshii, and T. Kawahara, "Fast multichannel nonnegative matrix factorization with directivity-aware jointly-diagonalizable spatial covariance matrices for blind source separation," IEEE/ACM Trans. ASLP. 28, 2610-2625 (2020). 10.1109/TASLP.2020.3019181
12. M. T. Akhtar, T.-P. Jung, S. Makeig, and G. Cauwenberghs, "Recursive independent component analysis for online blind source separation," Proc. IEEE Int. Symp. Circuits Syst. 2813-2816 (2012). 10.1109/ISCAS.2012.6271896
13. T. Taniguchi, N. Ono, A. Kawamata, and S. Sagayama, "An auxiliary-function approach to online independent vector analysis for real-time blind source separation," Proc. HSCMA. 107-111 (2014). 10.1109/HSCMA.2014.6843261
14. S.-H. Hsu, T. Mullen, T.-P. Jung, and G. Cauwenberghs, "Online recursive independent component analysis for real-time source separation of high-density EEG," Proc. IEEE Eng. Med. Biol. Soc. Conf. 3845-3848 (2014).
15. T. Yoshioka and T. Nakatani, "Dereverberation for reverberation-robust microphone arrays," Proc. Eur. Signal Process. Conf. 1-5 (2013).
16. T. Nakatani and K. Kinoshita, "A unified convolutional beamformer for simultaneous denoising and dereverberation," IEEE Signal Process. Lett. 26, 903-907 (2019). 10.1109/LSP.2019.2911179
17. S.-I. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," Adv. Neural Inf. Process. Syst. 8, 752-763 (1996).
18. M. Woodbury, "Inverting modified matrices," Memorandum Rep. 42, MR0038136 (1950).
19. E. Vincent, R. Gribonval, and C. Févotte, "Performance measurement in blind audio source separation," IEEE Trans. Audio, Speech, and Lang. Process. 14, 1462-1469 (2006). 10.1109/TSA.2005.858005
20. A. W. Rix, J. G. Beerends, M. P. Hollier, and A. P. Hekstra, "Perceptual evaluation of speech quality (PESQ)-A new method for speech quality assessment of telephone networks and codecs," Proc. ICASSP. 2, 749-752 (2001).
21. T. Robinson, J. Fransen, D. Pye, J. Foote, and S. Renals, "WSJCAM0: A British English speech corpus for large vocabulary continuous speech recognition," Proc. ICASSP. 81-84 (1995).
22. J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Am. 65, 943-950 (1979). 10.1121/1.382599
23. J. S. Bradley, H. Sato, and M. Picard, "On the importance of early reflections for speech in rooms," J. Acoust. Soc. Am. 113, 3233-3244 (2003). 10.1121/1.1570439
24. T. Nishiura, Y. Hirano, Y. Denda, and M. Nakayama, "Investigations into early and late reflections on distant-talking speech recognition toward suitable reverberation criteria," Proc. Interspeech, 1082-1085 (2007). 10.21437/Interspeech.2007-109
Information
  • Publisher: The Acoustical Society of Korea
  • Publisher (Korean): 한국음향학회
  • Journal Title: The Journal of the Acoustical Society of Korea
  • Journal Title (Korean): 한국음향학회지
  • Volume: 40
  • No.: 5
  • Pages: 503-514
  • Received Date: 2021-07-20
  • Revised Date: 2021-09-07
  • Accepted Date: 2021-09-14