
2017 Vol. 36, Issue 4
31 July 2017, pp. 267-272
Abstract
Information
  • Publisher: The Acoustical Society of Korea
  • Publisher (Ko): 한국음향학회
  • Journal Title: The Journal of the Acoustical Society of Korea
  • Journal Title (Ko): 한국음향학회지
  • Volume: 36
  • No.: 4
  • Pages: 267-272
  • Received Date: 2017-03-17
  • Accepted Date: 2017-07-31