2017, Vol. 36, Issue 6
30 November 2017. pp. 407-412
References
1. Q. Mao, M. Dong, Z. Huang, and Y. Zhan, “Learning salient features for speech emotion recognition using convolutional neural networks,” IEEE Trans. Multimedia, 16, 2203-2213 (2014).
2. T. N. Sainath, O. Vinyals, A. Senior, and H. Sak, “Convolutional, long short-term memory, fully connected deep neural networks,” in IEEE ICASSP, 4580-4584 (2015).
3. S. Mirsamadi, E. Barsoum, and C. Zhang, “Automatic speech emotion recognition using recurrent neural networks with local attention,” in IEEE ICASSP, 2227-2231 (2017).
4. S. Y. Chang and N. Morgan, “Robust CNN-based speech recognition with Gabor filter kernels,” in Interspeech, 905-909 (2014).
5. D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” arXiv:1409.0473 (2014).
6. S. Haq and P. J. B. Jackson, “Speaker-dependent audio-visual emotion recognition,” in AVSP, 53-58 (2009).
Information
  • Publisher: The Acoustical Society of Korea
  • Publisher (Ko): 한국음향학회
  • Journal Title: The Journal of the Acoustical Society of Korea
  • Journal Title (Ko): 한국음향학회지
  • Volume: 36
  • No.: 6
  • Pages: 407-412
  • Received Date: 2017-06-28
  • Revised Date: 2017-07-21
  • Accepted Date: 2017-11-29