
Research Article

30 November 2024, pp. 630-636
References
1. W. Lin and L. Chao, "Review of studies on emotion recognition and judgment based on physiological signals," Appl. Sci. 13, 2573 (2023). doi:10.3390/app13042573
2. X. Li, Y. Zhang, P. Tiwari, D. Song, B. Hu, M. Yang, Z. Zhao, N. Kumar, and P. Marttinen, "EEG-based emotion recognition: A tutorial and review," ACM Comput. Surv. 55, 1-57 (2022). doi:10.1145/3524499
3. B. Pan, K. Hirota, Z. Jia, and Y. Dai, "A review of multimodal emotion recognition from datasets, preprocessing, features, and fusion methods," Neurocomputing 561, 126866 (2023). doi:10.1016/j.neucom.2023.126866
4. D. Nie, X.-W. Wang, L.-C. Shi, and B.-L. Lu, "EEG-based emotion recognition during watching movies," Proc. 5th Int. IEEE/EMBS Conf. Neural Engineering, 667-670 (2011). doi:10.1109/NER.2011.5910636
5. E. Asutay and D. Västfjäll, "Sound and emotion," in The Oxford Handbook of Sound and Imagination, edited by M. Grimshaw-Aagaard (Oxford University Press, New York, 2019). doi:10.1093/oxfordhb/9780190460242.013.23
6. S. Bai, J. Z. Kolter, and V. Koltun, "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling," arXiv preprint arXiv:1803.01271 (2018).
7. A. Gulati, C.-C. Chiu, J. Qin, J. Yu, N. Parmar, R. Pang, S. Wang, W. Han, Y. Wu, Y. Zhang, and Z. Zhang, "Conformer: Convolution-augmented transformer for speech recognition," arXiv preprint arXiv:2005.08100 (2020). doi:10.21437/Interspeech.2020-3015
8. V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces," J. Neural Eng. 15, 056013 (2018). doi:10.1088/1741-2552/aace8c
9. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556 (2014).
10. S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, "DEAP: A database for emotion analysis; using physiological signals," IEEE Trans. Affective Comput. 3, 18-31 (2011). doi:10.1109/T-AFFC.2011.15
11. J. D. Morris, "Observations: SAM: the Self-Assessment Manikin; an efficient cross-cultural measurement of emotional response," J. Advert. Res. 35, 63-68 (1995).
12. Z. Li, G. Zhang, J. Dang, L. Wang, and J. Wei, "Multi-modal emotion recognition based on deep learning of EEG and audio signals," Proc. Int. Joint Conf. Neural Networks (IJCNN), 1-6 (2021). doi:10.1109/IJCNN52387.2021.9533663
13. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929 (2020).
14. H.-G. Kim, D.-K. Jeong, and J. Y. Kim, "Electroencephalogram-based emotional stress recognition according to audiovisual stimulation using spatial frequency convolutional gated transformer" (in Korean), J. Acoust. Soc. Kr. 41, 518-524 (2022).

Information
  • Publisher: The Acoustical Society of Korea
  • Publisher (Ko): 한국음향학회
  • Journal Title: The Journal of the Acoustical Society of Korea
  • Journal Title (Ko): 한국음향학회지
  • Volume: 43
  • No.: 6
  • Pages: 630-636
  • Received Date: 2024-08-20
  • Revised Date: 2024-09-20
  • Accepted Date: 2024-10-04