
2025, Vol. 44, Issue 5

Research Article

30 September 2025. pp. 489-495
References
1. G. Ciaburro and G. Iannace, "Improving smart cities safety using sound events detection based on deep neural network algorithms," Informatics 7, 23 (2020). doi:10.3390/informatics7030023
2. A. H. Yuh and S. J. Kang, "Real-time sound event classification for human activity of daily living using deep neural network," Proc. IEEE iThings/GreenCom/CPSCom/SmartData/Cybermatics, 83-88 (2021). doi:10.1109/iThings-GreenCom-CPSCom-SmartData-Cybermatics53846.2021.00027
3. H. G. Kim and G. Y. Kim, "Deep neural network-based indoor emergency awareness using contextual information from sound, human activity, and indoor position on mobile device," IEEE Trans. Consum. Electron. 66, 271-278 (2020). doi:10.1109/TCE.2020.3015197
4. P. Giannakopoulos, A. Pikrakis, and Y. Cotronis, "Improving post-processing of audio event detectors using reinforcement learning," IEEE Access 10, 84398-84404 (2022). doi:10.1109/ACCESS.2022.3197907
5. D. de Benito-Gorrón, D. Ramos, and D. T. Toledano, "A multi-resolution CRNN-based approach for semi-supervised sound event detection in DCASE 2020 challenge," IEEE Access 9, 89029-89042 (2021). doi:10.1109/ACCESS.2021.3088949
6. J. Nam and S. W. Park, "Boosting principal frequency based data augmentation for sound event detection" (in Korean), J. Korean Inst. Electron. Eng. 61, 77-83 (2024). doi:10.5573/ieie.2024.61.7.77
7. N. K. Kim and H. K. Kim, "Self-training with noisy student model and semi-supervised loss function for DCASE 2021 challenge task 4," DCASE Tech. Rep. (2021).
8. L. Lin, X. Wang, H. Liu, and Y. L. Qian, "Guided learning convolution system for DCASE 2019 task 4," arXiv preprint arXiv:1909.06178 (2019). doi:10.33682/53ed-z889
9. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception architecture for computer vision," Proc. IEEE CVPR, 2818-2826 (2016). doi:10.1109/CVPR.2016.308
10. W. Lim, S. Suh, and Y. Jeong, "Weakly labeled semi-supervised sound event detection using CRNN with inception module," Proc. DCASE Workshop, 74-77 (2018).
11. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. IEEE CVPR, 770-778 (2016). doi:10.1109/CVPR.2016.90
12. G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," Proc. IEEE CVPR, 4700-4708 (2017). doi:10.1109/CVPR.2017.243
13. G. Huang, S. Liu, L. van der Maaten, and K. Q. Weinberger, "CondenseNet: An efficient DenseNet using learned group convolutions," Proc. IEEE CVPR, 2752-2761 (2018). doi:10.1109/CVPR.2018.00291
14. N. Turpault, R. Serizel, A. Shah, and J. Salamon, "Sound event detection in domestic environments with weakly labeled data and soundscape synthesis," Proc. DCASE Workshop, 253-257 (2019). doi:10.33682/006b-jx26
15. Ç. Bilen, G. Ferroni, F. Tuveri, J. Azcarreta, and S. Krstulović, "A framework for the robust evaluation of sound event detection," Proc. IEEE ICASSP, 61-65 (2020). doi:10.1109/ICASSP40776.2020.9052995
17. H. Nam, S. H. Kim, B. Y. Ko, and Y. H. Park, "Frequency dynamic convolution: Frequency-adaptive pattern recognition for sound event detection," arXiv preprint arXiv:2203.15296 (2022). doi:10.21437/Interspeech.2022-10127
18. T. Song and W. Zhang, "Frequency-aware convolution for sound event detection," Proc. ICMM, 415-426 (2025). doi:10.1007/978-981-96-2054-8_31
Information
  • Publisher: The Acoustical Society of Korea
  • Publisher (Korean): 한국음향학회
  • Journal Title: The Journal of the Acoustical Society of Korea
  • Journal Title (Korean): 한국음향학회지
  • Volume: 44
  • No.: 5
  • Pages: 489-495
  • Received Date: 2025-05-26
  • Revised Date: 2025-07-07
  • Accepted Date: 2025-07-20