Information
- Publisher: The Acoustical Society of Korea
- Publisher (Ko): 한국음향학회
- Journal Title: The Journal of the Acoustical Society of Korea
- Journal Title (Ko): 한국음향학회지
- Volume: 33
- No.: 4
- Pages: 248-254
- Received Date: 2013-12-19
- Revised Date: 2014-04-21
- Accepted Date: 2014-06-02
- DOI: https://doi.org/10.7776/ASK.2014.33.4.248


