The video introducing my research at the Tampere Institute for Advanced Study (Tampere IAS) has been released. You can watch it here (https://youtu.be/_UbXGCMCDcs).
Our 2017 TASLP journal paper “Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection” has received the IEEE Signal Processing Society's 2023 Best Paper Award.
As of September 2023, I have started a two-year postdoctoral research fellow position at the Tampere Institute for Advanced Study (Tampere IAS). My research will focus on acoustic scene understanding.
DCASE2023 Workshop will be organized in Tampere.
Sounds in our everyday environments inform us about events, actions, and situations that we understand and react to. Acoustic scene understanding aims to automatically extract meaning from sound, for use in a wide range of applications. The topic is broadly applicable to context awareness and to monitoring in personal devices, wildlife observation, smart home alarms, and industrial processes and machinery.
Automatic sound event detection aims to process a continuous acoustic signal and convert it into symbolic descriptions of the sound events present in the auditory scene. Sound event detection can be utilized in a variety of applications, including context-based indexing and retrieval in multimedia databases, unobtrusive monitoring in health care, and surveillance.
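To illustrate the idea, here is a minimal sketch of the final post-processing step of a typical detection pipeline: turning frame-wise class probabilities (as a classifier, e.g. a neural network, might produce) into discrete event annotations with onsets and offsets. The labels, hop size, and threshold below are purely illustrative assumptions, not part of any specific system:

```python
import numpy as np

def probs_to_events(probs, labels, threshold=0.5, hop_s=0.02):
    """Convert frame-wise class probabilities (frames x classes)
    into a list of (onset_seconds, offset_seconds, label) tuples."""
    events = []
    active = probs >= threshold  # binarize activity per frame and class
    for c, label in enumerate(labels):
        col = active[:, c].astype(int)
        # pad with zeros so diff exposes rising/falling edges at the borders
        edges = np.diff(np.concatenate(([0], col, [0])))
        onsets = np.where(edges == 1)[0]    # frames where activity starts
        offsets = np.where(edges == -1)[0]  # frames where activity ends
        for on, off in zip(onsets, offsets):
            events.append((on * hop_s, off * hop_s, label))
    return sorted(events)

# Toy example: 5 frames, two hypothetical classes.
probs = np.array([[0.9, 0.1],
                  [0.8, 0.2],
                  [0.1, 0.7],
                  [0.2, 0.9],
                  [0.1, 0.1]])
events = probs_to_events(probs, ["speech", "car"])
# e.g. "speech" is active in frames 0-1, "car" in frames 2-3
```

In a polyphonic setting, several classes can be active in the same frame, which is why each class column is segmented independently.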
Context recognition can be defined as the process of automatically determining the context around a device. Information about the context enables wearable devices to better serve users' needs, e.g., by adjusting their mode of operation accordingly.
Auditory scene synthesis aims to create a new, arbitrarily long, and representative audio ambiance for a location by using a small amount of audio recorded at that exact location. Adding this location-specific audio ambiance to virtual location-exploration services (e.g., Google Street View) would enhance the user experience, giving the service a more 'real' feeling.