Final slides for our tutorial at ICASSP 2019 on "Detection and Classification of Acoustic Scenes and Events" are now available here. Jupyter notebooks for the Python examples are available in the GitHub repository.
I will be presenting a tutorial on "Detection and Classification of Acoustic Scenes and Events" at ICASSP 2019, together with Tuomas Virtanen and Annamaria Mesaros, on Monday, 13.5.2019.
Our research group is once again one of the organizing teams of the DCASE2019 challenge, and I am acting as the task coordinator for the acoustic scene classification task.
I have started collecting a glossary of key terms from the field of computational audio content analysis.
Automatic sound event detection aims to process a continuous acoustic signal and convert it into symbolic descriptions of the sound events present in the auditory scene. Sound event detection can be utilized in a variety of applications, including context-based indexing and retrieval in multimedia databases, unobtrusive monitoring in health care, and surveillance.
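As a rough illustration of the final step in such a system, here is a minimal sketch (not any particular published method) of turning frame-wise event activity probabilities, as a classifier might output, into symbolic (onset, offset, label) descriptions. The `probs_to_events` helper, the fixed threshold, and the hop length are illustrative assumptions.

```python
import numpy as np

def probs_to_events(probs, labels, threshold=0.5, hop_seconds=0.02):
    """Convert frame-wise event activity probabilities (frames x classes)
    into a list of symbolic (onset, offset, label) descriptions."""
    events = []
    binarized = probs >= threshold
    for class_idx, label in enumerate(labels):
        active = binarized[:, class_idx].astype(int)
        # Find rising and falling edges of the per-class activity curve.
        change = np.diff(np.concatenate(([0], active, [0])))
        onsets = np.where(change == 1)[0]
        offsets = np.where(change == -1)[0]
        for onset, offset in zip(onsets, offsets):
            events.append((onset * hop_seconds, offset * hop_seconds, label))
    return sorted(events)

# Toy example: probabilities a classifier might output for two classes.
probs = np.array([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.8, 0.1]])
print(probs_to_events(probs, labels=['speech', 'car']))
```

In practice the binarization step is usually followed by median filtering or minimum-duration rules to smooth out spurious frame-level decisions.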
Context recognition can be defined as the process of automatically determining the context around a device. Information about the context enables wearable devices to better serve users' needs, e.g., by adjusting their mode of operation accordingly.
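A minimal sketch of the audio-based case (acoustic scene classification), assuming a small set of labeled recordings; the file paths, MFCC summary features, and logistic regression classifier are placeholder choices, not a recommended recipe.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def scene_feature(path, sr=22050, n_mfcc=20):
    """Summarize a whole recording as the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled training recordings of different contexts.
train_files = ['street_01.wav', 'office_01.wav', 'bus_01.wav']
train_labels = ['street', 'office', 'bus']

X = np.stack([scene_feature(f) for f in train_files])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
print(clf.predict([scene_feature('unknown_recording.wav')]))
```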
Auditory scene synthesis aims to create a new, arbitrarily long, and representative audio ambiance for a location using only a small amount of audio recorded at that exact location. Adding this location-specific audio ambiance to virtual location-exploration services (e.g., Google Street View) would enhance the user experience, giving the service a more 'real' feeling.
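One simple way to approach this is concatenative synthesis: the sketch below overlap-adds randomly chosen, crossfaded segments of the short source recording to produce an output of any desired length. The segment and fade lengths are arbitrary assumptions, and real systems would also avoid repeating recognizable foreground events.

```python
import numpy as np

def synthesize_ambiance(source, sr, target_seconds=60.0,
                        segment_seconds=2.0, fade_seconds=0.25):
    """Generate an arbitrarily long ambiance by overlap-adding
    randomly chosen, crossfaded segments of a short source signal."""
    seg_len = int(segment_seconds * sr)
    fade_len = int(fade_seconds * sr)
    hop = seg_len - fade_len
    fade = np.linspace(0.0, 1.0, fade_len)
    target_len = int(target_seconds * sr)
    out = np.zeros(target_len + seg_len)
    rng = np.random.default_rng()
    pos = 0
    while pos < target_len:
        start = rng.integers(0, len(source) - seg_len)
        seg = source[start:start + seg_len].copy()
        seg[:fade_len] *= fade          # fade in
        seg[-fade_len:] *= fade[::-1]   # fade out
        out[pos:pos + seg_len] += seg   # crossfaded overlap-add
        pos += hop
    return out[:target_len]

# Usage (assuming a short recording 'location.wav' exists):
# import librosa
# y, sr = librosa.load('location.wav', sr=None)
# long_ambiance = synthesize_ambiance(y, sr, target_seconds=120.0)
```

The complementary linear fades sum to unity in each overlap region, so segment boundaries remain inaudible in stationary ambiances.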
Understanding the timbre of musical instruments and drums is important for automatic music transcription, music information retrieval, and computational auditory scene analysis.