Convergence of Technologies to Connect Audio with Meaning: from Semantic Web Ontologies to Semantic Audio Production
Prof. George Fazekas
Queen Mary University of London
DEIB - PT1 Room
June 29th, 2017
3.00 pm
Contact:
Augusto Sarti
Research Line:
Telecommunications
Abstract
Science and technology play an increasingly vital role in how we experience, compose, perform, share and enjoy musical audio. The invention of recording in the late 19th century is a profound example: for the first time in human history, it disconnected music performance from listening and gave rise to a new industry as well as new fields of scientific investigation. But musical experience is not just about listening. Human minds make sense of what we hear by categorising and by making associations, cognitive processes that give rise to meaning or influence our mood. Perhaps the next revolution akin to recording is therefore in audio semantics. Technologies that mimic our abilities and enable interaction with audio on human terms are already changing the way we experience it. The emerging field of Semantic Audio lies at the confluence of several key fields, namely signal processing, machine learning, and Semantic Web ontologies that enable knowledge representation and logic-based inference. In my talk, I will put forward that synergies between these fields provide a fruitful, if not necessary, way to account for human interpretation of sound. I will outline music- and audio-related ontologies and ontology-based systems that have found applications on the Semantic Web, as well as intelligent audio production tools that enable linking musical concepts with signal processing parameters in audio systems. I will present my recent work demonstrating how web technologies may be used to create interactive performance systems that enable mood-based audience-performer communication, and how semantic audio technologies enable us to link social tags and audio features to better understand the relationship between music and emotions. I will hint at how some principles used in my research also contribute to enhancing scientific protocols, easing experimentation and facilitating reproducibility.
Finally, I will discuss challenges in fusing audio and semantic technologies and outline some future opportunities they may bring about.
Biography
Prof. George Fazekas is a Lecturer in Digital Media at Queen Mary University of London. His research interests are mainly in Semantic Audio, an interdisciplinary field at the confluence of Digital Signal Processing, Machine Learning, and various knowledge representation and knowledge sharing technologies such as Semantic Web ontologies, Linked Data and knowledge-based reasoning. His focus is on extracting, analysing and linking data about music and developing applications that use semantic metadata, bringing the power of semantic technologies to music technology.