Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Please note that all times are shown in the time zone of the conference (EEST).
External resources will be made available 5 min before a session starts. You may have to reload the page to access the resources.
Agenda Overview

Session
STE PS_C7: Special Session MusicAI 2/2
Special Session: Artificial Intelligence and Music (MusicAI)
Session Abstract
The MusicAI section showcases the transformative potential of artificial intelligence in the world of music. Participants will discover innovative tools that support creative and technical processes—from generating musical ideas and mastering tracks to creating backing accompaniments, separating stems, and designing entirely new sounds. AI can generate melodies, harmonies, and rhythms tailored to specific genres, preferences, or moods, serving as an inspiring starting point for fresh compositions. Through advanced techniques such as spectral compression and professional-grade mastering, AI systems can produce realistic vocal performances from MIDI inputs or craft unique vocal samples and instrument-like sounds from human voices. Beyond creation, AI enhances the entire music production ecosystem: it offers lyrical inspiration, restores archival recordings, analyzes listening behavior to recommend playlists, predicts emerging trends, and provides data-driven insights valuable to artists, marketers, and record labels alike. This session invites participants to explore the boundless opportunities of integrating artificial intelligence and smart technologies into music. Both existing solutions and visionary concepts will be presented—encouraging innovation, experimentation, and new ways of thinking about the art and science of music creation.
Presentations
2:30pm - 2:48pm
Integrating Technology and 21st-Century Skills in Music Theory Education
Transilvania University of Brasov, Romania

Recent transformations in music education highlight the need to balance technical proficiency with the development of 21st-century skills such as creativity, collaboration, critical thinking, and problem solving. Although digital tools have proven effective in supporting engagement and interactive learning, their systematic integration into Romanian higher music education remains limited. This study proposes innovative, technology-enhanced activities for music theory, solfège, and dictation courses, using software such as EarMaster, Auralia, and Tenuto. Adopting a descriptive and design-based approach, the paper outlines strategies that combine traditional musical competencies with transversal skills promoted by contemporary educational frameworks. The anticipated outcome is increased student motivation, creativity, and autonomy through the meaningful use of technology. Overall, the study advocates for a more dynamic and digitally literate approach to music education, aligning with current global trends and promoting pedagogical innovation in the university environment.

2:48pm - 3:06pm
Is There Room for Digital Technologies in Classical Vocal Pedagogy?
Transilvania University of Brasov, Faculty of Music, Romania

The incorporation of digital tools into the traditional singing studio is a controversial topic, especially in the context of classical singing. Classical vocal pedagogy requires direct contact between teacher and student, particularly in the early phases of vocal development, to ensure that proper technique is acquired and healthy vocal production is established. Furthermore, aspects of vocal emission such as colour, brilliance, and dynamics are strongly influenced by the performance venue, since classical singing does not typically use microphones to augment sound. There is also the issue of preserving a centuries-old legacy, attested by vocal treatises dating back to the 17th century. The present study was motivated by the increased use of online platforms in vocal education. This raised the question: to what extent can the teaching process benefit from video conferencing tools that require microphones, given that the quality of sung tones is altered when sound waves are converted into electrical signals? Furthermore, the authors observed that certain digital tools have been specifically designed to help singers warm up or improve their intonation, prompting inquiry into how these emerging tools might impact classical vocal pedagogy.

3:06pm - 3:24pm
Edge AI for Music Therapy – Innovations in Sensing, Computing and Security
Universitatea Transilvania Brasov, Romania

Context – The integration of AI into music therapy enables the creation of personalized playlists, the generation of new music, real-time biofeedback, and even responsive, adaptive clinical, home-based, and digital therapeutic musical environments.

Purpose – The authors present a critical review of existing publications on the therapeutic use of music for relaxation that implement biofeedback loops with consumer-grade biosensing wearables coupled with other devices. The paper also emphasises the need for BSc, MSc, and PhD students in music therapy to learn about the potential of Edge AI in the design and development of emotion-responsive music generation and other therapeutic tools that contain personalised real-time feedback systems.

Approach – The integration process needs to consider hardware and software aspects related to advanced sensing, efficient computing, and robust security.

Advanced Sensing – Smart wearable devices based on real-time contextual analysis aim to provide a more personalised user experience by analysing environmental data alongside personal biometrics. Precise real-time data acquisition and analysis relies on the integration of novel multi-sensory fusion techniques that leverage Edge AI, so that the decision-making process is fast and responsive.

Efficient Computing – Sustainable AI of Things (AIoT) requires substantial electricity consumption, leading to significant carbon emissions. Integrating wearable sensors with advanced data processing techniques provides real-time feedback and improved gesture recognition accuracy. Music therapy plays an important role in the management of stress and anxiety, because the enhanced vascularization of various mesocorticolimbic structures and the activation of dopaminergic neurotransmitters regulate the autonomic system, emotion, and cognitive function.

Robust Security – A scalable and ethically grounded AI–wearable integration could consider federated learning for privacy, deep learning for noise filtering in EEG data, and Generative Adversarial Networks (GANs) for enhanced emotion recognition accuracy, temporal smoothness, and perceptual coherence of emotion-aware music generation.

Anticipated Outcomes – The paper aims to provide useful technical information to music therapists, researchers, and undergraduate and postgraduate students of music therapy on how Edge AI can enhance traditional methods (e.g. lyric analysis, music-assisted relaxation, Guided Imagery and Music) by offering new creative possibilities, increasing efficiency, and addressing challenges such as personalization, cost, and the ethical implications of AI use in music therapy.

Conclusions – The latest developments in sensing, computing, and security related to the use of Edge AI in music therapy enhance the capacity of therapists, students, researchers, and other users to generate personalised music, to perform real-time emotion recognition using comfortable, ergonomic wearable devices suitable for long-term use, and to develop intelligent assistive tools for improvisation, music composition, and music theory. This paper provides a foundation for understanding the current state of the art in wearable devices for music therapy, efficient computing solutions, and robust security methods. It also identifies areas for future research and development related to the technological aspects of the interdisciplinary field of music therapy.

3:24pm - 3:42pm
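The biofeedback loop described in the Edge AI abstract above — noisy wearable readings driving adaptive music parameters, with temporal smoothness — can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration: the arousal readings, the smoothing constant, and the arousal-to-tempo mapping are not taken from any device API or from the paper.

```python
# Hypothetical sketch: smoothing a noisy 0..1 arousal estimate from a
# wearable sensor before it drives an adaptive music parameter (tempo).
# Readings, alpha, and the BPM mapping are illustrative assumptions.

def smooth_arousal(readings, alpha=0.2):
    """Exponentially weighted moving average, for temporal smoothness."""
    smoothed = []
    level = readings[0]  # seed with the first reading
    for r in readings:
        level = alpha * r + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

def arousal_to_tempo(arousal, base_bpm=60.0, span_bpm=60.0):
    """Map a 0..1 arousal estimate to a playback tempo in BPM."""
    clamped = max(0.0, min(1.0, arousal))
    return base_bpm + span_bpm * clamped

# Noisy arousal estimates as they might arrive from an EEG wearable.
raw = [0.8, 0.82, 0.3, 0.79, 0.75, 0.7, 0.68, 0.2, 0.66, 0.6]
smoothed = smooth_arousal(raw)
tempos = [arousal_to_tempo(a) for a in smoothed]
```

The smoothing step damps single-sample artifacts (e.g. EEG blinks), so the generated music does not jump abruptly — a toy stand-in for the temporal-smoothness property the abstract attributes to learned models.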
Using AI to Analyze the Reception of Electroacoustic Music
Gheorghe Dima National Music Academy, Romania

Nowadays, artificial intelligence (AI) interacts with music more closely than ever and is even known to transform music creation. The implementation of AI is discussed as having both positive and negative outcomes: on the one hand, AI helps musicians overcome creative blocks, create new musical experiences, and gather and analyse data; on the other hand, it raises copyright issues and the prospect of human musicians being replaced. Whatever the source of the music, the emotional effect is present but varies with cultural context, listeners' background, and musical preference. The aim of this paper is to use artificial intelligence to study the perception and reception of electroacoustic music by analysing listener data (mainly listening habits) in order to identify patterns and preferences. Audio features, neurological measurements (e.g. an EEG MindWave headset), psychological tests, and sentiment analysis are used to determine the feelings and emotions elicited by music.
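The sentiment-analysis step mentioned in the abstract above can be sketched in miniature. This is a hypothetical lexicon-based toy, not the study's method: the word lists and the listener comments are invented for illustration, and a real pipeline would use a trained model rather than keyword counts.

```python
# Hypothetical sketch of lexicon-based sentiment scoring for free-text
# listener feedback on electroacoustic pieces. Word lists and comments
# are illustrative assumptions, not data from the study.

POSITIVE = {"immersive", "fascinating", "beautiful", "calming", "engaging"}
NEGATIVE = {"harsh", "confusing", "unsettling", "boring", "noisy"}

def sentiment_score(comment):
    """Return (positive - negative) keyword balance, normalised to [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

comments = [
    "An immersive, fascinating texture of sound.",
    "Harsh and confusing, but strangely engaging.",
    "Just noisy.",
]
scores = [sentiment_score(c) for c in comments]
```

Aggregating such scores per piece, alongside audio features and EEG measurements, is the kind of pattern-finding over listener data the abstract describes.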
