AM Lab
This repository contains the raw data collected in Cheah et al (submitted).
This repository includes the PsychoPy and PsyToolkit code for the experiments reported in Cheah et al (submitted).
This repository contains the raw data collected in Cheah et al (in press).
A personalised music recommendation system for everyday life activities and wellbeing.
Personalised playlists for eliciting autobiographical memories.
The GEneva Music-Induced Affect Checklist (GEMIAC) is a brief instrument for the rapid assessment of musically induced emotions.
The MUSEBAQ is a modular tool for music research to assess musicianship, musical capacity, music preferences, and motivations for music use.
ChoCo integrates 18 harmonic datasets to provide annotations of chord and tonality for about 20k pieces. The integration workflow is described in de Berardinis et al. (2023) and is available on GitHub.
Harmory is a Knowledge Graph representing chord segments and their similarity, following a musicological model of music perception. It is built from ChoCo and documented in de Berardinis et al. (2023).
This repository contains the Sincere Apology Corpus (SinA-C). SinA-C is an English speech corpus of acted apologies in various prosodic styles created with the purpose of investigating the attributes of the human voice which convey sincerity.
Electroencephalography (EEG) and facial electromyography (EMG) signals collected in the context of a study by van Peer, Grandjean and Scherer (2014) and used in Coutinho, Gentsch, van Peer, Scherer and Schuller (2018).
Electroencephalography (EEG) and facial electromyography (EMG) signals collected in the context of a study by Gentsch, Grandjean and Scherer (2013) and used in Coutinho, Gentsch, van Peer, Scherer and Schuller (2018).
A database of 16,930 sound instances (1-10 s) belonging to 243 categories of environmental sounds.
Music Meta is an ontology to describe music metadata related to artists, compositions, performances, recordings, broadcasts, and links. It provides an abstraction layer to represent (Western) music metadata across different genres and periods.
Code for Team AugLi's submission for the 2019 MediaEval Theme Recognition challenge.
This repository contains the implementation of the model described in the paper "Automated composition of Galician Xota – tuning RNN-based composers for specific musical styles using Deep Q-Learning".
EmoMucs is an audio-based model for music emotion recognition (MER) that uses music source separation (MSS) to disentangle the emotional contribution of each musical voice.