
Automated Music Composition


In recent years, interest in music generation using machine learning techniques has grown steadily. The field is still in its infancy, and most approaches impose many restrictions on the composition process in order to favor the creation of “interesting” outputs. In our work, we aim to develop new metrics for objectively assessing the quality of generated pieces, as well as new approaches to automated music composition.

Tuning RNN-based composers for specific musical styles

Details coming soon ...


In this work, we applied a model previously used for image generation, the Variational Autoencoder (VAE), to generating new music. This type of neural network attempts to model the complex underlying joint distribution of a given dataset, sample from it, and generate new examples that fit the same distribution. An important advantage of VAEs is that the input data can be of any kind (e.g., images, sound, video, text). For example, a VAE could be used to learn the distribution of pictures of sunflowers; by sampling from this distribution, we would obtain pictures whose content follows the “rules” that make a sunflower what it is: its color, its shape, etc. In our case, we want to learn the distribution of musical pieces belonging to a given style, allowing the VAE to capture the relevant musical rules that underlie the composition process. Most importantly, we developed a simple objective measure to evaluate the composed music, which allowed us to judge the generated pieces against a predetermined standard as well as to fine-tune our models toward better “performance” and specific compositional goals. We demonstrate that our model can generate pieces that follow the general stylistic characteristics of a given composer or musical genre, and that our measure permits investigating the impact of various parameters and model architectures on the compositional process and its output.
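To make the VAE mechanics above concrete, the sketch below shows the two core ideas in miniature: the reparameterization trick (sampling a latent code z while keeping the sampling step differentiable) and the ELBO loss (reconstruction error plus a closed-form KL term pulling q(z|x) toward a standard normal prior). This is a hypothetical illustration, not our lab's model: the linear "encoder" and "decoder" weights are random stand-ins for trained networks, and the input dimensions are arbitrary (e.g., a short piano-roll slice flattened to a vector).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16-cell piano-roll slice, a 2-dimensional latent space.
INPUT_DIM, LATENT_DIM = 16, 2

# Random linear maps stand in for trained encoder/decoder networks.
W_mu = rng.normal(scale=0.1, size=(LATENT_DIM, INPUT_DIM))
W_logvar = rng.normal(scale=0.1, size=(LATENT_DIM, INPUT_DIM))
W_dec = rng.normal(scale=0.1, size=(INPUT_DIM, LATENT_DIM))

def encode(x):
    """Map an input to the mean and log-variance of the posterior q(z|x)."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps; the noise eps carries the randomness,
    so gradients can flow through mu and logvar during training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent code back to input space; sigmoid keeps cells in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(W_dec @ z)))

def negative_elbo(x):
    """Loss = reconstruction error + KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.sum((x - x_hat) ** 2)  # reconstruction term
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I).
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

x = rng.uniform(size=INPUT_DIM)  # a fake "musical" input vector
loss = negative_elbo(x)

# Generating new material: sample z from the prior and decode it.
z_new = rng.standard_normal(LATENT_DIM)
sample = decode(z_new)
```

In a trained model, minimizing this loss over a corpus of pieces in a given style is what shapes the latent space, so that decoding prior samples yields new material following that style's regularities.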

© Copyright 2019 Applied Music Research Lab - All Rights Reserved