
nicozenith / music_generation


This project forked from lkriener/music_generation


Deep learning project for music generation using MLPs, RNNs, LSTMs, autoencoders



Music generation with artificial neural networks

Group project for the Advanced topics in machine learning lecture (2019)

Benjamin Ellenberger, Nicolas Deperrois, Laura Kriener

Project description and motivation

The topic for the group project was chosen at the very beginning of the course. Since it was announced that the course would nearly always use images for demonstrations, examples and exercises, we decided that our project should be on a different form of data. We chose to work in the broad field of music generation with deep learning.

To narrow down the topic we investigated the state of the art in this field. The most impressive recent results come from Google and OpenAI. The Google Magenta project covers a wide range of applications such as harmonization, drum machines and a music-generating network based on the transformer architecture with attention.

Another very recent result in the field of music generation was published by OpenAI. MuseNet uses the recently published GPT-2 architecture, which is also a large-scale transformer network.

The Google and OpenAI approaches, as well as other (less famous) approaches, have in common that they employ very complicated network architectures together with immense computational resources.

As the required computational power is far out of our reach, we wondered whether this level of complexity is really unavoidable. So the question "How much can you do with how little?" became the leading theme of our project. We want to see what results can be achieved using much simpler network architectures (i.e. architectures within the scope of the lecture). Which aspects of music generation can be achieved and which have to be ignored? For example, can you generate a reasonable melody line without considering the rhythm?
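To make the "melody without rhythm" idea concrete, the sketch below shows the kind of deliberately simple model this project explores: an LSTM that predicts only the next pitch of a melody, ignoring note durations entirely. The vocabulary size, sequence length, layer sizes and training data here are illustrative assumptions, not the settings or data sets actually used; those are documented in the Report.

```python
# Minimal sketch: next-pitch prediction with a single LSTM, rhythm ignored.
# All hyperparameters below are assumptions for illustration only.
import numpy as np
from tensorflow.keras import layers, models

N_PITCHES = 128   # assume pitches are encoded as MIDI note numbers 0..127
SEQ_LEN = 32      # length of the input context (arbitrary choice here)

model = models.Sequential([
    layers.Embedding(input_dim=N_PITCHES, output_dim=64),
    layers.LSTM(128),
    layers.Dense(N_PITCHES, activation="softmax"),  # distribution over the next pitch
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy training data: random pitch sequences stand in for real melodies.
x = np.random.randint(0, N_PITCHES, size=(1000, SEQ_LEN))
y = np.random.randint(0, N_PITCHES, size=(1000,))
model.fit(x, y, epochs=1, batch_size=32)

# Generation: repeatedly sample the next pitch and append it to the context.
context = list(np.random.randint(0, N_PITCHES, size=SEQ_LEN))
melody = []
for _ in range(64):
    probs = model.predict(np.array([context[-SEQ_LEN:]]), verbose=0)[0]
    probs = probs / probs.sum()  # renormalize to guard against float rounding
    next_pitch = int(np.random.choice(N_PITCHES, p=probs))
    melody.append(next_pitch)
    context.append(next_pitch)
print(melody)
```

Dropping rhythm reduces the task to plain sequence modeling over a small discrete vocabulary, which is exactly what makes such a small network feasible; re-introducing durations or polyphony is where the architectures discussed in the Report become more involved.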

Report

A detailed description of our approach, the data sets used, the results achieved and example usage of our networks can be found in our Report.

Presentation

The final project presentation given in the lecture can be found here.

