
midi-ddsp's Introduction

Status

This repository is currently inactive and serves only as a supplement to some of our papers. We have transitioned to using individual repositories for new projects. For our current work, see the Magenta website and Magenta GitHub Organization.

Magenta


Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it's also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Magenta was started by some researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We use TensorFlow and release our models and tools in open source on this GitHub. If you’d like to learn more about Magenta, check out our blog, where we post technical details. You can also join our discussion group.

This is the home for our Python TensorFlow library. To use our models in the browser with TensorFlow.js, head to the Magenta.js repository.

Getting Started

Take a look at our colab notebooks for various models, including one on getting started. Magenta.js is also a good resource for models and demos that run in the browser. This and more, including blog posts and Ableton Live plugins, can be found at https://magenta.tensorflow.org.

Magenta Repo

Installation

Magenta maintains a pip package for easy installation. We recommend using Anaconda to install it, but it can work in any standard Python environment. We support Python 3 (>= 3.5). These instructions will assume you are using Anaconda.

Automated Install (w/ Anaconda)

If you are running Mac OS X or Ubuntu, you can try using our automated installation script. Just paste the following command into your terminal.

curl https://raw.githubusercontent.com/tensorflow/magenta/main/magenta/tools/magenta-install.sh > /tmp/magenta-install.sh
bash /tmp/magenta-install.sh

After the script completes, open a new terminal window so the environment variable changes take effect.

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Note that you will need to run source activate magenta to use Magenta every time you open a new terminal window.

Manual Install (w/o Anaconda)

If the automated script fails for any reason, or you'd prefer to install by hand, follow these steps.

Install the Magenta pip package:

pip install magenta

NOTE: In order to install the rtmidi package that we depend on, you may need to install headers for some sound libraries. On Ubuntu Linux, this command should install the necessary packages:

sudo apt-get install build-essential libasound2-dev libjack-dev portaudio19-dev

On Fedora Linux, use

sudo dnf group install "C Development Tools and Libraries"
sudo dnf install SAASound-devel jack-audio-connection-kit-devel portaudio-devel

The Magenta libraries are now available for use within Python programs and Jupyter notebooks, and the Magenta scripts are installed in your path!

Using Magenta

You can now train our various models and use them to generate music, audio, and images. You can find instructions for each of the models by exploring the models directory.

Development Environment

If you want to develop on Magenta, you'll need to set up the full Development Environment.

First, clone this repository:

git clone https://github.com/tensorflow/magenta.git

Next, install the dependencies by changing to the base directory and executing the setup command:

pip install -e .

You can now edit the files and run scripts by calling Python as usual. For example, this is how you would run the melody_rnn_generate script from the base directory:

python magenta/models/melody_rnn/melody_rnn_generate --config=...

You can also install the (potentially modified) package with:

pip install .

Before creating a pull request, please also test your changes with:

pip install pytest-pylint
pytest

PIP Release

To build a new version for pip, bump the version and then run:

python setup.py test
python setup.py bdist_wheel --universal
twine upload dist/magenta-N.N.N-py2.py3-none-any.whl

midi-ddsp's People

Contributors

ak391, jesseengel, lukewys


midi-ddsp's Issues

Question on the installation

Hi,

When I opened a new Colab notebook and tried to pip install the midi-ddsp package, it took over 40 minutes and the installation could not complete. I didn't experience this until this week.

I've tried pip install midi-ddsp and pip install git+https://github.com/magenta/midi-ddsp and both gave me the same results.

It seemed like pip spent a lot of time trying to find a compatible version of etils.

Question on Figure

(figure from the blog post)

Quick question on this figure in the blog post: I know Coconet is its own model that will generate subsequent melodies given the input MIDI file. However, should I decide to train MIDI-DDSP, will training Coconet also be part of this? Or should I expect a monophonic MIDI melody as input and the generated audio as output?

Thanks for all the help and this awesome project!

Hugging Face Space runtime error

runtime error
Space failed to start. Exit code: 1

Container logs:
Failed to retrieve error logs: Failed to initialize stream (code 404)

Can i test with custom data?

Hi, I am interested in this exciting project, and I am trying to test it with our custom dataset and reproduce the format of the original data. But I have some difficulties and questions below.

  1. Is there no way to use custom datasets at all?
  2. Is there any code to calculate the dataset elements below?
  • I want to know how to compute "note_active_velocities", "note_active_frame_indices", "power_db", "note_onsets", and "note_offsets", but there is no code for this in the repository.
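The dataset fields above aren't documented in the repository, but some can plausibly be derived from a frame-wise note-activity array. Below is a minimal, hypothetical sketch of how "note_onsets" and "note_offsets" might be computed; the function name and the assumption that activity is a binary per-frame vector are mine, not from midi-ddsp.

```python
import numpy as np

def onsets_offsets_from_active_frames(note_active):
    """Hypothetical sketch: derive note onset/offset frame indices from a
    binary note-activity array (1 while a note sounds, 0 otherwise).
    The real midi-ddsp preprocessing may differ."""
    note_active = np.asarray(note_active, dtype=np.int8)
    # Pad with zeros on both ends so notes touching the boundaries are caught.
    diff = np.diff(note_active, prepend=0, append=0)
    onsets = np.where(diff == 1)[0]        # frames where a note starts
    offsets = np.where(diff == -1)[0] - 1  # last frame of each note
    return onsets, offsets

onsets, offsets = onsets_offsets_from_active_frames([0, 1, 1, 1, 0, 0, 1, 1, 0])
# onsets -> [1, 6], offsets -> [3, 7]
```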

Thank you for reading!

How to distinguish whether it is cresc or decresc in fluctuation?

Hi! I am interested in your project. I am currently doing similar work, but I have some questions.
(1) I know that fluctuation stands for how much a note crescendos or decrescendos, and peak means the position with the maximum energy.
However, I am wondering how to tell whether that note is cresc or decresc. Let's assume our peak position is 0.4 and fluctuation is 0.3. Does the note then decrease by 0.3 or increase by 0.3? How can I distinguish them?
(2) I know that all the expressive control values are normalized between 0 and 1, but what is the unit of measure for these expressive controls?
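As a concrete illustration of the two quantities being discussed, here is a minimal sketch that computes a peak position and an unsigned fluctuation from a note's amplitude envelope, and shows one plausible way to infer cresc vs. decresc from the peak position. This is my own construction, not the midi-ddsp definition.

```python
import numpy as np

def peak_and_fluctuation(amp):
    """Hypothetical sketch of the note-expression features discussed above;
    not the exact midi-ddsp definitions. `amp` is a per-frame amplitude
    envelope for one note, normalized to [0, 1]."""
    amp = np.asarray(amp, dtype=float)
    peak_pos = float(np.argmax(amp)) / (len(amp) - 1)  # relative peak position
    fluctuation = float(amp.max() - amp.min())         # unsigned amount of change
    # One plausible way to recover direction: a peak late in the note implies
    # an overall crescendo; a peak early in the note implies a decrescendo.
    direction = 'cresc' if peak_pos >= 0.5 else 'decresc'
    return peak_pos, fluctuation, direction

print(peak_and_fluctuation([0.2, 0.4, 0.8, 1.0, 0.7]))
```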

Thanks for answering

Technical limitations in processing arbitrary datasets?

Hi! I was interested in fine-tuning midi-ddsp on a set of MIDI files I already have in order to generate MIDI from that context (let me know if that's not possible and I've misunderstood), but I see that you don't currently support processing arbitrary datasets.

I was just wondering what you're hitting technically there. Or is it just the lack of a pipeline to process an arbitrary set of MIDI files using ddsp's data handling tools?

I would be happy to look into a PR if the scope is fairly well defined!

Why is vibrato_rate not used?

In my opinion, vibrato_rate (peak frequency) is more plausible than vibrato_extend (peak amplitude) for representing pitch pulsation. Why is vibrato_rate not used?
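For illustration, both quantities can be estimated from an f0 contour. The sketch below is my own, not the midi-ddsp feature extraction code, and assumes a 250 Hz frame rate; it recovers the vibrato rate as the peak of the pitch-deviation spectrum and the extent as the peak-to-peak deviation.

```python
import numpy as np

def vibrato_rate_and_extent(f0_hz, frame_rate=250.0):
    """Hypothetical sketch: estimate vibrato rate (Hz) and extent from an
    f0 contour via the spectrum of the pitch deviation."""
    f0 = np.asarray(f0_hz, dtype=float)
    deviation = f0 - f0.mean()                  # oscillation around the mean pitch
    spectrum = np.abs(np.fft.rfft(deviation))
    freqs = np.fft.rfftfreq(len(deviation), d=1.0 / frame_rate)
    peak = spectrum[1:].argmax() + 1            # skip the DC bin
    rate = float(freqs[peak])                   # vibrato rate in Hz
    extent = float(deviation.max() - deviation.min())  # peak-to-peak, in Hz
    return rate, extent

# A 5 Hz vibrato of +/- 2 Hz around 440 Hz, 4 seconds at 250 frames/sec:
t = np.arange(1000) / 250.0
rate, extent = vibrato_rate_and_extent(440 + 2 * np.sin(2 * np.pi * 5 * t))
```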

tfrecord features

Hello, I'm very interested in your amazing work. Taking a deep look at the URMP tfrecord datasets, I found something a bit confusing. Could you be so kind as to help?

  1. I checked some urmp_tfrecords data and found that some records contain the following features: {"audio", "f0_confidence", "f0_hz", "f0_time", "id", "instrument_id", "loudness_db", "note_active_frame_indices", "note_active_velocities", "note_offsets", "note_onsets", "orig_f0_hz", "orig_f0_time", "power_db", "recording_id", "sequence"}. However, some of them don't include {"orig_f0_hz", "orig_f0_time"}. Why is this, and does such an inconsistency influence model training?
  2. I want to include piano music when I train my own model. To this end, I think I need to generate tfrecords with the same content as the URMP ones you used. I plan to use the MAESTRO dataset. Could you be so kind as to indicate whether there is tfrecord generation code we can take as a reference, like the one you used to generate tfrecords for the midi-ddsp model?
  3. What is the difference between the "batched" and "unbatched" datasets?
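Regarding point 1, a quick way to spot inconsistent records is to diff each record's feature keys against the full feature set listed above. A minimal sketch (the helper name is mine, and this checks key names only, not tensor contents):

```python
# Full URMP-style feature set as listed in the question above.
EXPECTED_KEYS = {
    "audio", "f0_confidence", "f0_hz", "f0_time", "id", "instrument_id",
    "loudness_db", "note_active_frame_indices", "note_active_velocities",
    "note_offsets", "note_onsets", "orig_f0_hz", "orig_f0_time",
    "power_db", "recording_id", "sequence",
}

def missing_features(example_keys):
    """Return the expected feature keys absent from one record."""
    return sorted(EXPECTED_KEYS - set(example_keys))

# Records lacking the original-f0 fields would show up like this:
print(missing_features(EXPECTED_KEYS - {"orig_f0_hz", "orig_f0_time"}))
# -> ['orig_f0_hz', 'orig_f0_time']
```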

Thank you very much for your help in advance.

Error when using "pip install midi-ddsp"

Hello everyone,

I'm trying to use midi-ddsp to synthesize a few .midi files. To do so, I create a virtual environment with:

python3 -m venv .venv
source .venv/bin/activate

note: python version == 3.10.6

After creating it, I run "pip install midi-ddsp" and get this error message:

Collecting ddsp
Using cached ddsp-1.9.0-py2.py3-none-any.whl (200 kB)
Using cached ddsp-1.7.1-py2.py3-none-any.whl (199 kB)
Using cached ddsp-1.7.0-py2.py3-none-any.whl (197 kB)
Using cached ddsp-1.6.5-py2.py3-none-any.whl (194 kB)
Using cached ddsp-1.6.3-py2.py3-none-any.whl (194 kB)
Using cached ddsp-1.6.2-py2.py3-none-any.whl (194 kB)
Using cached ddsp-1.6.0-py2.py3-none-any.whl (194 kB)
Using cached ddsp-1.4.0-py2.py3-none-any.whl (192 kB)
Using cached ddsp-1.3.1-py2.py3-none-any.whl (192 kB)
Using cached ddsp-1.3.0-py2.py3-none-any.whl (183 kB)
Using cached ddsp-1.2.0-py2.py3-none-any.whl (179 kB)
Using cached ddsp-1.1.0-py2.py3-none-any.whl (175 kB)
Using cached ddsp-1.0.1-py2.py3-none-any.whl (170 kB)
Using cached ddsp-1.0.0-py2.py3-none-any.whl (168 kB)
Using cached ddsp-0.14.0-py2.py3-none-any.whl (143 kB)
Using cached ddsp-0.13.1-py2.py3-none-any.whl (129 kB)
Using cached ddsp-0.13.0-py2.py3-none-any.whl (129 kB)
Using cached ddsp-0.12.0-py2.py3-none-any.whl (127 kB)
Using cached ddsp-0.10.0-py2.py3-none-any.whl (109 kB)
Using cached ddsp-0.9.0-py2.py3-none-any.whl (109 kB)
Using cached ddsp-0.8.0-py2.py3-none-any.whl (108 kB)
Using cached ddsp-0.7.0-py2.py3-none-any.whl (107 kB)
Using cached ddsp-0.5.1-py2.py3-none-any.whl (101 kB)
Using cached ddsp-0.5.0-py2.py3-none-any.whl (101 kB)
Using cached ddsp-0.4.0-py2.py3-none-any.whl (97 kB)
Using cached ddsp-0.2.4-py2.py3-none-any.whl (89 kB)
Using cached ddsp-0.2.3-py2.py3-none-any.whl (89 kB)
Using cached ddsp-0.2.2-py2.py3-none-any.whl (89 kB)
Using cached ddsp-0.2.0-py2.py3-none-any.whl (88 kB)
Using cached ddsp-0.1.0-py3-none-any.whl (88 kB)
Using cached ddsp-0.0.10-py3-none-any.whl (88 kB)
Using cached ddsp-0.0.9-py3-none-any.whl (86 kB)
Using cached ddsp-0.0.8-py3-none-any.whl (86 kB)
Using cached ddsp-0.0.7-py3-none-any.whl (85 kB)
Using cached ddsp-0.0.6-py2.py3-none-any.whl (91 kB)
Using cached ddsp-0.0.5-py2.py3-none-any.whl (91 kB)
Using cached ddsp-0.0.4-py2.py3-none-any.whl (83 kB)
Using cached ddsp-0.0.3-py2.py3-none-any.whl (81 kB)
Using cached ddsp-0.0.1-py2.py3-none-any.whl (75 kB)
Using cached ddsp-0.0.0-py2.py3-none-any.whl (75 kB)
INFO: pip is looking at multiple versions of midi-ddsp to determine which version is compatible with other requirements. This could take a while.
Collecting midi-ddsp
Using cached midi_ddsp-0.1.3-py3-none-any.whl (56 kB)
Using cached midi_ddsp-0.1.1-py3-none-any.whl (56 kB)
Using cached midi_ddsp-0.1.0-py3-none-any.whl (53 kB)
ERROR: Cannot install midi-ddsp because these package versions have conflicting dependencies.

The conflict is caused by:
ddsp 3.4.4 depends on tensorflow
ddsp 3.4.3 depends on tensorflow
ddsp 3.4.1 depends on tensorflow
ddsp 3.4.0 depends on tensorflow
ddsp 3.3.6 depends on tensorflow
ddsp 3.3.4 depends on tensorflow
ddsp 3.3.2 depends on tensorflow
ddsp 3.3.0 depends on tensorflow
ddsp 3.2.1 depends on tensorflow
ddsp 3.2.0 depends on tensorflow
ddsp 3.1.0 depends on tensorflow
ddsp 1.9.0 depends on tensorflow
ddsp 1.7.1 depends on tensorflow
ddsp 1.7.0 depends on tensorflow
ddsp 1.6.5 depends on tensorflow
ddsp 1.6.3 depends on tensorflow
ddsp 1.6.2 depends on tensorflow
ddsp 1.6.0 depends on tensorflow
ddsp 1.4.0 depends on tensorflow
ddsp 1.3.1 depends on tensorflow
ddsp 1.3.0 depends on tensorflow
ddsp 1.2.0 depends on tensorflow
ddsp 1.1.0 depends on tensorflow
ddsp 1.0.1 depends on tensorflow
ddsp 1.0.0 depends on tensorflow
ddsp 0.14.0 depends on tensorflow
ddsp 0.13.1 depends on tensorflow
ddsp 0.13.0 depends on tensorflow
ddsp 0.12.0 depends on tensorflow
ddsp 0.10.0 depends on tensorflow
ddsp 0.9.0 depends on tensorflow
ddsp 0.8.0 depends on tensorflow
ddsp 0.7.0 depends on tensorflow
ddsp 0.5.1 depends on tensorflow
ddsp 0.5.0 depends on tensorflow
ddsp 0.4.0 depends on tensorflow
ddsp 0.2.4 depends on tensorflow
ddsp 0.2.3 depends on tensorflow
ddsp 0.2.2 depends on tensorflow
ddsp 0.2.0 depends on tensorflow
ddsp 0.1.0 depends on tensorflow
ddsp 0.0.10 depends on tensorflow
ddsp 0.0.9 depends on tensorflow
ddsp 0.0.8 depends on tensorflow
ddsp 0.0.7 depends on tensorflow
ddsp 0.0.6 depends on tensorflow
ddsp 0.0.5 depends on tensorflow
ddsp 0.0.4 depends on tensorflow
ddsp 0.0.3 depends on tensorflow
ddsp 0.0.1 depends on tensorflow
ddsp 0.0.0 depends on tensorflow>=2.1.0

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

========
I am currently working on an M1 Mac, so I would be pleased if you could help me reach a solution to this problem.

Thank you in advance,
Juan Carlos

ImportError and AttributeError

Hi! Very interesting work!
I’m trying to run MIDI_DDSP_Demo.ipynb, and I encountered some errors.

  1. ImportError: cannot import name 'LD_RANGE' occurred in from ddsp.spectral_ops import F0_RANGE, LD_RANGE(see here).
    -> According to DDSP, I think 'DB_RANGE' is correct, not 'LD_RANGE'.

  2. AttributeError: module 'ddsp.spectral_ops' has no attribute 'amplitude_to_db' occurred where ddsp.spectral_ops.amplitude_to_db is used (see here).
    -> According to DDSP, I suppose it is 'ddsp.core', not 'ddsp.spectral_ops'.

Docker image with all the necessary packages

Hi again!

I'm still trying to execute your code via "pip install midi-ddsp".

I'm using Docker with a tensorflow/tensorflow:latest image and running the pip command. I get this error:

"E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered"

I'm asking to find out whether you work with Docker, and if so, whether I could get access to the image you use, so that all versions match and these types of errors are avoided.

Thank you,
Best regards,
Juan Carlos

What is `inputs` in def call()?

Hi, I am looking inside the code. I've seen a lot of def call(self, inputs) methods in your code, especially this one.

  def call(self, inputs):
    synth_params = self.get_synth_params(inputs)

However, I couldn't find out how inputs is computed; here are some clues I've found. In that code, inputs corresponds to the data in get_fake_data_synthesis_generator, so what data and units do you input to get_fake_data_synthesis_generator? Frames? Amplitude, or anything else?
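For what it's worth, fake synthesis inputs of this kind are usually just frame-wise conditioning arrays. The sketch below is purely illustrative; the keys, shapes, and values are my assumptions, not the actual output of get_fake_data_synthesis_generator.

```python
import numpy as np

def get_fake_inputs(batch=2, n_frames=1000):
    """Hypothetical sketch of a frame-wise conditioning dict that a
    call(self, inputs) method might receive. Key names and shapes are
    illustrative only, not the exact midi-ddsp ones."""
    return {
        # fundamental frequency per frame, in Hz: [batch, n_frames, 1]
        'f0_hz': np.full((batch, n_frames, 1), 440.0, dtype=np.float32),
        # loudness per frame, in dB: [batch, n_frames, 1]
        'loudness_db': np.full((batch, n_frames, 1), -30.0, dtype=np.float32),
    }

inputs = get_fake_inputs()
print({k: v.shape for k, v in inputs.items()})
```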

Thanks!
