
A flexible source separation library in Python

Home Page: https://nussl.github.io/docs/

License: MIT License

Topics: source-separation, audio, repet, duet, nussl, nmf, hpss

nussl's Introduction

Flexible, easy-to-use audio source separation



nussl (pronounced "nuzzle") is a flexible, object-oriented Python audio source separation library created by the Interactive Audio Lab at Northwestern University. It stands for Northwestern University Source Separation Library (or our less-branded backronym: "Need Unmixing? Source Separation Library!"). Whether you're a researcher designing novel network architectures or new signal processing approaches, or you just need an out-of-the-box source separation model, nussl contains everything you need for modern source separation, from prototyping to evaluation to end use.

Documentation

Tutorials, code examples, and the API reference are available at the documentation website: https://nussl.github.io/docs/.

Features

Deep Learning

nussl contains a rich feature set for deep learning models, including a built-in trainer with support for many common datasets in music and speech separation. nussl also has an External File Zoo where users can download models pre-trained on many different datasets in a number of different configurations.

Training Your Own Algorithms

nussl makes it easy to train your own algorithms: many common architectures, datasets, and configurations come ready-built for many use cases, and the built-in functionality is easy to extend for your own needs. There are many useful pre-built modules that are easy to piece together, such as LSTMs, GRUs, convolutional layers, fully connected layers, mask layers, and embedding layers. Everything is built on PyTorch, so adding a brand new module is a snap!
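Because everything is plain PyTorch under the hood, a custom module is just an nn.Module. Here is a minimal sketch (the class and its names are hypothetical illustrations, not nussl API):

import torch
import torch.nn as nn

class TinyMaskEstimator(nn.Module):
    # Hypothetical example module: predicts one soft mask per source
    # from a magnitude spectrogram of shape (batch, time, freq).
    def __init__(self, num_freq_bins, hidden_size, num_sources):
        super().__init__()
        self.rnn = nn.LSTM(num_freq_bins, hidden_size,
                           batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_freq_bins * num_sources)
        self.num_freq_bins = num_freq_bins
        self.num_sources = num_sources

    def forward(self, spec):
        hidden, _ = self.rnn(spec)
        masks = torch.sigmoid(self.fc(hidden))
        # reshape to (batch, time, freq, sources)
        return masks.view(spec.shape[0], spec.shape[1],
                          self.num_freq_bins, self.num_sources)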

Using Your Own Data

nussl has built-in support for many popular source separation datasets, like MUSDB18, WSJ0, WHAM!, and FUSS. nussl also defines a simple directory structure for adding any data you want. Want to augment your data? Support for datasets created with Scaper is built in, so you can train on orders of magnitude more data.
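As a rough sketch of what using your own data can look like (the MixSourceFolder layout and item keys below follow our reading of the docs; verify the exact names there):

import nussl

# Assumed directory layout (check the docs for the exact convention):
#   my_data/
#     mix/   mixture .wav files
#     s0/    first-source .wav files with matching names
#     s1/    second-source .wav files with matching names
dataset = nussl.datasets.MixSourceFolder('my_data/')
item = dataset[0]
mix = item['mix']          # an AudioSignal
sources = item['sources']  # dict mapping source name -> AudioSignal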

Downloading Pre-trained Models

Pre-trained models for speech separation, music separation, and more are available for download at the External File Zoo (EFZ), and there is a built-in Python API for downloading models from within your code.
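A minimal sketch of that workflow, assuming the efz_utils helpers described in the docs (the model file name below is a hypothetical placeholder):

import nussl

# List what is hosted on the EFZ, then download a model by name.
nussl.efz_utils.print_available_trained_models()
model_path = nussl.efz_utils.download_trained_model(
    'some-model-name.pth')  # placeholder; use a name from the printed list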

Deep Learning Architectures

We provide the following architectures out of the box:

  • Mask Inference
  • Deep Clustering
  • Chimera
  • OpenUnmix (Soon!)
  • TasNet (Soon!)
  • DualPath RNN (Soon!)

(See docs for corresponding citations)

Classical Algorithms

Additionally, nussl contains implementations of many classical source separation algorithms, including the following (a usage sketch follows the list):

  • Spatialization algorithms: DUET, PROJET
  • Factorization-based Methods: RPCA, ICA
  • Primitive Methods: Repetition (REPET, 2DFT), Harmonic/Percussive, Vocal melody extraction, Timbre clustering
  • Benchmark Methods: High-pass filter, Binary Masks, Ratio Masks, Wiener Filtering
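As a quick sketch of running a classical algorithm (the audio path is a placeholder; if memory of the API serves, separators are callable and return a list of AudioSignal estimates):

import nussl

signal = nussl.AudioSignal('mixture.wav')  # placeholder path
repet = nussl.separation.primitive.Repet(signal)
estimates = repet()  # e.g., [background, foreground] for REPET; check the docs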

Interaction

nussl supports light interaction via gradio. To launch a web interface where you can upload audio and hear separations, simply do:

import nussl

# Make a dummy signal to instantiate object.
dummy_signal = nussl.AudioSignal()

# Create some separation object with desired config.
# This will use the 2DFT algorithm, but since interaction
# is implemented in the base separation object, it is
# available to every algorithm.
ft2d = nussl.separation.primitive.FT2D(dummy_signal)

# Launch the Gradio interface. Use `share=True` to make
# a public link that you can share! Warning - people with
# access to the link will be running code on YOUR
# machine.
ft2d.interact()

Or all at once, using share=True to create a public link that you can give to others:

>>> import nussl
>>> nussl.separation.primitive.HPSS(nussl.AudioSignal()).interact(share=True)
Running locally at: http://127.0.0.1:7860/
This share link will expire in 6 hours. If you need a permanent link, email [email protected]
Running on External URL: https://22218.gradio.app

This will launch a simple web interface in your browser. (Screenshot omitted here.)

Evaluation

nussl has many built-in evaluation measures for determining how well your algorithm did, including the museval measures (BSS Eval v4), SI-SDR, SI-SIR, SI-SAR, SD-SDR, SNR, and more! We seek to provide a thorough framework for evaluating your source separation research.
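A hedged sketch of that evaluation workflow (dummy random signals stand in for real references and estimates; check the docs for the exact constructor arguments):

import numpy as np
import nussl

sr = 16000
# Two fake one-second reference sources and noisy "estimates" of them.
refs = [nussl.AudioSignal(audio_data_array=np.random.randn(1, sr), sample_rate=sr)
        for _ in range(2)]
ests = [nussl.AudioSignal(audio_data_array=r.audio_data + 0.01 * np.random.randn(1, sr),
                          sample_rate=sr)
        for r in refs]

evaluator = nussl.evaluation.BSSEvalScale(refs, ests)
scores = evaluator.evaluate()  # per-source SI-SDR, SI-SIR, SI-SAR, ...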

Installation

nussl is on the Python Package Index (PyPI), so it is easy to install with pip:

$ pip install nussl

Note: requires Python 3.7+.

For the functionality in nussl.play_utils, you'll want IPython and ffmpeg installed as well:

$ pip install IPython
$ brew install ffmpeg # on mac
$ apt-get install ffmpeg # on linux

To use the Melodia separation algorithm, install the melodia extra:

$ pip install "nussl[melodia]"

vamp doesn't install on some macOS versions, so we can't include it by default. To actually use Melodia, you'll also need the Melodia plugin; see its documentation for installation instructions.

On Linux and macOS, augmentation effects are applied via either PySoX or SoxBindings. soxbindings can't be installed from PyPI on Windows, but it may be installable from source; see its documentation for instructions. Other augmentation effects are applied via FFmpeg.

Citing

If you are using nussl in your research, please cite it with the following BibTeX entry:

@inproceedings{nussl,
    author = {Ethan Manilow and Prem Seetharaman and Bryan Pardo},
    title = {The Northwestern University Source Separation Library},
    booktitle = {Proceedings of the 19th International Society for Music Information Retrieval
        Conference ({ISMIR} 2018), Paris, France, September 23-27},
    year = {2018}
}

Issues and Contributions

See the contribution guide for detailed information. The basics: bug fixes and enhancements follow the standard GitHub process, but contributors adding new algorithms must provide benchmark files, paper references, and trained models (if applicable). Please check the issues page before contacting the authors.

Contributors


nussl's REPET and REPET-SIM implementations are based on MATLAB code from Zafar Rafii. The DUET implementation is based on MATLAB code from Scott Rickard. nussl's PROJET implementation is based on Python code from Antoine Liutkus.

See documentation and inline comments for each algorithm for more information about citations and authorship.

Acknowledgements

We'd also like to thank Jonathan Le Roux and Gordon Wichern for very helpful discussions throughout the development of this package!

License

nussl 1.0 is released under the MIT License:

The MIT License (MIT)

Copyright (c) 2016-2020 Interactive Audio Lab

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

nussl's People

Contributors

abugler, cegrief, danielfelixkim, ethman, fpishdadian, interactiveaudiolab, kimjune01, nathanshelly, pseeth, tengyifei


nussl's Issues

Documentation - REPET class static function clarification

Are the static functions in the REPET class meant for users, or are those functions you call only internally in your code? For example: I am unsure how to call static compute_beat_spectrum(X) because I don't know how to generate the autocorrelation matrix X.

At first I thought I had to use static compute_beat_spectrum(X) in order to generate the beat spectrum for my Repet object, especially since it is the first function listed on the page. However, I scrolled down and saw that get_beat_spectrum() was what I wanted. The same goes for static compute_similarity_matrix(X) and get_similarity_matrix().

Are your static functions supposed to be accessible to the public? Do they allow a user to have more control over how the similarity matrix, beat spectrum, etc. are computed by the system? Or are they only intended to be called within the program by the system? If it's the former, you should explain how users should use these functions correctly (and with the correct input). If it's the latter, you should remove these from the documentation.

You should also probably put the most important/most commonly used functions at the top of the page, such as self.run(), self.get_beat_spectrum(), self.get_similarity_matrix(), and self.plot(...).

PREM WHATS UP

This is how I'll talk to you now. This is how we're going to communicate.

Reformat documentation for autodoc

I've formatted AudioMix.py's documentation in the format/indentation required for sphinx autodoc. The rest still need to be done. Will be working on them incrementally.

REPET Plot Overrides Don't Work

Bongjun previously mentioned the error of setting plot_beat_spectrum=True on a REPET SIM object:

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
      5
      6 repet2 = Repet(signal, repet_type='sim')
----> 7 repet2.plot('RepetOutput/new_beat_spec_plot2.png', plot_beat_spectrum=True)
      8
      9 # repet3.plot('RepetOutput/new_sim_matrix_plot2.png')

/Users/kamaddio/miniconda2/envs/nussltests/lib/python2.7/site-packages/nussl/Repet.pyc in plot(self, output_file, **kwargs)
    528
    529     if plot_beat_spectrum == plot_sim_matrix == True:
--> 530         raise AssertionError('Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!')
    531
    532     if plot_beat_spectrum:

AssertionError: Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!

There is also an issue when you try to set plot_sim_matrix=True on a REPET ORIG object: no error is raised, but instead of plotting the similarity matrix, the program just plots the beat spectrum.

issue with adding signal objects

If you add two signal objects together, you do element-wise addition of their NumPy arrays. That part is OK... but if the signals have different sample rates, I don't want them to just add together. That doesn't make sense to me. You should raise an error instead.
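A minimal sketch of the requested guard, on a toy stand-in class (hypothetical code, not nussl's actual implementation):

import numpy as np

class Signal:
    # Toy stand-in for AudioSignal, just to show the requested check.
    def __init__(self, data, sample_rate):
        self.data = np.asarray(data)
        self.sample_rate = sample_rate

    def __add__(self, other):
        if self.sample_rate != other.sample_rate:
            raise ValueError(
                'Cannot add signals with different sample rates '
                f'({self.sample_rate} vs {other.sample_rate})')
        return Signal(self.data + other.data, self.sample_rate)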

Check AudioSignal.AudioData.shape to make sure it's consistent and efficient.

It seems like AudioSignal.AudioData has shape (Length, nChannels), which is probably not the most efficient way to represent this data. Additionally, we jump through a lot of syntactic hoops to get the data into a representation that we can manipulate. Find a way to represent this array that's computationally and syntactically efficient. I suspect that swapping the array dimensions might satisfy both requirements, but a lot of infrastructure will need to be rebuilt (and simplified).
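For concreteness, a sketch of the suggested dimension swap (illustrative NumPy only):

import numpy as np

audio = np.random.randn(44100, 2)      # (samples, channels), as reported
audio = np.ascontiguousarray(audio.T)  # (channels, samples)
left = audio[0]  # each channel is now a simple, contiguous row slice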

REPET find_peaks max_num bug

Function, as written on the page: find_peaks(data, min_thr=0.5, min_dist=None, max_num=1)

Simple example I used:

repet1 = Repet(signal)
data = np.array([0, 0, .2, .5, 1, 1, 0, 1, 0, 1, 0])

repet1.find_peaks(data)             # output: [[4]]
repet1.find_peaks(data, max_num=2)  # output: [[4 7]]
repet1.find_peaks(data, max_num=3)  # output: [[0 4 7]]
repet1.find_peaks(data, max_num=4)  # output: [[0 0 4 7]]
repet1.find_peaks(data, max_num=5)  # output: [[0 0 0 4 7]]
etc.

So when max_num is set to an integer value greater than the number of peaks in the data, it adds the index 0 to the output peaks_indices list. It actually keeps adding index 0 (it does this (max_num - number_of_peaks_found) times). Instead, you should probably list only the peak indices found in data, and if fewer than max_num are found, then that is fine, because it is a max_num, not a min_num or total_num.

So in the example above, peaks_indices = repet1.find_peaks(data, max_num=5) should give an output of peaks_indices = [[4 7]]. Even though max_num=5, the system only found 2 peaks, so it should return just those two rather than adding in an incorrect (and repeated) index of 0.
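A sketch of the proposed behavior (a hypothetical replacement, not the library's code; min_dist suppression is omitted for brevity):

import numpy as np

def find_peaks_fixed(data, min_thr=0.5, max_num=1):
    data = np.asarray(data)
    # A point is a peak if it meets the threshold, rises from the left,
    # and does not rise into the right (so plateaus count once).
    middle = data[1:-1]
    is_peak = (middle >= min_thr) & (middle > data[:-2]) & (middle >= data[2:])
    peak_indices = np.where(is_peak)[0] + 1
    # Return at most max_num peaks -- never pad with zeros.
    return peak_indices[:max_num]

data = np.array([0, 0, .2, .5, 1, 1, 0, 1, 0, 1, 0])
print(find_peaks_fixed(data, max_num=5))  # [4 7 9]: only real peaks, no padding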

DUET find_peaks() function generates an error

Function, as written on the page: static find_peaks(data, min_thr=0.5, min_dist=None, max_num=1)
In (#45) I worked an example of find_peaks using Repet. I used the same example in this Duet version.
duet = Duet(signal, 2)
data = np.array([0, 0, .2, .5, 1, 1, 0, 1, 0, 1, 0])
peaks_indices = duet.find_peaks(data)

I assumed this would work exactly like the Repet example; however, this threw the following exception:
(screenshot of the exception omitted)

The documentation lists min_dist as an optional parameter, stating that the find_peaks() function handles it on its own by setting min_dist = .25 * matrix dimensions. However, the input data is supposed to be an np.array row vector, which is one-dimensional (the above example has shape (11,)). Line 252 in the function, the line raising the error, accesses data.shape[1], which doesn't exist for a one-dimensional array.

It's possible you copied over the code from find_peaks2(), which is designed for matrix input. But find_peaks() asks for "a row vector of positive values (in [0,1])" and finds the peak values and corresponding indices.

Documentation - REPET class does not properly show expected input

You describe the parameters, whether or not they are optional, and their default values below the class. The class signature itself does not have this information, and instead makes it look like every parameter is optional and defaults to None:
(screenshot of the rendered class signature omitted)
Instead, you should show which parameters the user must use, as well as those that are optional and given a default value:
class Repet.Repet(input_audio_signal, sample_rate, stft_params, repet_type, similarity_threshold=0, min_distance_between_frames=1, max_repeating_frames=10, min_period=0.8, max_period=min(8, self.Mixture.SignalLength/3), period=None, high_pass_cutoff=100)

Note: I'm not sure if you can set such a default value for max_period. If not, you could probably just do something like max_period='default_max', and then inside the class: if max_period == 'default_max': max_period = min(8, self.Mixture.SignalLength/3).
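For what it's worth, the conventional Python idiom for an instance-dependent default is a None sentinel; a sketch using the names from this issue:

class Repet:
    def __init__(self, mixture_signal_length, max_period=None):
        # None means "compute the default from the instance".
        if max_period is None:
            max_period = min(8, mixture_signal_length / 3)
        self.max_period = max_period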

In addition, I followed the Repet demo code, which instantiates the Repet class with only the signal as input. If I provide only the signal and ignore the stft_params section (below the class instantiation and above repet.run()), everything still runs fine. Repet clearly has defaults for everything except input_audio_signal, including the parameters that did not seem optional, such as sample_rate, stft_params, and repet_type. You should indicate the default values in the parameter descriptions.

  • Does sample_rate automatically get tracked from the input_audio_signal?
  • What are the WindowAttributes? Is that a class? Why isn't it clickable? What can you do with it?
  • I see that repet_type links to RepetType, which takes you to the corresponding section on the page showing that the default value is "original". It is not immediately obvious from the Repet class signature that repet_type defaults to "original".

What is the intended usage of find_peaks()?

find_peaks(data, min_thr=0.5, min_dist=None, max_num=1)

In (#45) and (#48), I ran examples of find_peaks() using the Repet and Duet classes. In both tests, I simply set data = np.array([0,0,.2,.5,1,1,0,1,0,1,0]), some arbitrary "row vector of real values (in [0,1])", as specified by the documentation.

This function "finds the peak values and corresponding indices in a vector of real values in [0,1]"; however, it is unclear what data is actually supposed to be. What does this have to do with the actual Repet or Duet object? As far as I can tell, find_peaks() deals only with whatever information is contained in data, not the Repet or Duet objects.

Is there some way to generate the data directly from the Repet or Duet object so that it is relevant to the task at hand?

Overriding REPET plotting does not work

In the example under REPET class in the documentation, overriding does not work:

repet = nussl.Repet(signal, repet_type=nussl.RepetType.SIM)
repet.plot('new_sim_matrix_plot.png', plot_beat_spectrum=True)

* Error messages:

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 repet4.plot('new_sim_matrix_plot.png', plot_beat_spectrum=True)

/Users/bongjunkim/anaconda/lib/python2.7/site-packages/nussl/Repet.pyc in plot(self, output_file, **kwargs)
    495
    496     if plot_beat_spectrum == plot_sim_matrix == True:
--> 497         raise AssertionError('Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!')
    498
    499     if plot_beat_spectrum:

AssertionError: Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!

Doing STFT, then iSTFT, then STFT again crashes

Power spectrum data isn't saved correctly when doing the initial STFT. Also, make sure all of the bookkeeping is correct; sometimes we're overwriting STFT data and sometimes we're not. Why?

scipy.io.wavfile does not read .wav files written by Logic Pro

It seems that Logic Pro adds some header data when writing its wave files that scipy.io.wavfile can't handle. (One workaround is to not use files from Logic Pro, since Audacity's .wav output is fine, but this isn't really a solution.) There are a few other Python libraries that handle wave files; check them out.

Once there's a suitable replacement, this should be as simple as replacing one line in AudioSignal.py in the LoadAudioFromFile() function (plus the import line, so 2 lines).
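One candidate replacement (an assumption, not a decision): the soundfile package reads wave headers that scipy.io.wavfile chokes on.

import soundfile as sf

# Placeholder path; soundfile returns (samples, channels) float data.
audio_data, sample_rate = sf.read('logic_pro_export.wav')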

Unclear how to run NMF

The documentation for NMF is quite sparse, with many of the functions left without explanations. I tried running "make_audio_signals()" on an NMF object, but it threw a "not implemented yet" exception, so I figured that this class is still unfinished.

I tried to run() the NMF separation algorithm on my NMF object, but had a shape alignment issue, getting the following error:
ValueError: shapes (2,1025) and (1025,2925,2) not aligned: 1025 (dim 1) != 2925 (dim 1)

This tells me that either something is wrong internally in the algorithm, or I simply do not understand how to use the run() method. The documentation says that run() "assumes that all parameters have been set prior to running." I don't know what these parameters are. Are they the ones in the Nmf class initialization (below)?

class Nmf.Nmf(stft, num_templates, input_audio_signal=None, sample_rate=None, stft_params=None, activation_matrix=None, templates=None, distance_measure=None, should_update_template=None, should_update_activation=None)

The above implies that only stft and num_templates are necessary to create an NMF, and all the other parameters are optional. However, since the default values are all "None" rather than actual values, it is possible that not enough information is provided for run() to work, unless the values are set somewhere internally.

The run() function's description also says "No inputs. do_STFT and N must be set prior to calling this function." What are these parameters? They aren't anywhere in the NMF class, so I don't know how to set them before running the function.

For reference, this is how I made my NMF object. I assumed I could use the AudioSignal class's STFT method to create the STFT to be used for NMF.

signal = AudioSignal('audioFile.wav')
signal.stft_data = signal.stft()
nmf = Nmf(signal.stft_data, 2)

Beat Spectrum plot is tiny - y-axis too large?

For every song I've tested, the beat spectrum has looked something like:
(beat spectrum and similarity matrix plots omitted)

The plotted beat spectrum is so tiny that I can't get any information out of it. The axes are not clear, and they don't seem to have a sensible scaling factor. I assumed the x-axis represented the number of samples or something time-based tied to song length; however, the two songs have very different lengths (one is 13 seconds, the other 67). I don't know what the y-axis represents.

WRITE TESTS

WHAT ARE YOU DOING HOW COME YOU HAVEN'T WRITTEN ANY TESTS YET

Documentation - AudioSignal class has broken code for example usage

In the "Examples" section, the example code for computing the spectrogram and inverse stft doesn't work. There are a few issues here:

  1. The examples call STFT() and iSTFT(), but the class defines the functions as stft() and istft(). Instead of seeing how these functions are actually called, people might try to run this exact code and get the error AttributeError: 'AudioSignal' object has no attribute 'STFT'
  2. When I try to run the line sigSpec,sigPow,F,T=sig.stft(), I get the error ValueError: too many values to unpack. The stft() function only returns one output, not four.
  3. When I try to run the line sigrec,tvec=signal.istft(), I get the error Exception: Cannot do inverse STFT without self.stft_data! I can see that self.stft_data exists in the class, and I assumed all these listed variables were initialized on their own. Am I supposed to do something here?
    --> I figured out that in order to take the istft, I can run:
    signal.stft_data=signal.stft()
    sigrec,tvec=signal.istft()
    But I had to set the stft_data manually by taking the stft of the signal. I think you should make this clear.

Documentation - AudioSignal class needs variable clarification

In the AudioSignal class, after the "Parameters" list, you list out all of the variables and functions in the class. I assumed that all of these were of the form self.variable and self.function(...); however, some of them don't actually exist when I try to use them on an AudioSignal object.

  • signal = AudioSignal('sample_audio_file.wav')
  • signal.Fvec generates the error AttributeError: 'AudioSignal' object has no attribute 'Fvec'
  • signal.Tvec does the same; however, signal.time_vector works fine, which leads me to believe that Tvec is deprecated or something.
  • signal.stft_data and signal.power_spectrum_data are both initialized to empty arrays. I think you should make it clear that the user needs to populate these arrays manually, and then give examples of how to do so.
