nussl
A flexible source separation library in Python
Home Page: https://nussl.github.io/docs/
License: MIT License
Make it do something useful
Someone needs to restart the Prem Server.
Modules should be lower-case and underscored as well.
In the AudioSignal class, after the "Parameters" list, you list out all of the variables and functions in the class. I assumed that all of these were in the form of self.variable and self.function(...); however, some of these don't actually exist when I try to run them with an AudioSignal object.
signal = AudioSignal('sample_audio_file.wav')
signal.Fvec
generates the error AttributeError: 'AudioSignal' object has no attribute 'Fvec'
signal.Tvec
does the same; however, signal.time_vector works fine, which leads me to believe that Tvec is deprecated or something.
signal.stft_data and signal.power_spectrum_data are both initialized to empty arrays. I think you should make it clear that the user needs to populate these arrays manually, and then give examples of how to do so.
I've formatted AudioMix.py's documentation in the format/indentation required for sphinx autodoc. The rest still need to be done. Will be working on them incrementally.
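A populate-it-yourself example for the docs could look something like the sketch below. The AudioSignal stand-in here is a minimal mock (the real class loads audio from a file, and its stft() signature may differ); only the last two lines illustrate the actual suggestion.

```python
import numpy as np

class AudioSignal:
    """Minimal stand-in for nussl's AudioSignal; names are assumptions."""
    def __init__(self, audio_data, sample_rate=44100):
        self.audio_data = np.asarray(audio_data, dtype=float)
        self.sample_rate = sample_rate
        self.stft_data = np.array([])            # starts empty, per the issue
        self.power_spectrum_data = np.array([])  # starts empty, per the issue

    def stft(self, n_fft=1024, hop=512):
        # naive STFT: frame the signal, FFT each frame
        frames = [self.audio_data[i:i + n_fft]
                  for i in range(0, len(self.audio_data) - n_fft + 1, hop)]
        return np.array([np.fft.rfft(f) for f in frames]).T

signal = AudioSignal(np.random.randn(44100))
signal.stft_data = signal.stft()                         # populate manually
signal.power_spectrum_data = np.abs(signal.stft_data) ** 2
```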
Making it pip-able would also be great.
It seems that Logic Pro adds some header data when it writes its wave files that scipy.io.wavfile can't handle. (One workaround is to not use files from Logic Pro, because Audacity's .wav output is fine, but this isn't really a solution.) There are a few other python libraries that handle wave files, check them out.
Once there's a suitable replacement, this should be as simple as replacing one line in AudioSignal.py in the LoadAudioFromFile() function (and the import line, so 2 lines)
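For context on why scipy.io.wavfile chokes: a WAV file is a RIFF container, and Logic Pro writes extra chunks before the data chunk. A reader that walks chunks and skips unknown ones copes fine. A self-contained sketch of that pattern (the 'LGPC' chunk id below is made up for illustration):

```python
import io
import struct

def find_data_chunk(f):
    """Walk RIFF chunks, skipping unknown ones, until 'data' is found."""
    riff, size, wave_id = struct.unpack('<4sI4s', f.read(12))
    assert riff == b'RIFF' and wave_id == b'WAVE'
    while True:
        header = f.read(8)
        if len(header) < 8:
            raise ValueError('no data chunk found')
        chunk_id, chunk_size = struct.unpack('<4sI', header)
        if chunk_id == b'data':
            return f.read(chunk_size)
        f.seek(chunk_size + (chunk_size & 1), 1)  # skip; chunks are word-aligned

# Build an in-memory WAV with an unexpected vendor chunk before 'data'
fmt = struct.pack('<4sIHHIIHH', b'fmt ', 16, 1, 1, 44100, 88200, 2, 16)
extra = struct.pack('<4sI', b'LGPC', 4) + b'\x00' * 4   # fake vendor chunk
data = struct.pack('<4sI', b'data', 4) + b'\x01\x00\x02\x00'
body = b'WAVE' + fmt + extra + data
blob = struct.pack('<4sI', b'RIFF', len(body)) + body
samples = find_data_chunk(io.BytesIO(blob))
```

A chunk-skipping reader like this recovers the samples; a reader that assumes 'data' immediately follows 'fmt ' does not.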
WHAT ARE YOU DOING HOW COME YOU HAVEN'T WRITTEN ANY TESTS YET
Function, as written on the page: static find_peaks(data, min_thr=0.5, min_dist=None, max_num=1)
In (#45) I worked an example of find_peaks using Repet. I used the same example in this Duet version.
duet = Duet(signal, 2)
data = np.array([0,0,.2,.5,1,1,0,1,0,1,0])
peaks_indices = duet.find_peaks(data)
I assumed this would work exactly like the Repet example; however, it threw an exception.
The documentation lists min_dist as an optional parameter, stating that find_peaks() handles it on its own by setting min_dist = .25 * matrix dimensions. However, the input data is supposed to be an np.array row vector, which is one-dimensional (i.e. the above example has a shape of (11,)). Line 252 in the function, the line raising the error, calls data.shape[1], which does not exist for a one-dimensional array.
It's possible you copied over the code from find_peaks2(), which is designed for matrix input. But find_peaks() asks for "a row vector of positive values (in [0,1]) and finds the peak values and corresponding indices."
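The shape mismatch is easy to reproduce with numpy alone:

```python
import numpy as np

data = np.array([0, 0, .2, .5, 1, 1, 0, 1, 0, 1, 0])
one_d_shape = data.shape      # (11,): there is no second dimension
try:
    data.shape[1]             # what the failing line effectively does
    has_second_dim = True
except IndexError:
    has_second_dim = False    # raises, rather than returning None
```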
Constants.py needs to be renamed constants.py (note capitalization change).
...otherwise "import constants" in audio_signal.py does not resolve!
find_peaks(data, min_thr=0.5, min_dist=None, max_num=1)
In (#45) and (#48), I ran examples of find_peaks() using the Repet and Duet classes. In both tests, I simply set data = np.array([0,0,.2,.5,1,1,0,1,0,1,0]), some arbitrary "row vector of real values (in [0,1])", as specified by the documentation.
This function "finds the peak values and corresponding indices in a vector of real values in [0,1]"; however, it is unclear what data is actually supposed to be. What does this have to do with the actual Repet or Duet object? As far as I can tell, find_peaks() deals only with whatever information is contained in data, not the Repet or Duet objects.
Is there some way to generate the data directly from the Repet or Duet object so that it is relevant to the task at hand?
The documentation for NMF is quite sparse, with many of the functions left without explanations. I tried running "make_audio_signals()" on an NMF object, but it threw a "not implemented yet" exception, so I figured that this class is still unfinished.
I tried to run() the NMF separation algorithm on my NMF object, but had a shape alignment issue, getting the following error:
ValueError: shapes (2,1025) and (1025,2925,2) not aligned: 1025 (dim 1) != 2925 (dim 1)
This tells me that either something is wrong internally in the algorithm, or I simply do not understand how to use the run() method. The documentation says that run() "assumes that all parameters have been set prior to running." I don't know what these parameters are. Are they the ones in the Nmf class initialization (below)?
class Nmf.Nmf(stft, num_templates, input_audio_signal=None, sample_rate=None, stft_params=None, activation_matrix=None, templates=None, distance_measure=None, should_update_template=None, should_update_activation=None)
The above implies that only stft and num_templates are necessary to create an NMF, and all the other parameters are optional. However, since the default values are all "None" rather than actual values, it is possible that not enough information is provided for run() to work, unless the values are set somewhere internally.
The run() function's description also says "No inputs. do_STFT and N must be set prior to calling this function." What are these parameters? They aren't anywhere in the NMF class, so I don't know how to set them before running the function.
For reference, this is how I made my NMF object. I assumed I could use the AudioSignal class's STFT method to create the STFT to be used for NMF.
signal = AudioSignal('audioFile.wav')
signal.stft_data = signal.stft()
nmf = Nmf(signal.stft_data,2)
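As a sanity check on the shape error, here is a sketch of textbook NMF with multiplicative updates (not nussl's implementation): it needs a 2-D non-negative matrix V of shape (frequency, time), so a stereo STFT shaped (1025, frames, 2) breaks the matrix products in exactly the way the "not aligned" error suggests. Taking one channel's magnitude is one plausible workaround:

```python
import numpy as np

def nmf(V, num_templates, n_iter=100, eps=1e-9):
    """Multiplicative-update NMF: V (freq x time) ~= W @ H."""
    F, T = V.shape                             # fails for a 3-D stereo STFT
    rng = np.random.default_rng(0)
    W = rng.random((F, num_templates)) + eps   # templates
    H = rng.random((num_templates, T)) + eps   # activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

stereo_stft = np.abs(np.random.randn(1025, 200, 2))  # (freq, time, channels)
V = stereo_stft[:, :, 0]                             # one channel -> 2-D
W, H = nmf(V, num_templates=2)
```

Whether nussl intends the caller to pass a mono magnitude matrix, or to handle channels internally, is exactly what the documentation should state.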
For every song I've tested, the plotted beat spectrum is so tiny that I can't get any information out of it. The axes are not clear, and they don't seem to have a sensible scaling factor. I assumed that the x-axis represented number of samples or something time-based due to song length; however, the two songs have very different lengths (one is 13 seconds, the other is 67 seconds). I don't know what the y-axis represents.
self.bkgd = AudioSignal.AudioSignal(audio_data_array=self.bkgd)
is incorrect: you need to set the sample rate of the source signal as well. Because of this error, output files are transposed to a different key whenever the input files are not 44.1 kHz (the default rate).
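The audible symptom follows directly from the rate mismatch; a quick numeric check:

```python
# Audio recorded at 48 kHz but written out under the default 44.1 kHz
# header plays back flat, i.e. in a different key.
actual_sr, assumed_sr = 48000, 44100   # file's true rate vs. assumed default
recorded_hz = 440.0                    # A4 in the source audio
perceived_hz = recorded_hz * (assumed_sr / actual_sr)
# 404.25 Hz: about a semitone and a half flat of A4
```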
You describe the parameters, whether or not they are optional, and what the default values are below the class. The class instantiation itself does not have this information, and instead makes it look like every parameter is optional and defaulted to None:
Instead, you should show which parameters the user must use, as well as those that are optional and given a default value:
class Repet.Repet(input_audio_signal, sample_rate, stft_params, repet_type, similarity_threshold=0, min_distance_between_frames=1, max_repeating_frames=10, min_period=0.8, max_period=min(8, self.Mixture.SignalLength/3), period=None, high_pass_cutoff=100)
Note: I'm not sure if you can set such a default value for max_period. If not, you could probably just do something like max_period='default_max', and then inside the class do: if max_period == 'default_max': max_period = min(8, self.Mixture.SignalLength/3).
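The usual Python idiom for a default that depends on the instance is a None sentinel rather than a magic string; a minimal sketch (the class body is illustrative, not nussl's code):

```python
class Repet:
    """Illustrative sketch only; not nussl's actual class."""
    def __init__(self, signal_length, max_period=None):
        if max_period is None:                      # sentinel default
            max_period = min(8, signal_length / 3)  # data-dependent value
        self.max_period = max_period
```

The docs can then state the default as "max_period=None, which resolves to min(8, signal_length / 3)".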
In addition, I followed the Repet demo code, which instantiates the Repet class with only the signal as an input. If I provide only the signal and ignore the stft_params section (below the class instantiation and above repet.run()), everything still runs fine. Repet clearly has defaults for everything except input_audio_signal, including the parameters that did not seem to be optional, such as sample_rate, stft_params, and repet_type. You should indicate the default values in the parameter descriptions.
Add sample rate to self.fgnd AudioSignal object
In the "Examples" section, the example code for computing the spectrogram and inverse stft doesn't work. There are a few issues here:
The example calls sig.STFT(), which gives AttributeError: 'AudioSignal' object has no attribute 'STFT'.
When I run sigSpec, sigPow, F, T = sig.stft(), I get the error ValueError: too many values to unpack. The stft() function only returns one output, not four.
When I run sigrec, tvec = signal.istft(), I get the error Exception: Cannot do inverse STFT without self.stft_data! I can see that self.stft_data exists in the class, and I assumed all these listed variables were initialized on their own. Am I supposed to do something here?
signal.stft_data = signal.stft()
sigrec, tvec = signal.istft()
Bongjun previously mentioned the error of setting plot_beat_spectrum=True
on a REPET SIM object:
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
      5
      6 repet2 = Repet(signal, repet_type='sim')
----> 7 repet2.plot('RepetOutput/new_beat_spec_plot2.png', plot_beat_spectrum=True)
      8
      9 # repet3.plot('RepetOutput/new_sim_matrix_plot2.png')

/Users/kamaddio/miniconda2/envs/nussltests/lib/python2.7/site-packages/nussl/Repet.pyc in plot(self, output_file, **kwargs)
    528
    529     if plot_beat_spectrum == plot_sim_matrix == True:
--> 530         raise AssertionError('Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!')
    531
    532     if plot_beat_spectrum:

AssertionError: Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!
However, there is also an issue when you try to set plot_sim_matrix=True on a REPET ORIG object: no error is raised, but instead of plotting the similarity matrix, the program just plots the beat spectrum.
Function, as written on the page: find_peaks(data, min_thr=0.5, min_dist=None, max_num=1)
Simple example I used:
repet1 = Repet(signal)
data = np.array([0,0,.2,.5,1,1,0,1,0,1,0])
peaks_indices = repet1.find_peaks(data)
output: peaks_indices = [[4]]
peaks_indices = repet1.find_peaks(data, max_num=2)
output: peaks_indices = [[4 7]]
peaks_indices = repet1.find_peaks(data, max_num=3)
output: peaks_indices = [[0 4 7]]
peaks_indices = repet1.find_peaks(data, max_num=4)
output: peaks_indices = [[0 0 4 7]]
peaks_indices = repet1.find_peaks(data, max_num=5)
output: peaks_indices = [[0 0 0 4 7]]
etc.
So when max_num is set to an integer value greater than the number of peaks in the data, it adds the index 0 to the output peaks_indices list. It actually keeps adding index 0 (it does this (max_num - number_of_peaks_found) times). Instead, you should probably list only the peak indices found in data, and if fewer than max_num are found, then that is fine, because it is a max_num, not a min_num or total_num.
So in the example above, peaks_indices = repet1.find_peaks(data, max_num=5) should give an output of peaks_indices = [[4 7]]. Even though max_num=5, the system only found 2 peaks, so it should return just those two rather than padding with an incorrect (and repeated) index of 0.
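A sketch of the suggested behavior (not nussl's code; the 1-D min_dist default is an assumption based on the documented 0.25 * dimension rule, and tie-breaking among equal values differs from the outputs above): collect at most max_num peaks and simply return fewer when fewer clear the threshold, with no zero-padding.

```python
import numpy as np

def find_peaks(data, min_thr=0.5, min_dist=None, max_num=1):
    """Return at most max_num peak indices; never pad with zeros."""
    data = np.asarray(data, dtype=float)
    if min_dist is None:
        # assumption: 1-D analogue of the documented 0.25 * dimension rule
        min_dist = max(1, int(0.25 * len(data)))
    peaks = []
    for i in np.argsort(data, kind='stable')[::-1]:  # strongest values first
        if data[i] < min_thr:
            break                                    # rest are below threshold
        if all(abs(int(i) - p) >= min_dist for p in peaks):
            peaks.append(int(i))
        if len(peaks) == max_num:
            break
    return sorted(peaks)

data = np.array([0, 0, .2, .5, 1, 1, 0, 1, 0, 1, 0])
```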
(Odd meaning non-powers of two.) STFT and iSTFT have a lot of problems. It might just be best to call librosa's STFT utils...
This is how I'll talk to you now. This is how we're going to communicate.
There are both self._find_similarity_indices() and self._get_similarity_indices(). The first needs to be changed to self._compute_similarity_indices() so that the distinction is clear.
For all objects that inherit from SeparationBase, run() should return a list of spectra of all sources. Is this better than returning what the algorithm provides (e.g., in NMF, returning the W and H)?
In Transformer NMF
Are the static functions in the REPET class meant for users, or are those functions you call only internally in your code? For example: I am unsure how to call static compute_beat_spectrum(X) because I don't know how to generate the autocorrelation matrix X.
At first I thought I had to use static compute_beat_spectrum(X) in order to generate the beat spectrum for my Repet object, especially since it is the first function listed on the page. However, I scrolled down and saw that get_beat_spectrum() was what I wanted. The same goes for static compute_similarity_matrix(X) and get_similarity_matrix().
Are your static functions supposed to be accessible to the public? Do they allow a user to have more control over how the similarity matrix, beat spectrum, etc. are computed by the system? Or are they only intended to be called within the program by the system? If it's the former, you should explain how users should use these functions correctly (and with the correct input). If it's the latter, you should remove these from the documentation.
You should also probably put the most important/useful/commonly used functions at the top of the page, such as self.run(), self.get_beat_spectrum(), self.get_similarity_matrix(), and self.plot(...).
They have some github BS prefixed to them.
It just defaults to 44100 even if the object's sample rate is different
Power spectrum data isn't saved correctly when doing the initial STFT. Also, make sure all of the bookkeeping is correct; sometimes we're overwriting STFT data and sometimes we're not. Why?
In the example under REPET class in the documentation, overriding does not work:
repet = nussl.Repet(signal, repet_type=nussl.RepetType.SIM)
repet.plot('new_sim_matrix_plot.png', plot_beat_spectrum=True)
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 repet4.plot('new_sim_matrix_plot.png', plot_beat_spectrum=True)

/Users/bongjunkim/anaconda/lib/python2.7/site-packages/nussl/Repet.pyc in plot(self, output_file, **kwargs)
    495
    496     if plot_beat_spectrum == plot_sim_matrix == True:
--> 497         raise AssertionError('Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!')
    498
    499     if plot_beat_spectrum:

AssertionError: Cannot set both plot_beat_spectrum=True and plot_sim_matrix=True!
DUET_v2.py
repet_org.py
reper_org_freq.py
and corresponding demos
There are often collisions between AudioSignal.StftParams and [separation].StftParams. The best way to get rid of them is to remove the SeparationBase dependency.
It seems like AudioSignal.AudioData is of shape (Length, nChannels), which is probably not the most efficient way to represent this data. Additionally, we jump through a lot of syntactic hoops to get the data into a representation that we can manipulate. Find a way to represent this array that's computationally and syntactically efficient. I suspect that swapping the array dimensions might solve both of these requirements, but there's a lot of infrastructure that will need to be rebuilt (and simplified).
If you add two signal objects together, you do element-wise addition of their numpy arrays. That part is OK... but if these things have different sample rates, I don't want them to just add together. That doesn't make sense to me. You should error out.
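A sketch of the suggested guard (class and parameter names mirror the issue, not necessarily nussl's actual API):

```python
import numpy as np

class AudioSignal:
    """Sketch for illustration; not nussl's actual class."""
    def __init__(self, audio_data, sample_rate=44100):
        self.audio_data = np.asarray(audio_data, dtype=float)
        self.sample_rate = sample_rate

    def __add__(self, other):
        # Refuse to mix signals whose sample rates differ
        if self.sample_rate != other.sample_rate:
            raise ValueError(
                'Cannot add signals with different sample rates: '
                '%d != %d' % (self.sample_rate, other.sample_rate))
        return AudioSignal(self.audio_data + other.audio_data,
                           self.sample_rate)
```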
You need to use the 'periodic' windows for Hann, Hamming, and Blackman in order to get constant overlap-add. To do this, use the scipy.signal windows instead of numpy's and pass sym=False.
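A quick numeric check of the claim: a periodic Hann window overlap-added at 50% hop sums to a constant, while the symmetric variant does not. (scipy.signal's hann with sym=False gives the periodic form; it's built directly with numpy here so the demo is self-contained.)

```python
import numpy as np

N, hop = 8, 4                      # 50% overlap
periodic = 0.5 * (1 - np.cos(2 * np.pi * np.arange(N) / N))  # sym=False form
symmetric = np.hanning(N)                                    # numpy's default

def overlap_add(window, n_frames=6):
    out = np.zeros(hop * (n_frames - 1) + N)
    for k in range(n_frames):
        out[k * hop:k * hop + N] += window
    return out[N:-N]               # drop edge frames with partial coverage

cola_periodic = overlap_add(periodic)    # constant 1.0 everywhere
cola_symmetric = overlap_add(symmetric)  # ripples between ~0.80 and ~0.95
```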