matthewh806 / killing_me_softly_with_his_dsp
A collection of small audio experiments, applications & prototypes
Sometimes the region does not highlight properly.
This generally seems to occur when the currently playing slice is the first or the last one.
The way MIDI is being handled in the processor isn't ideal. The main issue is that there are allocations and locks in the processBlock method:
Incoming MIDI events are stored in an Array<juce::MidiMessage>. This is unlikely to cause much of a performance hit as the MIDI-in events are likely sporadic / quite few in number per block (of course this isn't necessarily the case!). However, this data structure is inconsistent with how the output MIDI data is stored (juce::MidiBuffer), and specific storage classes exist for MIDI data which should be utilised.
The output is a juce::MidiBuffer which is unfortunately accessed by both the audio thread and the message / GUI thread. I was forced to temporarily put a lock in place for access to the buffer to prevent exceptions and bad memory accesses. I haven't noticed any slowdown, but a better approach should be found.
In both cases an abstract FIFO could be used (this is what's done in MidiLoggerPluginDemo.h, see the MidiQueue class). From what I've read online that is a viable approach, though I need to read a bit more in depth about how the FIFO is thread safe.
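A minimal sketch of that FIFO approach, modelled loosely on the MidiQueue class from JUCE's MidiLoggerPluginDemo.h (the class shape and member names here are my own illustration, not the demo's actual code):

#include <array>
#include <juce_audio_basics/juce_audio_basics.h>

class MidiQueue
{
public:
    // Audio thread: push incoming messages from processBlock. No locks; note
    // that juce::MidiMessage can heap-allocate for long (sysex) messages, so
    // this is only allocation-free for ordinary short messages.
    void push (const juce::MidiMessage& message)
    {
        int start1, size1, start2, size2;
        fifo.prepareToWrite (1, start1, size1, start2, size2);

        if (size1 > 0)
            storage[(size_t) start1] = message;

        fifo.finishedWrite (size1 + size2);
    }

    // Message thread: drain the queue, e.g. from a timer callback.
    template <typename Callback>
    void pop (Callback&& callback)
    {
        int start1, size1, start2, size2;
        fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);

        for (int i = 0; i < size1; ++i)
            callback (storage[(size_t) (start1 + i)]);

        for (int i = 0; i < size2; ++i)
            callback (storage[(size_t) (start2 + i)]);

        fifo.finishedRead (size1 + size2);
    }

private:
    static constexpr int capacity = 1024;
    juce::AbstractFifo fifo { capacity };
    std::array<juce::MidiMessage, capacity> storage;
};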
Currently the build system relies on Aubio being installed on the system. It would be better and more consistent to add Aubio as a git submodule and compile it as part of the CMake process, perhaps along the lines of the sketch below.
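A rough sketch of what that could look like in CMakeLists.txt, assuming the submodule lives under thirdparty/aubio. Aubio ships its own waf-based build rather than CMake, so its C sources would need compiling directly; the paths and target names here are assumptions:

# Hypothetical: build the aubio sources from a git submodule as a static lib
file(GLOB_RECURSE AUBIO_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/thirdparty/aubio/src/*.c)
add_library(aubio STATIC ${AUBIO_SOURCES})
target_include_directories(aubio PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/thirdparty/aubio/src)
target_link_libraries(my_plugin PRIVATE aubio) # my_plugin is a placeholder target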
Use clang-format to consistently format all of the code.
Currently it's expressed in terms of seconds, but it would be nice to have the possibility of syncing with a host DAW too.
The minimum ball size is minuscule, barely visible on the screen. Increase the minimum (and consider increasing the max too?)
Should this be split into separate issues for each application?
Work on some mock ups?
Pixel style?
Since updating the version of Ubuntu on the Raspberry Pi, the web app is not able to run. The logs report an issue with finding the flask library. Needs further investigation.
When testing with Reaper I've occasionally hit the assertion jassert (dataSize > 0) in the juce::MidiMessage constructor (MidiMessage::MidiMessage). This still needs to be figured out.
It should be possible to define an octave range from which notes are drawn in the UI, by providing a min and max range
The balls' colours are fixed, based on the debug draw method b2World::DrawDebugData(), which hardcodes the colour values based on the type of body.
It would be nice to have more control over this. Hopefully without having to build a whole renderer!
Recently the use of the version number in more than one place crept in.
This should be fixed, perhaps by having both read from a single txt file? Otherwise they'll inevitably fall out of sync 😱 😱 😱
The plugin is detected by Ableton Live 11 but fails to load when dragged onto a track.
There's a thread discussing the issue on the forum. I've posted with info from my end: https://forum.juce.com/t/vst3-midi-plugins-wont-load-in-ableton-live/36323/27
Note: I can't get the example MIDI logger provided by JUCE to work either
Tested on an M1 Mac mini.
This functionality was removed during a refactor and likely needs to be reworked / tweaked to get it working again.
Allow an option for a user to draw their own polygon shapes
When the slice information is updated (e.g. a slice is added, moved or deleted), the audio engine doesn't recognise the change until it moves on from its currently playing slice.
The change should take effect immediately. There may be edge cases to watch out for if the current position in the buffer is no longer valid - see the sketch below.
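A minimal sketch of picking the change up immediately on the audio thread without locks, assuming a flag the editor sets on every slice edit (all names are illustrative, and passing the new slice boundaries safely across threads is glossed over here):

#include <atomic>
#include <cstddef>

struct SlicePlayhead
{
    std::atomic<bool> sliceDataChanged { false };
    size_t position = 0;
    size_t sliceStart = 0;
    size_t sliceEnd = 0;

    // Message thread: call whenever a slice is added / moved / deleted.
    void notifySlicesChanged() { sliceDataChanged.store (true, std::memory_order_release); }

    // Audio thread: call at the top of processBlock before rendering.
    void refreshIfNeeded (size_t newStart, size_t newEnd)
    {
        if (sliceDataChanged.exchange (false, std::memory_order_acq_rel))
        {
            sliceStart = newStart;
            sliceEnd = newEnd;

            // The edge case from above: the old playhead position may no
            // longer fall inside the slice, so clamp it back to the start.
            if (position < sliceStart || position >= sliceEnd)
                position = sliceStart;
        }
    }
};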
This will then allow the build action to fail overall too
The pitch shifting operation currently runs on a background thread and creates a whole new file, silently swapping the two once the pitch shift is performed.
It causes an ugly popup to appear while the operation runs and can lead to a crash when continuously turning the knob (I think because the processes are not ending in time?).
This could instead be implemented as a real-time parameter; this thread goes into detail about how to achieve that. That would eliminate the need for a separate thread entirely; however, it does introduce some latency which would need to be taken into account. Basically the work is performed on the audio thread at each frame, which could be considered a bit wasteful given that the audio we're using is static (just the playhead jumping around the buffer), so really the operation only needs to be performed once. (What about a hybrid approach, loading in the new file once it's done? Could get complex.)
I think I should just create a real-time pitch shifting class to achieve this rather than refactoring existing code (see the sketch below).
This needs a bit of research & planning before implementation, and it's not necessarily something that needs to be done.
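A minimal sketch of what such a class might look like, using Rubber Band's real-time mode (the library is already a dependency - see the libsamplerate note further down). The class name and structure are my own assumptions:

#include <cmath>
#include <rubberband/RubberBandStretcher.h>

class RealTimePitchShifter
{
public:
    RealTimePitchShifter (double sampleRate, int numChannels)
        : stretcher ((size_t) sampleRate, (size_t) numChannels,
                     RubberBand::RubberBandStretcher::OptionProcessRealTime)
    {
    }

    // Safe to call from a parameter change: takes effect on the next block.
    void setPitchSemitones (double semitones)
    {
        stretcher.setPitchScale (std::pow (2.0, semitones / 12.0));
    }

    // Audio thread, once per block. The output lags the input by
    // stretcher.getLatency() samples - the latency mentioned above.
    void process (const float* const* input, float* const* output, size_t numSamples)
    {
        stretcher.process (input, numSamples, false);

        // A real implementation would zero-fill while the stretcher warms up.
        if (stretcher.available() >= (int) numSamples)
            stretcher.retrieve (output, numSamples);
    }

private:
    RubberBand::RubberBandStretcher stretcher;
};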
The original intention of the application was to generate sliced break samples for exporting & to record the live output of the playhead jumping around between slices.
This could all be run inside a DAW and synced to that. The drag and drop audio functionality should remain the same (i.e. not having the audio to manipulate on the track itself but inside the plugin), otherwise it would need to be written as an ARA plugin.
Need to consider how best to sync with the host (e.g. jump marker with a BPM option? Right now it just plays until the end of each slice).
The creation of the zoom functionality in the waveform has increased the need for some kind of "grid"-like visual indicators; otherwise it's quite easy to get lost in the file.
This should be general enough to put in a wrapper class with the base waveform, with functionality to show / hide these elements as they could be distracting.
This should be enough for now. In future we can think about adding grid lines, different timecodes etc.
At the moment the slice selection is done completely randomly. There are some other interesting approaches too.
The zoom & L/R panning have been implemented.
The horizontal panning feels quite sluggish though (compare with Renoise, for example).
To make it feel more natural, and not like we're dragging ourselves along the waveform, the movement should be given some momentum whose velocity slowly decays to zero over time, so that the action continues after the swipe has ended (see the sketch below).
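A minimal sketch of that decay, assuming a JUCE Timer drives the coasting after mouse-up (the names and the decay constant are placeholders to tune):

#include <cmath>
#include <functional>
#include <juce_events/juce_events.h>

class PanMomentum : private juce::Timer
{
public:
    // The owning waveform component supplies the actual pan action.
    std::function<void (float pixels)> panBy;

    // Call on mouse-up with the velocity of the last drag movement.
    void startCoasting (float velocityPixelsPerSecond)
    {
        velocity = velocityPixelsPerSecond;
        startTimerHz (60); // match the UI refresh rate
    }

private:
    void timerCallback() override
    {
        if (panBy != nullptr)
            panBy (velocity / 60.0f); // distance covered this frame

        velocity *= 0.95f; // exponential decay: tune for the "weight" of the swipe

        if (std::abs (velocity) < 1.0f)
            stopTimer(); // close enough to zero - stop coasting
    }

    float velocity = 0.0f;
};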
This makes sense to have as a VST3 rather than just standalone.
One caveat is that the current implementation (based on the research paper) generates the grains randomly in time based on a probability distribution. For a DAW it would probably be preferable to have these generated with incoming MIDI events as well?
Needs further research
It looks awful at the moment!
Also generalise it so that it can be reused in different projects
These parameters either broke or got commented out at some point in the past and have never been reimplemented alongside all of the new work which has gone on in this project. It should be quite trivial to implement at least retrig and reverse.
Retrigger time could be parameterised too, based on either ms or tempo-synced divisions (if host sync is implemented).
Crossfade always introduces unpleasant pops - which is exactly what it's supposed to be preventing! So that should be fixed too (one possible cause and fix is sketched below).
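One common cause of audible artefacts in a crossfade is using a linear fade, whose combined energy dips mid-fade; an equal-power curve keeps it constant. Whether that's the actual bug here is an assumption - a minimal sketch:

#include <cmath>

void equalPowerCrossfade (const float* from, const float* to, float* out, int numSamples)
{
    constexpr float halfPi = 1.57079632679f;

    for (int i = 0; i < numSamples; ++i)
    {
        // t runs 0 -> 1 across the fade region
        const float t = (float) i / (float) (numSamples - 1);
        const float gainOut = std::cos (t * halfPi); // fades the old slice out
        const float gainIn  = std::sin (t * halfPi); // fades the new slice in
        out[i] = from[i] * gainOut + to[i] * gainIn;
    }
}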
The MIDI input is being completely ignored when run as a host. This should be fixed so that incoming MIDI in the process block is pushed into a thread-safe structure, with the note on / off events then created on an async update (the FIFO sketched near the top would work here too).
At the moment it's hidden away in the user's temporary files.
In addition, the file is not being overwritten, meaning only the first ever recording actually works! (If the user can even find it.)
The generalised waveform component would benefit greatly from the ability to zoom in.
Currently balls spawn at a random point inside the polygon; it would be more intuitive for them to spawn at the location of the mouse click.
This whole project is under-thought and barely functional. Rethink it from the ground up, or, if it's not of interest any more... delete it!
Currently the code isn't wrapped in a namespace; this should be done.
Perhaps KMSWHDSP (is this annoying to type?)
Figure out a way to build a universal binary, or have it build twice: once for the M1 architecture and once for non-M1. Using $(ARCHS_STANDARD) on the GitHub machine fails for some reason.
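If the CMake side is an option, asking CMake for a universal binary directly might sidestep the Xcode setting. A sketch (this must be set before the first project() call):

# Hypothetical alternative to $(ARCHS_STANDARD): build arm64 + x86_64 slices
set(CMAKE_OSX_ARCHITECTURES "arm64;x86_64" CACHE STRING "" FORCE)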
The current value is just a multiplicative factor. It would be more natural to offer it in the familiar semitones / cents units.
Perhaps even with a selector that allows x, Hz, semitones/cents? (The conversion is sketched below.)
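The conversion between the multiplicative factor and semitones / cents is straightforward (function names are illustrative):

#include <cmath>

// e.g. +12 semitones -> ratio 2.0, -12 -> 0.5, +1 semitone -> ~1.0595
double ratioFromSemitones (double semitones, double cents = 0.0)
{
    return std::pow (2.0, (semitones + cents / 100.0) / 12.0);
}

double semitonesFromRatio (double ratio)
{
    return 12.0 * std::log2 (ratio);
}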
This needs more investigation, but logging it here for now to keep it in mind. Sometimes the whole DAW crashes while using it
It would be nice to add in functionality which allows for setting the start & end of the sample (i.e. when we're not interested in using the whole thing). Similar to markers perhaps? With an S / E in the triangle? (Compare with Renoise)
Sometimes the way the balls move and the polygon spins makes me feel sick...
I'm building more VSTs now and some are through JUCE and some directly with the Steinberg SDK. The packaging script needs to be updated to gather and zip all these up properly
It would also be worthwhile to consider packaging everything together properly: https://docs.juce.com/master/tutorial_app_plugin_packaging.html
Should the package contain just the VSTs, or also include the standalone versions?
Currently the physics & renderer steps are controlled by the same frequency constant set at 60 Hz. This gives a nice smooth physics integration and render rate.
Quite interesting stuff happens when the physics timestep is made very small or very large, so this parameter is interesting to expose. The Box2D documentation says the physics timestep shouldn't change, but that's more for video games modelling a consistent world. Here we don't really care about reality...
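For reference, the stepping call in question - the fixed timestep is what would become the exposed parameter (the iteration counts are the values the Box2D manual suggests; the world setup here is illustrative):

#include <Box2D/Box2D.h>

b2World world (b2Vec2 (0.0f, -10.0f)); // gravity

void stepPhysics (float32 timeStep) // currently locked to 1.0f / 60.0f
{
    const int32 velocityIterations = 8;
    const int32 positionIterations = 3;
    world.Step (timeStep, velocityIterations, positionIterations);
}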
The libsamplerate library is required by Rubber Band (unless we set USE_LIBSAMPLERATE to OFF in the CMakeLists.txt, which will instead use the speex library, whose sources are included directly in the Rubber Band src directory).
Following a similar approach to Aubio, libsamplerate could be added directly as a git submodule & compiled as part of the Rubber Band lib compilation process.
Check the necessity of these nullptr checks in code/applications/pulsar/PluginProcessor.cpp, in PulsarAudioProcessor::handleAsyncUpdate(). They shouldn't be necessary after the recent refactorings, so perhaps it's code which I forgot to delete. (Worth noting while checking: since editor has already been confirmed non-null, the static_cast below can never yield nullptr, so the second check can't fire as written; if a checked downcast was intended, dynamic_cast would be the tool.)
However... it could also be the case that the logic that follows (ball creation etc.) triggers a method or a listener expecting the editor to pick it up, and the editor is nullptr if it isn't visible on screen (I think!). So just check it doesn't break anything before deleting, and fix the code if it does cause a crash.
...
auto* editor = getActiveEditor();
if (editor == nullptr)
{
    return;
}

auto* pulsarEditor = static_cast<PulsarAudioProcessorEditor*> (editor);
if (pulsarEditor == nullptr)
{
    return;
}
...
The uidesc file generated by Chat-GPT looks like hot garbage, full of hallucinations. FIX (i.e. ask it again and again and again and again until it's just right).
Currently the reverb model (from Freeverb) is set up to work with stereo sources.
In order to prevent exceptions when running in mono mode, a hack is in place in the MattVerbProcessor::process method:
if (data.numInputs == 1 || data.numOutputs == 1)
{
    return kResultOk;
}
This conveniently side-steps the issue... it would be better to properly implement a mono reverb option (one possible route is sketched below).
Same for ChatVerb!
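One possible mono path: feed the single channel to both inputs of the stereo Freeverb model and average the wet outputs back down. A sketch, assuming Freeverb's revmodel class; the scratch vectors here allocate, so a real implementation would preallocate them:

#include <vector>
#include "revmodel.hpp" // Freeverb's stereo reverb model

void processMono (revmodel& model, float* buffer, long numSamples)
{
    // Duplicate the mono input into a stereo pair for the model.
    std::vector<float> left (buffer, buffer + numSamples);
    std::vector<float> right (buffer, buffer + numSamples);

    model.processreplace (left.data(), right.data(), left.data(), right.data(), numSamples, 1);

    // Average the stereo wet signal back down to the mono output.
    for (long i = 0; i < numSamples; ++i)
        buffer[i] = 0.5f * (left[i] + right[i]);
}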
The note-off events are currently hardcoded at 30 ms, which means unless the sound has a long tail it dies away very quickly. It would be interesting to add some variation to this.
The value of the bit depth parameter (which affects the quantisation level internally) is not initialised properly (or, at the very least, the quantisation level parameter doesn't get a notification with the initial value when it's registered as a listener).
As I'm building more and more VSTs, it's probably worthwhile to add the validation tester to the CI system.
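Assuming the validation tester in question is Tracktion's pluginval, a CI step could be as simple as running, for example, pluginval --strictness-level 5 --validate <path-to-plugin.vst3> against each built plugin; it exits non-zero on failure, which would also let the build action fail overall as mentioned above.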
Chat-GPT has already explained how it could be possible to implement convolution reverb via both temporal and spectral approaches.
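For reference, the temporal approach is plain direct-form convolution of the input with an impulse response - O(N*M), so only practical for short IRs - while the spectral approach performs the equivalent multiplication in the FFT domain (e.g. juce::dsp::Convolution does a partitioned-FFT version). A minimal direct-form sketch:

#include <cstddef>
#include <vector>

std::vector<float> convolve (const std::vector<float>& input, const std::vector<float>& ir)
{
    // Output length of a full linear convolution: N + M - 1
    std::vector<float> out (input.size() + ir.size() - 1, 0.0f);

    for (size_t n = 0; n < input.size(); ++n)
        for (size_t k = 0; k < ir.size(); ++k)
            out[n + k] += input[n] * ir[k];

    return out;
}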