labsound / labsound
:microscope: :speaker: graph-based audio engine
Home Page: http://labsound.io
License: Other
Implement a mobile-specific destination for iOS devices using CoreAudio
The canonical way of mutating time-varying data on a node is through an AudioParam. Fix the cases where the API mucks this up a bit (cf. PannerNode and several member classes).
There are ~150 non-critical lingering issues tagged with these terms in the codebase. The vast majority are inherited from WebKit (most annotated as FIXME). LabSound-specific issues are mostly annotated @tofix.
The Android fork by @eleventigerssc suggests a need for SoundBuffer to take a more general approach to file loading.
I'm hesitant to reintroduce factories to LabSound, though. The one case where a factory seems justified is when contained objects must be created but cannot be owned outside the containing object; in other words, when the owning object must maintain lifetime ownership of the produced object.
Rather than going with a factory approach, perhaps SoundBuffer could take an asset loading lambda that resolves an asset reference into an array of bytes or something like that.
Or perhaps SoundBuffers could reasonably be considered the sort of object that shouldn't have external ownership, because in some cases they could be artifacts of something internal to the engine, for example if the SoundBuffer wraps a hardware resource.
Any thoughts?
The AudioContext destructor sets the atomic updateThreadShouldRun to false. But that doesn't seem enough to terminate the thread, because the join condition is a disjunction instead of a conjunction, so it's entirely possible for this to hang indefinitely. Should the thread condition use && instead?
Implement a mobile-specific destination for Android devices using OpenSL ES
A simple pulse generator. Alternate setting for a StepGenerator, which holds the value until the next set operation.
Internally, LabSound's graph engine can handle mixing up to 6 channels (5.1 audio). Presently, no methods are available to configure the AudioDestinationNode's output to anything other than 2-channel 44.1k.
I've been thinking about language bindings.
@modulesio has started a node binding. I notice that the binding is at the root, in a directory called lib. I suspect this is a gyp convention on where code should be placed so that the build system can find all the components.
Rust has a similar fixed directory layout when making crates.
I'm wondering if language bindings might be created referencing LabSound as a submodule, with the projects named according to a convention, such as
labsound-rs
labsound-node
labsound-py
labsound-c
etc., with their required structure contained within?
The spec states that there is a way to temporarily suspend rendering. However, in LabSound you can only start rendering and never stop it, API-wise. The internal AudioDestination implementation actually has the needed stop() method, but it is hidden from the exported interfaces. Specifically, AudioDestinationNode has only startRendering() and no stopRendering(). Is there a problem with letting people call it?
This is the ONLY use of nodeType anywhere; let's just eliminate nodeType. We don't need home-made RTTI across the entire API just to support this one check :\
// First check if this node is a SampledAudioNode. If so, let it know about us so that doppler shift pitch can be taken into account.
if (node->nodeType() == NodeTypeAudioBufferSource)
{
    SampledAudioNode * bufferSourceNode = reinterpret_cast<SampledAudioNode *>(node);
    bufferSourceNode->setPannerNode(this);
}
The HRTF implementation in LabSound functions well but has a large code surface area. Support for ambisonics and other HRTF models would be helpful, but would add further complexity. Proposal: wrap the audio spatialization features of the Steam Audio SDK into a core node for LabSound, replacing all existing HRTF/HRTFPanner code.
https://github.com/ValveSoftware/steam-audio/releases
The Steam Audio SDK is "openly" licensed, but not OSS; it is middleware in the strict sense of the word. It goes against my philosophy of not including binary dependencies in LabSound, but the features might be worth it.
What do you say, @meshula ?
What was the thinking on going with RtAudio vs. PortAudio as a base layer?
It shouldn't be hard to add a Raspberry Pi target to the cmake scripts.
Possibly based on this: https://github.com/Metabog/BizzareDelay2014/blob/440ad913395f847dd4a084b53a3a1cff0277167f/DSP_TapeDelay.cpp (Public Domain)
I'm trying to set up LabSound to build using gyp and get it running on Windows, because I think it's a cool project. Unfortunately, many of the sources are looking for WebCore/WebCoreHeaderDetection.h, which is not in the repository. Am I configuring something incorrectly, or are there external dependencies I can set up on my machine?
Implement an example showing how an AudioContext in offline rendering mode can be used for efficient musical feature extraction
Several lingering disconnect methods on AudioContext need to be tested
So, speaking of WAA, they have this kind of interface:
interface BaseAudioContext : EventTarget {
...
AudioBuffer createBuffer (unsigned long numberOfChannels, unsigned long length, float sampleRate);
Promise<AudioBuffer> decodeAudioData (ArrayBuffer audioData, optional DecodeSuccessCallback successCallback, optional DecodeErrorCallback errorCallback);
And then those guys in threejs use it:
Object.assign( AudioLoader.prototype, {
    load: function ( url, onLoad, onProgress, onError ) {
        var loader = new FileLoader( this.manager );
        loader.setResponseType( 'arraybuffer' );
        loader.load( url, function ( buffer ) {
            var context = AudioContext.getContext();
            context.decodeAudioData( buffer, function ( audioBuffer ) {
                onLoad( audioBuffer );
            } );
        }, onProgress, onError );
    }
} );
Well, that bothers me a lot, since I'm currently investigating ways to port LabSound onto Node.js. But as LabSound does not comply with the WAA standard so far, I would have to patch it first. This is why I ask whether there is any meaningful explanation for 934b6bf? Probably there were some issues with this class, or was it just a matter of over-optimization?
Port one of the algorithms from https://github.com/ddiakopoulos/MoogLadders
Even if the node was not producing audio
Build out an example showing a couple of useful features for games:
Implement a digital recreation of one of the filters found on the MS-10 or MS-20
Algorithmic reverb node based on the 16 delay-line implementation found in https://github.com/dliganov/Chaotic-DAW/blob/master/rosic/effects/rosic_Reverb.h (public domain)
Implement a configurable circular buffer node that can serve as a collection primitive for audio analysis applications
Something like this: https://vimeo.com/133730628
Possibly port from existing WebAudio implementation: http://js.do/rsiqueira/theresassoundworld-thx-deep-note
Build out an example showing how to load and play 5.1 audio using the new audio configuration API
Hello,
How difficult would it be to move the whole engine (including effects DSP, analysers with optimisations, etc.) to 64-bit doubles?
I know this is not useful for most use cases, but I may have a use case (which I cannot disclose) where it would be necessary.
Thank you,
Steve
DelayNode is needlessly abstracted for a simple API
DelayNode => DelayProcessor => DelayDSPKernel => AudioDSPKernelProcessor
Build an example for the SampledInstrumentNode
For me, RtAudio hangs on app close on Windows due to the destructor; basically this member. I don't know if this is a bug in RtAudio or if the APIs are being used incorrectly, but I couldn't figure out how to make it not hang. My fix, which probably isn't right, was to make the member a pointer and not destroy it on app termination. This works, but it would be nice to have a cleaner shutdown.
It's been a while since I've used the OSX version. I just did a fresh clone and there was an issue with assert vs. ASSERT and std::min/max vs. MIN, etc. Pull request for those later, but how do you feel about removing libpd as a default in the included project?
Hello.
Walking through the code, I've just found that the lab::AnalyserNode class lacks the ability to set the fftSize value; it has only a getter. lab::RealtimeAnalyser lacks the same thing.
However, the setter exists in Chromium's implementation of the above classes, and it doesn't seem to be too hacky. I think this method is needed, because the spec states that this property is settable.
As a temporary workaround I could resort to re-creating the lab::AnalyserNode instance in my JS (bindings) setter whenever the fftSize value is changed from JS. But it would really be better if we avoided the hackery and put this feature into the core.
Blink has a biquad disconnection issue. Do we?
https://chromium.googlesource.com/chromium/blink/+/510aa7c7abeb80c341158b38a272bbe45bd1da4b
I've tried running the LabSound examples on two different PCs, only to find that they suddenly crash on my older AMD machine, whereas a newer Intel laptop runs them fine. The debug build was not helpful at all, because the exception occurs in a very random place.
Then I googled a bit and found that AVX mode might be the cause, and it turned out to be so. Removing /arch:AVX helped.
Whenever I run one of the samples, the program always seems to crash in the MakeAudioContext function, precisely in the AudioListener constructor.
Exception thrown at 0x0153AFA7 in LabSoundExamples.exe: 0xC000001D: Illegal Instruction.
What might cause this problem?
I'm using VS2017, but I remember having this same problem with VS2013 as well. Is this hardware-specific, or is there something missing in my configuration settings? I'm trying to run this on Windows 7 (x64) Home Premium SP1.
Hi. I'm trying to run cmake on macOS and immediately hit cmake errors. The same goes when I try on Debian 9 or CentOS 7. I invoke cmake by issuing "cmake ." at the clone's root.
The problems I am seeing are:
On macOS:
[ 42%] Building C object CMakeFiles/libnyquist.dir/third_party/libnyquist/src/VorbisDependencies.c.o
In file included from /Users/jay/dev/repos/heynow/labsound/third_party/libnyquist/src/VorbisDependencies.c:62:
In file included from /Users/jay/dev/repos/heynow/labsound/third_party/libnyquist/third_party/libvorbis/src/vorbisenc.c:155:
/Users/jay/dev/repos/heynow/labsound/third_party/libnyquist/third_party/libvorbis/src/modes/setup_44.h:18:10: fatal error:
'modes/floor_all.h' file not found
#include "modes/floor_all.h"
^~~~~~~~~~~~~~~~~~~
It seems to me that cmake didn't fully set things up for the third_party dependencies. What should I be doing differently here? Should I invoke cmake with some special arguments?
A similar set of problems occurs when building on Linux:
[ 12%] Building C object CMakeFiles/libopus.dir/third_party/libnyquist/third_party/opus/opusfile/src/info.c.o
In file included from /home/centos/LabSound/third_party/libnyquist/third_party/libogg/include/ogg/ogg.h:25:0,
from /home/centos/LabSound/third_party/libnyquist/third_party/opus/opusfile/include/opusfile.h:109,
from /home/centos/LabSound/third_party/libnyquist/third_party/opus/opusfile/src/include/internal.h:32,
from /home/centos/LabSound/third_party/libnyquist/third_party/opus/opusfile/src/info.c:16:
/home/centos/LabSound/third_party/libnyquist/third_party/libogg/include/ogg/os_types.h:143:32: fatal error: ogg/config_types.h: No such file or directory
# include <ogg/config_types.h>
Again, it seems like I'm missing something with cmake. Or is it possible no one has done 'make' builds on this repository recently? Please advise, and THANK YOU for your help.
/Jay
Deprecate the existing Mac-specific destination node and prefer RtAudio across all desktop configurations.
Both pure-math (std::sinf) and fast implementations (e.g. Taylor series approximations).