orottier / web-audio-api-rs
A Rust implementation of the Web Audio API, for use in non-browser contexts
Home Page: https://docs.rs/web-audio-api/
License: MIT License
Some improvements could be made in graph.rs when rendering an audio quantum: use a Cell or equivalent.

Executing the following code:
use web_audio_api::context::{AsBaseAudioContext, AudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};

fn main() {
    let context = AudioContext::new();

    // create the left oscillator
    let left = context.create_oscillator();
    // create the right oscillator
    let right = context.create_oscillator();
    // set a different frequency to distinguish the left from the right osc
    right.frequency().set_value(1000.);

    // create a merger
    let merger = context.create_channel_merger(2);
    // connect the left osc to the left input of the merger
    left.connect_at(&merger, 0, 0).unwrap();
    // connect the right osc to the right input of the merger
    right.connect_at(&merger, 0, 1).unwrap();
    // connect the merger to the speakers
    merger.connect(&context.destination());

    // start the oscillators
    left.start();
    right.start();

    // enjoy listening
    std::thread::sleep(std::time::Duration::from_secs(4));
}
Expected:
440Hz on the left speaker
1000Hz on the right speaker
Observed:
440Hz + 1000Hz on the left speaker
440Hz + 1000Hz on the right speaker
I think the root cause is the Graph::add_node() function, which always allocates a mono AudioBuffer for each output:
pub fn add_node(
    &mut self,
    index: NodeIndex,
    processor: Box<dyn AudioProcessor>,
    inputs: usize,
    outputs: usize,
    channel_config: ChannelConfig,
) {
    // todo, allocate on control thread, make single alloc..?
    let inputs = vec![AudioBuffer::new(self.alloc.silence()); inputs];
    let outputs = vec![AudioBuffer::new(self.alloc.silence()); outputs];
    [...]
}
The majority of audio nodes have 1 output, including the MergerNode, but the MergerNode carries N channels in this single output stream, N corresponding to its number of inputs.
There are hidden states in the specs that make it difficult to read. My understanding is that a node has:
- Inputs and/or Outputs
- Inputs and Outputs are streams
- A stream is composed of one or more channels
- A channel is a mono audio data array
- numberOfInputs == number of input streams
- numberOfOutputs == number of output streams
- channelCount is NOT the number of input channels
- channelCount is NOT the number of output channels
- computedChannelCount is the number of input channels
- computedChannelCount is computed from channelCount, channelCountMode, and channelInterpretation (see the spec)
- [hidden] computedOutputChannelCount == number of output channels. In the case of AudioWorklet, the number of output channels is not computed but explicitly specified by the outputChannelCount parameter.
- computedOutputChannelCount depends on the node type and the "actively processing" state of the node
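As an illustration of how the computed channel count follows from channelCount and channelCountMode, here is a standalone sketch (hypothetical helper, not the crate's code) of the spec's rules:

// Hypothetical helper, for illustration only: derive the computed number of
// input channels for a node, per the spec's channelCountMode rules.
#[derive(Clone, Copy)]
enum ChannelCountMode { Max, ClampedMax, Explicit }

fn computed_channel_count(
    channel_count: usize,          // the node's channelCount attribute
    mode: ChannelCountMode,        // the node's channelCountMode attribute
    connection_channels: &[usize], // channel counts of all incoming connections
) -> usize {
    // largest channel count among the connections feeding this input
    let max_incoming = connection_channels.iter().copied().max().unwrap_or(1);
    match mode {
        // "max": ignore channelCount, use the largest incoming count
        ChannelCountMode::Max => max_incoming,
        // "clamped-max": like "max", but clamped to channelCount
        ChannelCountMode::ClampedMax => max_incoming.min(channel_count),
        // "explicit": always exactly channelCount
        ChannelCountMode::Explicit => channel_count,
    }
}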
Possible solution:
There are some differences between the Rust AudioBuffer API and the Web Audio spec AudioBuffer API (see the specs for more details).
Possible solution:
- AudioBuffer::sample_rate(&self) -> f32
- rename the sample_len() function as length()
- rename the channel_data function as get_channel_data
- rename the channel_data_mut function as get_channel_data_mut
- make the from_channels function private and add a public copy_to_channel function and a copy_from_channel function

There is this concept of "acquire the content" in the spec (see the specs for more details). I think it could be emulated by systematically using move semantics with AudioBuffer. What do you think?
If so, it should be documented.
https://webaudio.github.io/web-audio-api/#rendering-loop
Cycles are currently fully muted
One small thing is that the error messages are a bit confusing. Nothing really important, but maybe it would be better to override them with some generic message (on the other hand, it would possibly make further debugging more complicated, I don't really know):
> --------------------------------
> Error decoding audio file: "samples/empty_2c.wav"
> IoError(Custom { kind: UnexpectedEof, error: "end of stream" })
> --------------------------------
> --------------------------------
> Error decoding audio file: "samples/corrupt.wav"
> IoError(Custom { kind: UnexpectedEof, error: "end of stream" })
> --------------------------------
> --------------------------------
> Error decoding audio file: "samples/sample.aiff"
> DecodeError("mp3: invalid mpeg audio layer")
> --------------------------------
> --------------------------------
> Error decoding audio file: "samples/sample.webm"
> DecodeError("mp3: invalid main_data_begin")
> --------------------------------
Originally posted by @b-ma in #87 (comment)
The implementation should follow this part of the specification (see DecodeAudioData)
Fix all clippy warnings and errors (> 90).
Solution: I can make a PR to solve this, if accepted
In the spec, exceptions are thrown all over the place (when creating nodes, when updating attributes, when calling methods..)
There are a few options for the Rust version:
- return a Result<..> for fallible operations
- return a Result<..> for all operations (for consistency, even infallible ones)
- offer a try_... method for fallible operations, returning a Result
All variants have pros and cons. We need to strike a balance between correctness and pragmatism.
e.g. https://docs.rs/curl/0.4.41/curl/ returns a Result for every operation, meaning your code will be littered with unwraps.
The Rust stdlib strikes a reasonable balance between (well documented) panics and offering try_... functions, in my opinion (e.g. https://doc.rust-lang.org/std/cell/struct.RefCell.html#method.try_borrow).
This is currently handled with AudioProcessor::tail_time.
To conform to the spec of the AudioWorkletProcessCallback, this should be refactored.
Related discussion in #46 (comment)
https://www.w3.org/TR/webaudio/#callback-audioworketprocess-callback
https://www.w3.org/TR/webaudio/#dom-audioparam-automationrate
The automation rate, barring some constraints, can be changed by the control thread.
- add set_automation_rate to impl AudioParam
- AudioParamValues.get(..) now always returns a slice of full render quantum length, but the spec says it can be of length 1 too (for k-rate params and for constant pieces) [1]. For type safety we can also make it return an enum.
[1] https://www.w3.org/TR/webaudio/#audioworkletprocess-callback-parameters
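A type-safe return value could look something like the following sketch (hypothetical names, not the crate's current API):

// Hypothetical sketch: make the param lookup return an enum instead of
// always returning a full-length slice.
enum ParamValues<'a> {
    // k-rate params and constant automation pieces: a single value
    Constant(f32),
    // a-rate params: one value per frame of the render quantum
    Slice(&'a [f32]),
}

impl<'a> ParamValues<'a> {
    // convenience accessor: value at frame `i` of the current quantum
    fn at(&self, i: usize) -> f32 {
        match self {
            ParamValues::Constant(v) => *v,
            ParamValues::Slice(s) => s[i],
        }
    }
}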
https://www.w3.org/TR/webaudio/#UpMix-sub
https://www.w3.org/TR/webaudio/#down-mix
A few common mixing implementations are already in src/node.rs
pub fn mix(&mut self, channels: usize, interpretation: ChannelInterpretation)
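As a reference for the speaker-interpretation rules, here is a minimal standalone sketch (an assumed helper, not the existing mix implementation) of the mono/stereo cases defined by the spec:

// Sketch of the speaker up-/down-mix rules for the mono <-> stereo cases,
// as defined in https://www.w3.org/TR/webaudio/#UpMix-sub and #down-mix.
// `input` holds one Vec per channel; the helper itself is hypothetical.
fn mix_mono_stereo(input: &[Vec<f32>], target_channels: usize) -> Vec<Vec<f32>> {
    match (input.len(), target_channels) {
        // mono -> stereo: copy the mono channel to both left and right
        (1, 2) => vec![input[0].clone(), input[0].clone()],
        // stereo -> mono: output = 0.5 * (left + right)
        (2, 1) => {
            let mono = input[0]
                .iter()
                .zip(&input[1])
                .map(|(l, r)| 0.5 * (l + r))
                .collect();
            vec![mono]
        }
        // same count: pass through unchanged
        _ => input.to_vec(),
    }
}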
I profiled heap allocations with heaptrack and found that the bottleneck is the AudioParamProcessor::tick method, which seems to allocate at each call (at least for a-rate parameters).
impl AudioParamProcessor {
    pub fn value(&self) -> f32 {
        if self.value.is_nan() {
            self.default_value
        } else {
            self.value.clamp(self.min_value, self.max_value)
        }
    }

    fn tick(&mut self, ts: f64, dt: f64, count: usize) -> Vec<f32> {
        // store incoming automation events in sorted queue
        for event in self.receiver.try_iter() {
            self.events.push(event);
        }

        // setup return value buffer
        let a_rate = self.automation_rate == AutomationRate::A;
        let mut result = if a_rate {
            // empty buffer
            ////////// I think this is where the allocation happens //////////
            Vec::with_capacity(count)
        } else {
            // filling the vec already, no expensive calculations are performed later
            vec![self.value(); count]
        };
I am learning the profiling tools in the process, so what I am saying should be double-checked to be sure.
Originally posted by @Jerboas86 in #15 (comment)
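One possible direction (just a sketch of the general technique, not the fix actually adopted by the crate) is to keep a reusable buffer on the processor and hand out a slice instead of returning a fresh Vec on every tick:

// Sketch: reuse a preallocated scratch buffer instead of allocating a Vec
// per call. Field and method names are hypothetical.
struct ParamScratch {
    buffer: Vec<f32>, // allocated once, on the control thread
}

impl ParamScratch {
    fn new(render_quantum_size: usize) -> Self {
        Self { buffer: vec![0.; render_quantum_size] }
    }

    // fill the scratch buffer for this quantum and return a slice into it;
    // no heap allocation happens on the audio thread
    fn tick(&mut self, value: f32, count: usize) -> &[f32] {
        for sample in self.buffer[..count].iter_mut() {
            *sample = value; // compute the automation value per frame here
        }
        &self.buffer[..count]
    }
}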
The following test fails sometimes. We should probably use some locks or sleeps to make sure output is generated before it is read:
test src/media.rs - media::MediaElement (line 74) ... FAILED
failures:
---- src/media.rs - media::MediaElement (line 74) stdout ----
Test executable failed (exit code 101).
stderr:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: BufferDepletedError', src/media.rs:29:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Context::current_time always returns 0.
Run the following example to reproduce:
use web_audio_api::context::{AsBaseAudioContext, AudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};
fn main() {
let context = AudioContext::new();
let osc1 = context.create_oscillator();
osc1.connect(&context.destination());
osc1.frequency().set_value_at_time(440., 0.);
osc1.start();
loop {
println!("Time: {}", context.current_time());
}
}
The example keeps printing 0:
Time: 0
Time: 0
Time: 0
Time: 0
from web audio api spec:
Thus, for a running context, currentTime increases steadily as the system processes audio blocks, and always represents the time of the start of the next audio block to be processed.
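A fix could be sketched roughly like this (illustrative names only, not the crate's internals): the render thread publishes the start time of the next block after each rendered quantum, and the control thread reads it.

use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Sketch: the render thread publishes the time of the start of the next
// quantum; the control thread reads it for current_time().
fn advance_current_time(shared: &Arc<AtomicU64>, frames_rendered: u64, sample_rate: f64) {
    let next_block_start = frames_rendered as f64 / sample_rate;
    // store the f64 as raw bits so it can live in an atomic
    shared.store(next_block_start.to_bits(), Ordering::Release);
}

fn current_time(shared: &Arc<AtomicU64>) -> f64 {
    f64::from_bits(shared.load(Ordering::Acquire))
}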
File: param.rs
Relevant section in spec: https://webaudio.github.io/web-audio-api/#AudioParam
- panic calls should be replaced by clean and compliant Errors
- automation methods should return self or Result(self, Error) to allow method chaining
- validate the automation arguments (times, time_constant)
- AudioNode::new should be updated to remove the set_value or set_value_at_time calls; default values should be properly initialised at instantiation without calling an automation method explicitly
- set_target_at_time can run forever as it has no "real" end time or value; a possible performance improvement would be to internally cancel the event when the intrinsic value is close enough to the target (note that Chrome implements such a strategy)
- keep the AutomationEventQueue in the control thread instead of the audio thread (e.g. Arc<Mutex<AutomationEventQueue>>?) - maybe necessary for proper Error handling (cf. discussion in #62 (comment))
- change the set_value_curve_at_time signature to use Vec<f32> instead of &[f32] (cf. discussion in #62 (comment))

In the specs, we can build an AudioContext with an optional AudioContextOptions argument:
dictionary AudioContextOptions {
(AudioContextLatencyCategory or double) latencyHint = "interactive";
float sampleRate;
};
I think we should extend these options like this:
dictionary AudioContextOptions {
(AudioContextLatencyCategory or double) latencyHint = "interactive";
float sampleRate;
unsigned long channels = 2;
};
This way, the user can specify the number of output channels that they would like. The current behavior is that the user only has access to the default configuration given by CPAL.
Should AudioContext::new(options: Option<AudioContextOptions>) be fallible?
My opinion is that it should not. The method should fall back to the CPAL default config if the user-requested config fails, and provide the supported config in an error message.
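A Rust translation of the extended dictionary could look roughly like this (a sketch only; the channels field is the proposed extension, not part of the spec dictionary):

// Sketch of the proposed options struct; field names mirror the WebIDL above.
pub enum AudioContextLatencyCategory {
    Balanced,
    Interactive,
    Playback,
    // a concrete latency in seconds, mirroring the `double` variant
    Custom(f64),
}

pub struct AudioContextOptions {
    pub latency_hint: Option<AudioContextLatencyCategory>, // defaults to "interactive"
    pub sample_rate: Option<f32>,                          // fall back to the device default
    pub channels: Option<u32>,                             // proposed extension, defaults to 2
}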
https://www.w3.org/TR/webaudio/#OfflineAudioContext-methods
Notice the suspend function has the rather odd feature that you can specify the suspend time. We can implement that (later) as an additional method suspend_at.
Add the BiquadFilterNode to web-audio-api-rs to be compliant with the web audio api spec.
Solution:
I have not yet written a prototype, but I think I can reach at least partial compliance for this node.
May I begin to work on it?
https://webaudio.github.io/web-audio-api/#StereoPannerNode
The more complicated PannerNode is already implemented, definitely have a look at that one when implementing this
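For reference, the equal-power panning gains defined by the spec for a mono input to the StereoPannerNode boil down to the following (a standalone sketch, not taken from the crate):

use std::f32::consts::FRAC_PI_2;

// Equal-power stereo panning gains for a mono input, per the StereoPannerNode
// spec: pan is in [-1, 1], -1 = full left, +1 = full right.
fn stereo_pan_gains(pan: f32) -> (f32, f32) {
    let pan = pan.clamp(-1., 1.);
    // map pan to [0, 1] and take a quarter cosine/sine curve
    let x = (pan + 1.) / 2.;
    let gain_left = (x * FRAC_PI_2).cos();
    let gain_right = (x * FRAC_PI_2).sin();
    (gain_left, gain_right)
}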
Hey, I'm opening this issue because, reading the code and trying to understand it, there are still some aspects that I struggle with and that I think could be clarified and maybe improved.
Disclaimer: I'm just going through my understanding to try to be clear, I don't pretend anything is ground truth here.
So, my impression is that there should be 3 different types of audio buffers while there are only 2 for now:
1. The buffers used to compute blocks of signal (i.e. alloc::AudioBuffer). Nothing special to say about this one, except that it could be renamed to AudioBlock, AudioBus or AudioRenderQuantum to avoid confusion with the AudioBuffer exposed by the API (which would also lead to renaming BUFFER_SIZE to BLOCK_SIZE or RENDER_QUANTUM_SIZE for consistency).
2. The AudioBuffer as defined by the Web Audio API, which is only consumed by the AudioBufferSourceNode and the ConvolverNode, or returned through a Promise by OfflineContext.startRendering. This one should be completely loaded in memory for very fast and accurate access by the nodes that consume it. audioContext.decodeAudioData should therefore work from an asset (e.g. some audio file) completely loaded in memory (retrieved from the network (XHR call in JS land) or the file system) and perform the following steps: decode the asset, resample it to audioContext.sampleRate, and build an AudioBuffer that can be consumed without further processing by e.g. the AudioBufferSourceNode.
3. Some other audio buffer which is more related to a streaming paradigm (data is received on the fly, with questions of buffering, back pressure, etc.) and can be piped to some WebAudio nodes, but is defined by or related to other specs such as WebRTC (microphone, audio streams from the network and MediaStreamAudioSourceNode) or HTML (<audio> tag and MediaElementAudioSourceNode). In that case decoding and resampling can only be done on the fly when the chunks are received, and the WebAudio nodes do not expose any start, stop or seek possibilities.
In the current implementation, my impression is that the AudioBuffer defined in buffer.rs tries to handle both the 2. and 3. paradigms in the same way and is finally closer to 3. and not really optimized for 2.
Does this make sense or am I mistaken somewhere? If you think it's a good idea, I would be happy to try to propose something really dedicated to 2.
Some functionality was added but is subject to change. Let's introduce an opt-in nightly
feature flag and make unstable functionality only available there.
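For example, the gate could look like this (assuming a nightly feature declared in Cargo.toml; names are illustrative):

// Only compile unstable items when the opt-in `nightly` cargo feature is on,
// e.g. declared in Cargo.toml as: nightly = []
#[cfg(feature = "nightly")]
pub mod unstable {
    /// An API that is still subject to change.
    pub fn experimental_feature() {
        // ...
    }
}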
Observed:
linear_ramp_to_value_at_time(v, end) sets the parameter value immediately at the time specified in the end argument.
Expected:
From the web audio api spec (https://www.w3.org/TR/webaudio/#:~:text=linearRampToValueAtTime(value%2C%20endTime,the%20given%20value):
linearRampToValueAtTime(value, endTime)
Schedules a linear continuous change in parameter value from the previous scheduled parameter value to the given value
Example to reproduce
Run
use web_audio_api::context::{AsBaseAudioContext, AudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};
fn main() {
let context = AudioContext::new();
let osc1 = context.create_oscillator();
osc1.connect(&context.destination());
osc1.frequency().set_value_at_time(440., 0.);
osc1.start();
osc1.frequency().linear_ramp_to_value_at_time(220., 0.4);
std::thread::sleep(std::time::Duration::from_millis(1000));
}
And run
use web_audio_api::context::{AsBaseAudioContext, AudioContext};
use web_audio_api::node::{AudioNode, AudioScheduledSourceNode};
fn main() {
let context = AudioContext::new();
let osc1 = context.create_oscillator();
osc1.connect(&context.destination());
    osc1.frequency().set_value_at_time(440., 0.);
    osc1.start();
    osc1.frequency().set_value_at_time(220., 0.4);
std::thread::sleep(std::time::Duration::from_millis(1000));
}
No difference in scheduling the parameter change is observed.
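For reference, the spec defines the ramp as a per-sample linear interpolation between the previous event and the target; a sketch of the formula (variable names are ours):

// Linear ramp value at time `t`, per the spec formula:
// v(t) = v0 + (v1 - v0) * (t - t0) / (t1 - t0), for t0 <= t <= t1
fn linear_ramp_value(v0: f32, t0: f64, v1: f32, t1: f64, t: f64) -> f32 {
    if t <= t0 {
        v0
    } else if t >= t1 {
        v1
    } else {
        let progress = ((t - t0) / (t1 - t0)) as f32;
        v0 + (v1 - v0) * progress
    }
}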
Connection to the audio hardware is made through the io::build_output and io::build_input functions.
- build_output instantiates the output stream.
- build_input instantiates the input stream.
A rewrite of io::build_output has been made by PR #36. This PR makes it possible for the end user to change output stream parameters like:
build_input function
A rewrite of io::build_input should be made to make the stream configuration coherent between the build_output and build_input functions. It would also be helpful if both functions used a similar type system to define them.
Hi,
First, thanks for your work on this promising project.
I was playing with your library to create some Node.js bindings (using https://napi.rs/) and after some minimal work to wrap your code, I wrote this small javascript test. However the sound rapidly starts to be very noisy (a kind of nice noise but not the kind we would expect...), with the script rapidly growing to take almost 100% of a processor:
function triggerSine(audioContext) {
const sine = audioContext.createOscillator();
sine.connect(audioContext.destination);
const now = audioContext.currentTime;
sine.start(now);
sine.stop(now + 0.03);
}
(async function() {
const audioContext = new AudioContext();
await audioContext.resume();
setInterval(() => triggerSine(audioContext), 0.05 * 1000);
}());
While I agree this may appear as a kind of "aggressive" test, this kind of pattern is quite common, e.g. for granular synthesis or for generative music.
To make sure the problem does not come from my own code, I rewrote the test in pure Rust but ran into the exact same issue:
use web_audio_api::context::{AsBaseAudioContext, AudioContext};
use web_audio_api::node::{
AudioNode, AudioScheduledSourceNode, OscillatorType, PeriodicWaveOptions,
};
use std::{thread, time};
fn trigger_sine(audio_context: &AudioContext) {
let osc = audio_context.create_oscillator();
osc.connect(&audio_context.destination());
let now = audio_context.current_time();
osc.start_at(now);
osc.stop_at(now + 0.03)
}
fn main() {
let audio_context = AudioContext::new();
// mimic setInterval
loop {
trigger_sine(&audio_context);
thread::sleep(time::Duration::from_millis(50));
}
}
As I'm a newbie with Rust, I didn't try to dig more deeply into your code, but I would be happy to help the best I can.
Cheers,
Benjamin
API surface: cargo doc --lib --open
Check for:
- pub mod layout?
- Errors and constants?
- BaseAudioContext and AsBaseAudioContext?
- AudioContext::new(AudioContextOptions) (not wrapped in an Option)
- remove Scheduler and Controller from the public interface, change traits (AudioScheduledSourceNode etc)
- _sync
- AudioContextOptions.channels -> number_of_channels
- channel_config_raw and channel_config_cloned?
- disconnect(), disconnect_from(node)
Channel configuration is constrained for some nodes (more details in the specs). I ran into this problem trying to implement the ConvolverNode api.
Currently, these constraints seem not to be implemented.
I think the most compliant implementation would be to panic when building the constrained node. Another solution would be to catch the constraints in the type system. What do you think?
I don't think u32 is a great choice, we're almost always casting it to usize anyway
See related discussion #47 (comment)
Nodes:
From the spec at https://webaudio.github.io/web-audio-api/#iirfilternode
tail-time Yes Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero output forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
Originally posted by @orottier in #27 (comment)
context.sample_rate() displays a sample rate of 384 000 Hz on my ALSA system.
This default sample rate is coming from the CPAL default output config.
Possible solution: try to build a stream with a lower sample rate before falling back to the CPAL default sample rate, which conforms to the spec: https://www.w3.org/TR/webaudio/#dom-baseaudiocontext-samplerate
I'd like to keep the newtype wrapper in place so we don't accidentally mix up a f32 sample_rate parameter with a f32 timestamp one. See related discussion #33 (comment)
For the v1.0 release, structs with public fields must be finalized since altering them would mean a breaking API change.
see full discussion at #67
Do we follow the spec literally? Or can we use the rust "ownership" semantics to achieve the goal of the spec?
Optional fields from the spec
// This specifies options for constructing a DelayNode. All members are optional;
// if not given, the node is constructed using the normal defaults.
dictionary DelayOptions : AudioNodeOptions {
double maxDelayTime = 1;
double delayTime = 0;
};
Default and optional arguments from the spec
undefined start (optional double when = 0,
optional double offset,
optional double duration);
The current implementation does not allow for zero delay. The minimum delay is one render quantum because we have split up the DelayNode up front into a DelayWriter and a DelayReader.
As discussed in #70:
The idea would be to create a dummy connection between DelayReader and DelayWriter in the DelayNode::new constructor. This connection would do almost nothing (as basically the writer does not use its output and the reader does not use its input) except guarantee the order of the processing if the delay is not in a loop. In the graph process, if the node is found in a cycle, the graph could just delete this connection and somehow flag the Reader as "in_cycle" so that the latter knows it must clamp its minimum delay to the quantum duration. (I guess that's not that far from what is described in the spec, and I don't even think this would need to be reversible, e.g. once in a cycle the node behaves like that forever even if the cycle is broken later, which is really a weird edge case I can't even imagine; users should anyway know what they are doing.)
Port a few examples/tutorials from MDN/others to a rust version and see what we run into. For example:
The current implementation of OscillatorRenderer is based on the transcendental function sin().
An alternative implementation based on a wave table may lead to:
The current implementation is non-compliant due to aliasing behavior (web audio spec).
Solution:
I already have a prototype working for the sine type.
May I pursue work in this direction?
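As an illustration of the wave-table idea (a generic sketch, not the prototype mentioned above), a sine oscillator can precompute one period and then read it back with a phase accumulator and linear interpolation:

use std::f32::consts::TAU;

// Sketch of a wavetable sine oscillator: one precomputed period, read back
// with linear interpolation between adjacent table entries.
struct WavetableSine {
    table: Vec<f32>,
    phase: f32, // current position in the table, in table samples
    step: f32,  // table samples to advance per output sample
}

impl WavetableSine {
    fn new(table_size: usize, frequency: f32, sample_rate: f32) -> Self {
        let table = (0..table_size)
            .map(|i| (TAU * i as f32 / table_size as f32).sin())
            .collect::<Vec<_>>();
        let step = frequency * table_size as f32 / sample_rate;
        Self { table, phase: 0., step }
    }

    fn next_sample(&mut self) -> f32 {
        let len = self.table.len();
        let i = self.phase as usize;
        let frac = self.phase - i as f32;
        // linear interpolation between adjacent table entries
        let out = self.table[i] * (1. - frac) + self.table[(i + 1) % len] * frac;
        self.phase = (self.phase + self.step) % len as f32;
        out
    }
}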
C = control thread, R = render thread
The registration field is dropped before the gain: AudioParam field [1].
- C: registration is dropped, which sends a FreeWhenFinished event for the Gain Processor
- C: the gain field is dropped, so a FreeWhenFinished event is sent for the Audio Param processor, which will be picked up in the next quantum
- R: the FreeWhenFinished event for the gain param is handled, but it was already removed!
Cool stuff. The fix will be simple (just ignore the case where the node is missing).
[1] https://github.com/rust-lang/rfcs/blob/master/text/1857-stabilize-drop-order.md
Originally posted by @orottier in #90 (comment)
Running the showcase example on an ALSA device, OutputBuilder::build panics:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: BackendSpecific { err: BackendSpecificError { description: "ALSA function 'snd_pcm_hw_params_set_buffer_size' failed with error 'EINVAL: Invalid argument'" } }', src/io.rs:81:10
stack backtrace:
0: rust_begin_unwind
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/std/src/panicking.rs:515:5
1: core::panicking::panic_fmt
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/panicking.rs:92:14
2: core::result::unwrap_failed
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/result.rs:1355:5
3: core::result::Result<T,E>::unwrap
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/result.rs:1037:23
4: web_audio_api::io::OutputBuilder::build
at ./src/io.rs:64:22
5: web_audio_api::context::AudioContext::new
at ./src/context.rs:293:22
6: showcase::main
at ./examples/showcase.rs:7:19
7: core::ops::function::FnOnce::call_once
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
If I set BUFFER_SIZE to 256, the program doesn't panic anymore.
But another bug appears: an assertion fails in debug mode (an underflow in release mode).
thread 'cpal_alsa_out' panicked at 'assertion failed: `(left == right)`
left: `78`,
right: `0`', src/graph.rs:117:9
I think that, due to #495, build_output_stream is not called at a fixed rate on ALSA devices.
https://www.w3.org/TR/webaudio/#PannerNode
https://www.w3.org/TR/webaudio/#Spatialization
- DistanceModelType option (good first issue)
- refDistance, maxDistance, rolloffFactor (good first issue)
- single_valued AudioRenderQuantum
- PanningModelType HRTF (lots of work)

The golden audio processing standard is that the render thread should be 100% allocation free.
This can be achieved by allocating up front and (optionally) denying any new allocations.
What I think we need is a
AudioContextCapacity {
max_nodes: usize,
max_audio_quanta_in_flight: usize,
max_control_messages: usize
}
max_nodes should be used as the capacity of the Graph fields.
max_audio_quanta_in_flight should be used as the capacity of the AllocInner.pool Vec.
The meaning of "audio quanta in flight" is a bit hard to describe, technically it is the sum of all output port counts * channel counts of the leaf nodes of the audio graph.
For example, if your graph is entirely linear (e.g. buffersource -> biquad -> gain -> speakers) and emits on 2 channels, a max of 2 AudioRenderQuantumChannel should suffice.
If your graph forms a perfectly balanced binary tree with 10 leaves having 2 outputs with 2 channels each, theoretically it could have 40 AudioRenderQuantumChannel concurrently in flight.
I realize this is not true because output buffers are not freed after each individual processor runs, todo
max_control_messages is the Control-Render communication channel capacity.
We could add a cfg(debug) assertion that no new allocations are allowed to be made, using the alloc_counter crate (or equivalent); a generic sketch of the idea follows the list below. This will probably directly show some allocator calls we need to address:
- Graph.add_node allocates Vecs for the input and output containers (this can be fixed with allocation in the control thread and an unsafe transfer to the render thread)
- Node.outgoing_edges is a smallvec now, with a stack capacity of 2 items. Todo: figure out a plan here
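A debug-only way to surface such calls (a generic sketch using only std, not tied to any particular crate) is to wrap the global allocator with a counter and assert that it stays unchanged while a quantum is processed:

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Sketch: a counting wrapper around the system allocator. In a debug build,
// the render loop could assert that the counter does not change while a
// quantum is being processed.
struct CountingAlloc;

static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn allocation_count() -> usize {
    ALLOCATIONS.load(Ordering::Relaxed)
}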
Hi everybody,
This is not an issue, just some information.
We are now working on the port of the Web Audio API under flutter, react native and Microsoft MAUI.
For that, we need a native implementation of the Web Audio API on several platforms:
We found two existing libraries which can be used:
Your library seems very interesting. We appreciate that it depends on CPAL, and that CPAL depends on OBOE.
This seems like a good design.
Unfortunately, we are not experts in Rust.
We will probably need a little help from you, to start.
Good luck with your library.
On some hardware, our test suite fails when checking the values of the output signal. We're using float_eq with ulps margins, but are now considering testing with absolute value margins.
Differences of up to 1E-7 will absolutely not be audible, so this is a margin we can stick to in tests.
See discussion by @Jerboas86 in #52 (comment)
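With the float_eq crate, such a comparison would look roughly like this sketch (exact margin still to be decided):

use float_eq::assert_float_eq;

// Sketch: compare a rendered sample against the expected value with an
// absolute margin instead of a ulps margin.
fn check_sample(rendered: f32, expected: f32) {
    // 1E-7 is far below anything audible, so it is a safe test margin
    assert_float_eq!(rendered, expected, abs <= 1e-7);
}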
Suggestions:
https://www.w3.org/TR/webaudio/#dom-baseaudiocontext-control-thread-state-slot
Before we think about also adding onStateChange handlers ( https://www.w3.org/TR/webaudio/#dom-baseaudiocontext-onstatechange ) we need to think about the architecture for event handling (probably in a separate issue)
As mentioned in #82:
To be fair I'm struggling a bit with the 'boundaries' of the Web Audio API also in other cases. For #77 I was playing around with microphone input and playback and realized I needed to have something like https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
There are a few conflicting goals:
(MediaStream trait)
I will create a separate issue for this. It's probably best to steer away from names such as MediaStream and MediaElement because, as you pointed out, these are different from the w3c specs. I do think we should include them in the library because you would need them in many applications anyway.
Originally posted by @orottier in #82 (comment)
The following processors only support scheduling per BUFFER_SIZE'd quantum; this should be improved:
e.g. in the current implementation the timestamp (start of the quantum) is evaluated:
// todo, sub-quantum start/stop
match self.scheduler.state(timestamp) {
ScheduledState::Active => (),
ScheduledState::NotStarted => {
output.make_silent();
return true; // will output in the future
}
ScheduledState::Ended => {
output.make_silent();
return false; // can clean up
}
}
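Sub-quantum scheduling would mean converting the scheduled start/stop times into a frame offset within the current block, roughly like this sketch (names assumed, not the crate's code):

// Sketch: map a scheduled start time to a frame offset inside the current
// render quantum, so rendering can begin mid-block instead of at the next
// block boundary.
fn start_offset_in_quantum(
    start_time: f64,        // scheduled start, in seconds
    quantum_timestamp: f64, // time of the first frame of this quantum
    sample_rate: f64,
    quantum_size: usize,    // e.g. 128 frames
) -> Option<usize> {
    let offset = ((start_time - quantum_timestamp) * sample_rate).ceil();
    if offset < 0. {
        Some(0) // should already have started: render the whole block
    } else if (offset as usize) < quantum_size {
        Some(offset as usize) // starts inside this block, at this frame
    } else {
        None // starts in a future quantum: output silence for now
    }
}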