mdn / webaudio-examples

Code examples that accompany the MDN Web Docs pages relating to Web Audio.

Home Page: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API

License: Creative Commons Zero v1.0 Universal

Languages: HTML 42.34%, JavaScript 36.29%, CSS 21.37%
Topics: webaudio, webaudio-api, audio, demo

webaudio-examples's Introduction

webaudio-examples

Code examples that accompany the MDN Web Audio documentation.

Serving the examples

To preview the examples, clone the repository and navigate to the example you want to view. For example, if you have Python installed, you can use the following commands to serve the audio-analyser example:

cd audio-analyser
python3 -m http.server

Note

If you're using the built-in Python HTTP server, be sure to use at least Python version 3.10.12.

Then navigate to http://localhost:8000 in your browser.

For more information on serving files locally using different languages or technologies, see Running a simple local HTTP server.

Repository contents

Audio analyser

The audio-analyser directory contains a very simple example showing a graphical visualization of an audio signal drawn with data taken from an AnalyserNode interface. Run the demo live.
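A rough sketch of the technique the example demonstrates (the variable names here, such as audioCtx and source, are assumptions rather than the demo's actual code):

// Assumed setup: audioCtx is an AudioContext and source is some connected audio source node.
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);
analyser.connect(audioCtx.destination);

const data = new Uint8Array(analyser.frequencyBinCount);

function draw() {
  requestAnimationFrame(draw);
  // Copy the current waveform into the array, then plot it on a <canvas>.
  analyser.getByteTimeDomainData(data);
}
draw();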

Audio basics

The audio-basics directory contains a fun example showing a retro-style "boombox" that allows audio to be played, stereo-panned, and volume-adjusted. Run the demo live.

Audio buffer

The audio-buffer directory contains a very simple example showing how to use an AudioBuffer interface in the Web Audio API. Run the demo live.

Audio param

The audio-param directory contains some simple examples showing how to use the methods of the Web Audio API AudioParam interface. Run example live.

Audio context states

The audiocontext-states directory contains a simple demo of the Web Audio API AudioContext methods, including the state property and the close(), resume(), and suspend() methods. Run the demo live.
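A minimal sketch of those calls (assuming audioCtx is an AudioContext you have already created, and that this runs inside an async function):

console.log(audioCtx.state);   // "suspended", "running", or "closed"
await audioCtx.suspend();      // halt audio processing without discarding the graph
await audioCtx.resume();       // continue from where it left off
await audioCtx.close();        // release the audio hardware for good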

Audio worklet

The audioworklet directory contains an example showing how to use the AudioWorklet interface. See also the guide on background audio processing using AudioWorklet. Run the example live.

Compressor example

The compressor-example directory contains a simple demo to show usage of the Web Audio API BaseAudioContext.createDynamicsCompressor() method and DynamicsCompressorNode interface. Run the example live.
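A minimal sketch of wiring a compressor into a graph (audioCtx and source are assumed to exist; the parameter values are illustrative only):

const compressor = audioCtx.createDynamicsCompressor();
compressor.threshold.value = -50; // dB level above which compression kicks in
compressor.knee.value = 40;       // dB range over which compression ramps up smoothly
compressor.ratio.value = 12;      // amount of gain reduction applied above the threshold
compressor.attack.value = 0;      // seconds to reduce gain by 10 dB
compressor.release.value = 0.25;  // seconds to increase gain by 10 dB
source.connect(compressor).connect(audioCtx.destination);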

Create media stream destination

The create-media-stream-destination directory contains a simple example showing how the Web Audio API AudioContext.createMediaStreamDestination() method can be used to output a stream, in this case feeding it to a MediaRecorder instance to record a sine wave to an Opus file. Run the demo live.

Decode audio data

The decode-audio-data directory contains a simple example demonstrating usage of the Web Audio API BaseAudioContext.decodeAudioData() method. View example live.

IIR filter node

The iirfilter-node directory contains an example showing usage of an IIRFilterNode interface. Run the demo live.

Media source buffer

The media-source-buffer directory contains a simple example demonstrating usage of the Web Audio API AudioContext.createMediaElementSource() method. View the demo live.

Multi track

The multi-track directory contains an example of connecting separate independently-playable audio tracks to a single AudioDestinationNode interface. Run the example live.

Offline audio context

The offline-audio-context directory contains a simple example to show how a Web Audio API OfflineAudioContext interface can be used to rapidly process/render audio in the background to create a buffer, which can then be used in any way you please. For more information, see https://developer.mozilla.org/en-US/docs/Web/API/OfflineAudioContext. Run example live.

Offline audio context promise

The offline-audio-context-promise directory contains a simple example to show how a Web Audio API OfflineAudioContext interface can be used to rapidly process/render audio in the background to create a buffer, which can then be used in any way you please. Run the example live.

Output timestamp

The output-timestamp directory contains an example of how the AudioContext.getOutputTimestamp() method can be used to log contextTime and performanceTime values to the console. Try the demo live.
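A minimal sketch of that logging (audioCtx is assumed to be a running AudioContext):

setInterval(() => {
  const ts = audioCtx.getOutputTimestamp();
  console.log(`contextTime: ${ts.contextTime}, performanceTime: ${ts.performanceTime}`);
}, 1000);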

Panner node

The panner-node directory contains a demo to show basic usage of the Web Audio API BaseAudioContext.createPanner() method to control audio spatialization. Run the example live.

Script processor node

The script-processor-node directory contains a simple demo showing how to use the Web Audio API's ScriptProcessorNode interface to process a loaded audio track, adding a little bit of white noise to each audio sample. See the live demo.

Spatialization

The spatialization directory contains an example of how the various properties of a PannerNode interface can be adjusted to emulate sound in a three-dimensional space. For more information see Web audio spatialization basics. Try the live demo.

Step sequencer

The step-sequencer directory contains a simple step-sequencer that loops and manipulates sounds based on a dial-up modem. For more information see Advanced techniques: creating sound, sequencing, timing, scheduling. See the live demo also.

Stereo panner node

The stereo-panner-node directory contains a simple example to show how the Web Audio API StereoPannerNode interface can be used to pan an audio stream. Run the example live.
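A minimal sketch of the idea (audioCtx and source are assumed names, not the demo's code):

const panner = audioCtx.createStereoPanner();
panner.pan.value = -1; // -1 is full left, 0 is center, 1 is full right
source.connect(panner).connect(audioCtx.destination);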

Stream source buffer

The stream-source-buffer directory contains a simple example demonstrating usage of the Web Audio API MediaStreamAudioSourceNode object. View example live.

Violent theremin

Violent theremin uses the Web Audio API to generate sound, and HTML5 canvas for a bit of pretty visualization. The colors generated depend on the pitch and gain of the current note, which are themselves dependent on the mouse pointer position.

You can view the demo live here.

Voice-change-O-matic

The voice-change-o-matic directory contains a Web Audio API-powered voice changer and visualizer.

You can view the demo live here.

webaudio-examples's People

Contributors

ahqrt, bsmth, chrisdavidmills, dctalbot, dipikabh, elchi3, grhhhh, guest271314, leokuma, leonfedotov, lgaribaldi, matt-development-work, michaelbartnett, mozilla-github-standards, mrbrain295, nbriz, olavk, ragtam, rohini-ranjanr, rumyra, sanxofon, sbungartz, shuizhongyueming, sidewayss, stephendwolff, stoyan, teoli2003, vladikoff, wahengchang, wbamberg


webaudio-examples's Issues

Uncaught (in promise) DOMException: The element has no supported sources. line 101 in index.html in audio-basics.

What information was incorrect, unhelpful, or incomplete?

The audio-basics example does not work. Uncaught (in promise) DOMException: The element has no supported sources, at line 101 in index.html in audio-basics.

What did you expect to see?

I was expecting the example to work.

Do you have any supporting links, references, or citations?

No response

Do you have anything more you want to share?

No response

Example for cleanup

Hey there.
Thanks for all the nice examples.

Could you add an example showing, for an <audio> element, how to disconnect all the Web Audio nodes so that you get back to a plain audio element, and how to hook everything up again afterwards?

[step-sequencer] The slider inputs to set the envelope of the "Sweep" sound are not working as expected

What information was incorrect, unhelpful, or incomplete?

The two sliders to set the envelope of the "Sweep" sound are not working as expected.

What did you expect to see?

There are actually 2 issues:

  1. For both inputs, the value is not updated properly. The input should update the value progressively in 0.1 steps, but it only ever ends up as 0 or 1. The problem is that the value is cast to an int in the event listener callback (a possible fix is sketched after the markup below):

const attackControl = document.querySelector("#attack");
attackControl.addEventListener(
  "input",
  (ev) => {
    attackTime = parseInt(ev.target.value, 10);
  },
  false
);

Since the values are between 0 and 1 with a step of 0.1, using parseInt is inadequate. The corresponding markup:

<section class="controls">
  <label for="attack">Att</label>
  <input
    name="attack"
    id="attack"
    type="range"
    min="0"
    max="1"
    value="0.2"
    step="0.1"
  />
  <label for="release">Rel</label>
  <input
    name="release"
    id="release"
    type="range"
    min="0"
    max="1"
    value="0.5"
    step="0.1"
  />
</section>
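A possible fix for the first problem (a sketch, not the repository's actual patch) is to parse the slider value as a float so the 0.1 steps survive:

attackControl.addEventListener(
  "input",
  (ev) => {
    attackTime = parseFloat(ev.target.value); // keeps fractional values instead of rounding down to an integer
  },
  false
);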
  2. The way the release input works is not correct. When the slider is set to the left, the release should be 0, whereas when it's set to the right, we should be able to hear the release. You can try it for yourself: with one setting of the sliders you hear a plucked sound, and with the other you hear a longer release. It should be the other way around.

The problem comes from the implementation of the envelope:

sweepEnv.gain.linearRampToValueAtTime(
  0,
  time + attackTime - releaseTime // here it shouldn't be a "-" operator
);
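A sketch of the change along the lines the reporter suggests (not necessarily the fix that was merged):

sweepEnv.gain.linearRampToValueAtTime(
  0,
  time + attackTime + releaseTime // a longer release now lengthens the tail instead of shortening it
);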

I'm currently going through the Web API guide where this step sequencer is used: https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques

Since I have identified what causes the issue in the code, I'd be happy to work on a fix and update the docs.

Do you have any supporting links, references, or citations?

No response

Do you have anything more you want to share?

No response

Some directories are missing in README.md

The "multi-track", "output-timestamp", and "spacialization" directories are missing from the README.md. I would like to write descriptions for these directories.

Make headers in README.md consistent

I realized that the headers I added for the directory descriptions in #38 are not consistent.

(For example, the header for the "audio-basics" directory is "Audio Basics", while the header for the "decode-audio-data" directory is "decodeAudioData".) I did this because some directory names are associated with a specific method, whereas others are more of a description.

For consistency, I think it makes sense for all headers to be capitalized words separated by spaces. If there is a method name associated with the directory, it will appear in the description.

Thoughts on this, @chrisdavidmills?

CODE_OF_CONDUCT.md file missing

As of January 1, 2019, Mozilla requires that all GitHub projects include this CODE_OF_CONDUCT.md file in the project root. The file has two parts:

  1. Required Text - All text under the headings Community Participation Guidelines and How to Report is required and should not be altered.
  2. Optional Text - The Project Specific Etiquette heading provides a space to speak more specifically about ways people can work effectively and inclusively together. Some examples can be found in the Firefox Debugger project and Common Voice. (The optional part is commented out in the raw template file and will not be visible until you modify and uncomment it.)

If you have any questions about this file, or Code of Conduct policies and procedures, please see Mozilla-GitHub-Standards or email [email protected].

(Message COC001)

decodeAudioData example does not work on Safari

-> https://mdn.github.io/webaudio-examples/decode-audio-data/

Does not work on Safari (Version 11.0.2 (11604.4.7.1.6) on OS X 10.11.6).

Steps to reproduce

  request.onload = function() {
    var audioData = request.response;

    audioCtx.decodeAudioData(audioData, function(buffer) {
        myBuffer = buffer;
        songLength = buffer.duration;
        source.buffer = myBuffer;
        source.playbackRate.value = playbackControl.value;
        source.connect(audioCtx.destination);
        //source.loop = true;

        loopstartControl.setAttribute('max', Math.floor(songLength));
        loopendControl.setAttribute('max', Math.floor(songLength));
      },

      function(e){"Error with decoding audio data" + e.err});   //<-- null is not an object

  }

In fact the error callback gets a null parameter.
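On current browsers the promise-based form of decodeAudioData() avoids the old two-callback signature entirely; a rough sketch using the variable names from the snippet above:

audioCtx.decodeAudioData(audioData)
  .then((buffer) => {
    source.buffer = buffer;
    source.connect(audioCtx.destination);
  })
  .catch((e) => console.error(`Error with decoding audio data: ${e}`));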

white noise live demo throws error

MDN URL

https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/createBuffer

What specific section or headline is this issue about?

white noise live demo throws error in Chrome

What information was incorrect, unhelpful, or incomplete?

The white noise demo at https://mdn.github.io/webaudio-examples/audio-buffer/ produces an error on the JS console:

Uncaught TypeError: Failed to construct 'AudioBuffer': Failed to read the 'length' property from 'AudioBufferOptions': Required member is undefined.
    at button.onclick ((index):35:22)

as a result, it plays no noise :(
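For reference, the AudioBuffer constructor requires the length and sampleRate members, so a working call looks roughly like this (a sketch, not the demo's exact code):

const buffer = new AudioBuffer({
  numberOfChannels: 2,
  length: audioCtx.sampleRate * 2, // two seconds' worth of frames
  sampleRate: audioCtx.sampleRate,
});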

What did you expect to see?

no error, white noise played

Do you have any supporting links, references, or citations?

Chrome: Version 105.0.5195.52 (Official Build) beta (64-bit)
Windows 11 build : 22000.856

Do you have anything more you want to share?

No response


change the output on mobile device

Hello,

I have the following question: when I open the "Voice-change-O-matic" demo on an iPhone, the output is routed only to the phone's earpiece speaker, but I would like the output to go to the normal loudspeaker. The problem seems to occur only when the microphone is also enabled in the application. Is there a solution for this?

Thanks

Greetings Henri

Multi track example could have mixing capabilities

The Multi track example looks like it has potential for some mixing board features to be added. I'm looking for an example to contribute to, so I would be happy to add some functionality here if it would be helpful.

Some possible additions for the individual tracks:

  1. Volume faders
  2. Solo buttons
  3. Bypass buttons


The mobile-responsiveness aspect makes it somewhat tricky to fit everything, though.

Perhaps it's best to keep it simple and start with just the volume faders and then add new issues for the additional features once this is complete? I can refer to the Boombox example for how to implement the faders using the same styling, functionality and responsiveness settings.

I think it makes sense to shrink the text and/or play button on page resize so the faders will all fit (mostly needed for the 'Lead Guitar' and 'Bass Guitar' tracks).

[step-sequencer] The slider input to set the DTMF rate is not working as expected

What information was incorrect, unhelpful, or incomplete?

This input doesn't behave as it should.

What did you expect to see?

When the slider is set below 1, we don't hear anything anymore.

The problem is that the value is cast to an int in the event listener callback (this also happens in #117).

rateControl.addEventListener(
  "input",
  (ev) => {
    playbackRate = parseInt(ev.target.value, 10);
  },
  false
);

It was written to have a 0.1 step:

<section class="controls">
  <label for="rate">Rate</label>
  <input
    name="rate"
    id="rate"
    type="range"
    min="0.1"
    max="2"
    value="1"
    step="0.1"
  />
</section>

Since I have identified what causes the issue in the code, I'd be happy to work on a fix and update the docs.

Do you have any supporting links, references, or citations?

No response

Do you have anything more you want to share?

No response

Chrome: missing audioCtx.resume()

I've noticed a problem here: https://mdn.github.io/webaudio-examples/audio-buffer/

When I click the button, nothing happens (using Chrome). I see this error message:
The AudioContext was not allowed to start. It must be resumed (or created) after a user gesture on the page. https://goo.gl/7K7WLu.

I think the click handler should start with audioCtx.resume(), at:

button.onclick = function() {
  // Fill the buffer with white noise;
  // just random values between -1.0 and 1.0
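A sketch of the suggested change (the handler body is assumed from the demo above):

button.onclick = function () {
  // Autoplay policies start the context in the "suspended" state until a user gesture,
  // so resume it before filling and playing the white-noise buffer.
  audioCtx.resume().then(() => {
    // ...fill the buffer with white noise and play it, as the demo already does...
  });
};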

audio-basics fails out of the box

pushing the play button results in:

index.html:81 Uncaught (in promise) DOMException: The element has no supported sources.

here:

audioElement.play();

Bug: Step-Sequencer Noise Duration 0 causes uncaught DOM exception

What information was incorrect, unhelpful, or incomplete?

TLDR: Noise Dur slider positions left of the midpoint break the step sequencer app

Noise Dur slider positions left of the midpoint cause the noiseBuffer AudioBuffer to be initialized with a length of 0, resulting in the following error:

Uncaught DOMException: Failed to construct 'AudioBuffer': The number of frames provided (0) is less than or equal to the minimum bound (0).
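One way to avoid the exception (a sketch; noiseDuration stands in for whatever duration variable the demo actually uses) is to guarantee the buffer has at least one frame:

const bufferSize = Math.max(1, Math.floor(audioCtx.sampleRate * noiseDuration));
const noiseBuffer = new AudioBuffer({
  length: bufferSize,              // at least one frame, even when the slider value rounds to 0
  sampleRate: audioCtx.sampleRate,
});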

What did you expect to see?

Non-breaking functionality of the Noise Dur slider in the Step Sequencer

Do you have any supporting links, references, or citations?

No response

Do you have anything more you want to share?

No response

Voice-change-O-matic does not work on Safari iOS

Using an iPhone 14 Pro
iOS version: 16.1.2

I've attached a screenshot from the console output from the iPhone.

I'm also attaching a screen recording to show what I did

RPReplay_Final1672008158.mov

python3 -m http.server returns wrong MIME type for javascript files causing samples not to work

In README.md it's suggested to run the samples with:

python3 -m http.server

However, this server returns the text/html MIME type for .js and .css files. Therefore, on Chrome the audioworklet sample does not work.

I now use this Python script, which serves the correct MIME types for JavaScript and CSS files:

try:
    from http import server # Python 3
except ImportError:
    import SimpleHTTPServer as server # Python 2

if __name__ == '__main__':
    server.SimpleHTTPRequestHandler.extensions_map['.js'] = 'application/javascript'
    server.SimpleHTTPRequestHandler.extensions_map['.css'] = 'text/css'
    server.test(port=8114, HandlerClass=server.SimpleHTTPRequestHandler, protocol="HTTP/1.1")

How to preserve an audio's pitch after changing AudioBufferSourceNode.playbackRate?

I changed AudioBufferSourceNode.playbackRate using the code here https://github.com/mdn/webaudio-examples/blob/master/offline-audio-context-promise/index.html but found out that the pitch was not preserved.
I looked up the documentation and found out that there is no pitch correction when using AudioBufferSourceNode.playbackRate.

If we change an audio's playbackRate using an HTML <audio> element, the pitch is preserved. Is there a way I can obtain the same effect?

https://developer.mozilla.org/en-US/docs/Web/API/AudioBufferSourceNode

audioCtx.decodeAudioData(audioData, function(buffer) {
  myBuffer = buffer;
  source.buffer = myBuffer;
  source.connect(offlineCtx.destination);
  source.playbackRate = 1.5; // <<<<<<<<< added this to change playbackRate
  source.start();
  //source.loop = true;
  offlineCtx.startRendering().then(function(renderedBuffer) {
    console.log('Rendering completed successfully');

    let song = audioCtx.createBufferSource();
    song.buffer = renderedBuffer;

    song.connect(audioCtx.destination);

    play.onclick = function() {
      song.start();
    }
  }).catch(function(err) {
    console.log('Rendering failed: ' + err);
    // Note: The promise should reject when startRendering is called a second time on an OfflineAudioContext
  });
});

The element has no supported sources.

The audio-basics example stops at line 109: audioElement.play();
Chrome's DevTools states: The element has no supported sources.
Something "basic" is missing. What could it be?
