
wc-talk's People

Contributors

beaufortfrancois, chcunningham, padenot, ytakio


wc-talk's Issues

Demo throws

FYI your live demo throws errors:


audio_renderer.js:42 DOMException: Unsupported configuration. Check isConfigSupported() prior to calling configure().
error @ audio_renderer.js:42
simple_video_player.html:1 Uncaught DOMException: Failed to execute 'configure' on 'VideoDecoder': H.264 decoding is not supported.
    at VideoRenderer.initialize (https://wc-talk.netlify.app/video_renderer.js:37:18)
    at async Promise.all (index 1)
    at async https://wc-talk.netlify.app/simple_video_player.html:43:3
audio_renderer.js:211 Uncaught (in promise) DOMException: Failed to execute 'decode' on 'AudioDecoder': Cannot call 'decode' on a closed codec.
    at AudioRenderer.fillDataBufferInternal (https://wc-talk.netlify.app/audio_renderer.js:211:20)
    at async AudioRenderer.fillDataBuffer (https://wc-talk.netlify.app/audio_renderer.js:166:5)
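For what it's worth, the first two failures are configure() being rejected because the codec configuration is unsupported on that browser/platform, and the "closed codec" decode() error looks like fallout from the failed configure(). A minimal guard, assuming a hypothetical config and decoder along the lines of what the player sets up (not the demo's actual variables):

  // Hypothetical H.264 config, used here only for illustration.
  const config = { codec: 'avc1.64001f', codedWidth: 1280, codedHeight: 720 };
  const { supported } = await VideoDecoder.isConfigSupported(config);
  if (supported) {
    decoder.configure(config);   // decoder: a previously constructed VideoDecoder
  } else {
    // Surface the problem instead of letting configure()/decode() throw on a closed codec.
    console.warn('H.264 decoding is not supported here.');
  }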

Example on PCM data

Hello, thank you for the examples regarding the WebCodecs APIs.

I am trying to construct an AudioData object from raw PCM data (retrieved from a WAV file). I have the following fields, parsed from the header of the WAV file:

audioFormat: 1
bitsPerSample: 16
blockAlign: 4
byteRate: 176400
cbSize: 0
chunkId: "fmt "
chunkSize: 16
dwChannelMask: 0
headBitRate: 0
headEmphasis: 0
headFlags: 0
headLayer: 0
headMode: 0
headModeExt: 0
numChannels: 2
ptsHigh: 0
ptsLow: 0
sampleRate: 44100

I also have the actual linear PCM data in an ArrayBuffer.

How do I construct the AudioData object from the ArrayBuffer so that it can be passed for encoding with the "opus" codec?

const chunk = new AudioData({
  data: data,
  timestamp: 0,
  format: 'f32', // not sure what the format is
  numberOfChannels: 2,
  numberOfFrames: 1024, // again this number is arbitrary, do not know what to put here
  sampleRate: 44100,
})

How do I turn the rest of the ArrayBuffer into AudioData objects? data.byteLength is 10406468.

AudioEncoder config:

{
  codec: 'opus',
  numberOfChannels: 2,
  sampleRate: 44100
}
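For context, a sketch of what I think this could look like, assuming the samples are interleaved signed 16-bit (bitsPerSample 16, blockAlign 4), which should map to the 's16' AudioData format, and assuming encoder is an AudioEncoder already configured with the opus config above (untested, and the 1024-frame chunk size is arbitrary):

  const bytesPerFrame = 4;                    // blockAlign: 2 channels * 2 bytes per sample
  const sampleRate = 44100;
  const framesPerChunk = 1024;                // arbitrary chunk size
  const totalFrames = data.byteLength / bytesPerFrame;

  for (let frame = 0; frame < totalFrames; frame += framesPerChunk) {
    const frames = Math.min(framesPerChunk, totalFrames - frame);
    encoder.encode(new AudioData({
      format: 's16',                          // interleaved signed 16-bit PCM
      sampleRate,
      numberOfChannels: 2,
      numberOfFrames: frames,
      timestamp: Math.round((frame / sampleRate) * 1e6), // microseconds
      data: new Int16Array(data, frame * bytesPerFrame, frames * 2),
    }));
  }
  await encoder.flush();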

Thanks.

Transition renderers to DedicatedWorkers

Currently both AudioRenderer and VideoRenderer live on the main thread. Moving them to workers is generally a best practice. AudioRenderer in particular is very sensitive to main-thread contention, as it tries to keep the PCM buffer full.
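A rough sketch of what the video side of that could look like, using an OffscreenCanvas so the decoder output can be painted entirely inside the worker (the file name and message shape below are hypothetical, not the repo's current code):

  // Main thread: hand the canvas off to a dedicated worker.
  const worker = new Worker('video_renderer_worker.js', { type: 'module' });
  const offscreen = document.querySelector('canvas').transferControlToOffscreen();
  worker.postMessage({ command: 'initialize', canvas: offscreen }, [offscreen]);

  // video_renderer_worker.js (hypothetical): decoding and painting stay off the main thread.
  self.onmessage = ({ data }) => {
    if (data.command === 'initialize') {
      const ctx = data.canvas.getContext('2d');
      // Construct the VideoDecoder here; its output callback can ctx.drawImage(frame, 0, 0)
      // and frame.close() without ever touching the main thread.
    }
  };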

Bug in AudioEncoder/EncodedAudioChunk implementation

The AudioEncoder output callback emits a 1st EncodedAudioChunk with byteLength 8 and duration 60000, then a 2nd EncodedAudioChunk with byteLength 409 and duration 60000 (44100 Hz input sample rate).

That does not appear to be correct. How can an EncodedAudioChunk with byteLength 8 have the same duration as one with byteLength 409?

When Media Source Extensions is used for playback, this winds up losing 1 second of playback: the WAV file is 1:35 long, but HTMLMediaElement currentTime only reaches 1:34.

Resampling the input to 48000 Hz changes neither the 1st EncodedAudioChunk's byteLength nor the fact that playback ends 1 second early.
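A quick way to see where the second goes is to total the durations the encoder reports and compare them with the length of the source PCM (a hypothetical diagnostic on top of the repro below; chunks and data are the array and Int16Array from that snippet):

  const encodedUs = chunks.reduce((sum, c) => sum + c.duration, 0); // chunk durations are in microseconds
  const sourceSecs = (data.length / 2) / 44100;                     // interleaved stereo at 44100 Hz
  console.log('encoded:', encodedUs / 1e6, 's; source:', sourceSecs, 's');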

<!DOCTYPE html>
<html>
  <head></head>
  <body>
    <input type="file" accept=".wav" />
    <script type="module">

      document.querySelector('input[type=file]').onchange = async (e) => {
        // Skip the 44-byte WAV header; the rest is interleaved 16-bit stereo PCM.
        const data = new Int16Array(
          await e.target.files[0].arrayBuffer(), 44
        );

        let config = {
          numberOfChannels: 2,
          sampleRate: 44100,
          codec: 'opus',
        };

        const chunks = [];
        const encoder = new AudioEncoder({
          error(e) {
            console.log(e);
          },
          output: async (chunk, metadata) => {
            if (metadata.decoderConfig) {
              config.description = metadata.decoderConfig.description;
            }
            chunks.push(chunk);
          },
        });
        console.log(await AudioEncoder.isConfigSupported(config));
        encoder.configure(config);
        // Feed the PCM to the encoder ('s16' = interleaved signed 16-bit) before flushing.
        encoder.encode(new AudioData({
          format: 's16',
          sampleRate: 44100,
          numberOfChannels: 2,
          numberOfFrames: data.length / 2,
          timestamp: 0,
          data,
        }));
        await encoder.flush();

        const audio = new Audio();
        audio.controls = audio.autoplay = true;
        const events = [
          'loadedmetadata',
          'loadeddata',
          'canplay',
          'canplaythrough',
          'play',
          'playing',
          'pause',
          'waiting',
          'progress',
          'seeking',
          'seeked',
          'ended',
          'stalled',
          'timeupdate',
        ];
        for (const event of events) {
          audio.addEventListener(event, async (e) => {
            if (e.type === 'timeupdate') {
              // Keep the source buffer's timestampOffset in step with playback.
              if (!ms.activeSourceBuffers[0].updating) {
                ms.activeSourceBuffers[0].timestampOffset = audio.currentTime;
              }
            }
          });
        }
        document.body.appendChild(audio);

        const ms = new MediaSource();
        ms.addEventListener('sourceopen', async (e) => {
          console.log(e.type, config);
          URL.revokeObjectURL(audio.src);
          // addSourceBuffer() with a WebCodecs config and appendEncodedChunks() come from
          // the experimental Media Source Extensions for WebCodecs proposal.
          const sourceBuffer = ms.addSourceBuffer({
            audioConfig: config,
          });

          sourceBuffer.onupdate = (e) => console.log(e.type);

          sourceBuffer.mode = 'sequence';
          for (const chunk of chunks) {
            await sourceBuffer.appendEncodedChunks(chunk);
          }
        });
        audio.src = URL.createObjectURL(ms);
      };
    </script>
  </body>
</html>

Screenshot_2022-07-30_17-53-48

Screenshot_2022-07-30_18-00-48

blade_runner.wav.zip

Audio Capability Help

We have a discussion about compositing MP4 files under the mp4box.js project. It is almost working, but we are having trouble with the audio part. Through this project we know that you are experts on the audio side, and we would like to ask for your help. Thanks!

@padenot @chcunningham

gpac/mp4box.js#243
