An experimental HTML5 & WebGL video composition and rendering API.

Home Page: http://bbc.github.io/VideoContext/

License: Apache License 2.0


VideoContext


The VideoContext is an experimental HTML5/WebGL media processing and sequencing library for creating interactive and responsive videos on the web.

It consists of two main components: a graph-based, shader-accelerated processing pipeline, and a media playback sequencing timeline.

The design is heavily inspired by the Web Audio API, so it should feel familiar for people with experience in the Web Audio world.

Live examples can be found here

Demo

View on CodeSandbox.

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <script type="text/javascript" src="../dist/videocontext.js"></script>
</head>
<body>
    <!--
        A canvas needs width and height attributes to define how many pixels you can draw onto it.
        Its CSS width and height define the space it takes up on screen.
        If omitted, the canvas dimensions default to 300x150 and your videos will not be rendered at
        their optimal definition.
        https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html
    -->
    <canvas id="canvas" width="1280" height="720" style="width: 852px; height: 480px"></canvas>

    <script type="text/javascript">
        window.onload = function(){
            var canvas = document.getElementById("canvas");

            var videoCtx = new VideoContext(canvas);
            var videoNode1 = videoCtx.video("./video1.mp4");
            videoNode1.start(0);
            videoNode1.stop(4);

            var videoNode2 = videoCtx.video("./video2.mp4");
            videoNode2.start(2);
            videoNode2.stop(6);

            var crossFade = videoCtx.transition(VideoContext.DEFINITIONS.CROSSFADE);
            crossFade.transition(2,4,0.0,1.0, "mix");

            videoNode1.connect(crossFade);
            videoNode2.connect(crossFade);
            crossFade.connect(videoCtx.destination);


            videoCtx.play();
        };
    </script>
</body>
</html>

Graph and timeline view

Debugging

If you need to debug VideoContext graphs or get a better insight into what is happening under the hood, there's a browser extension for Chrome: videocontext-devtools

Debugging view

Documentation

API Documentation can be built using ESDoc by running the following commands:

yarn install
yarn doc

The documentation will be generated in the "./doc" folder of the repository.

Node Types

There are a number of different types of nodes which can be used in the VideoContext's processing graph. Here's a quick list of each one. Following that is a more in-depth discussion of each type.

  • VideoNode - Plays video.
  • AudioNode - Plays audio.
  • ImageNode - Displays an image for a specified time.
  • CanvasNode - Displays the output of a canvas for a specified time.
  • EffectNode - Applies a shader to a fixed number of inputs.
  • TransitionNode - Applies a shader to a fixed number of inputs and modifies its properties at specific times.
  • CompositingNode - Applies the same shader to an unlimited number of inputs, rendering them to the same output.
  • DestinationNode - Represents the output canvas. There can only be one.

VideoNode

A video source node.

View on CodeSandbox.

var videoNode = videoCtx.video("./video1.mp4");
videoNode.connect(videoCtx.destination);
videoNode.start(0);
videoNode.stop(4);

For best results, the video played by a VideoNode should be encoded with a fast-decode profile. The following avconv command shows how this can be achieved:

avconv -i input.mp4 -tune fastdecode -strict experimental output.mp4
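avconv comes from the libav fork, and most distributions now ship FFmpeg instead. Assuming a reasonably recent FFmpeg build with libx264, the equivalent command should be:

ffmpeg -i input.mp4 -tune fastdecode -strict experimental output.mp4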

AudioNode

An audio source node.

View on CodeSandbox.

var audioNode = videoCtx.audio("./audio.mp3");
audioNode.connect(videoCtx.destination);
audioNode.start(0);
audioNode.stop(4);

ImageNode

An image source node.

View on CodeSandbox.

var imageNode = videoCtx.image("cats.png");
imageNode.connect(videoCtx.destination);
imageNode.start(0);
imageNode.stop(4);

CanvasNode

A canvas source node.

View on CodeSandbox.

var canvas = document.getElementById("input-canvas");
var canvasNode = videoCtx.canvas(canvas);
canvasNode.connect(videoCtx.destination);
canvasNode.start(0);
canvasNode.stop(4);

CustomSourceNode

Sometimes the pre-built nodes are just not enough. You can create your own source nodes that carry more logic and hook into the VideoContext node API easily.

The following example shows how to create a custom source node that can play an HLS VOD stream:

View on CodeSandbox

import Hls from "hls.js";

class HLSNode extends VideoContext.NODES.VideoNode {
  constructor(src, gl, renderGraph, currentTime, playbackRate, sourceOffset, preloadTime, hlsOptions = {}) {
    //Create a video element.
    const video = document.createElement("video");

    super(video, gl, renderGraph, currentTime, playbackRate, sourceOffset, preloadTime);

    //Create a HLS object.
    this.hls = new Hls(hlsOptions);

    //Bind the video element.
    this.hls.attachMedia(video);

    //Set the source path.
    this._src = src;

    this._displayName = "HLSNode";
    this._elementType = "hls";
  }

  _load() {
    //Load the video source on first load.
    if (!this._loadTriggered) {
      this.hls.loadSource(this._src);
    }
    super._load();
  }

  destroy() {
    if (this.hls) {
      this.hls.destroy();
    }
    super.destroy();
  }
}

//Setup the video context.
const canvas = document.getElementById("canvas");
const ctx = new VideoContext(canvas);

//Create a custom HLS source node and play it for 60 seconds.
const hlsNode = ctx.customSourceNode(HLSNode, "https://video-dev.github.io/streams/x36xhzz/x36xhzz.m3u8");
hlsNode.start(0);
hlsNode.stop(60);

//Set-up the processing chain.
hlsNode.connect(ctx.destination);

//start playback.
ctx.play();

Another use case for custom node types is playing GIFs. The custom node would be in charge of decoding the GIF frames and painting them onto a canvas in response to the _update calls from the VideoContext.
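As a rough sketch (this is not part of the library), such a node might extend CanvasNode and repaint an offscreen canvas on each update. Here decodeGif is a hypothetical helper that resolves to an array of pre-decoded frames, and the constructor arguments follow the same pattern as the HLS example above; check the CanvasNode source for the exact signature:

class GIFNode extends VideoContext.NODES.CanvasNode {
    constructor(src, gl, renderGraph, currentTime) {
        //Back the node with an offscreen canvas that is repainted on each update.
        const canvas = document.createElement("canvas");
        super(canvas, gl, renderGraph, currentTime);
        this._canvasCtx = canvas.getContext("2d");
        this._frames = [];
        //decodeGif is a hypothetical helper resolving to [{imageData, delayMs}, ...].
        decodeGif(src).then(frames => {
            canvas.width = frames[0].imageData.width;
            canvas.height = frames[0].imageData.height;
            this._frames = frames;
        });
        this._displayName = "GIFNode";
    }

    _update(currentTime) {
        //Pick the frame for the node-relative time and paint it onto the canvas.
        if (this._frames.length > 0) {
            const total = this._frames.reduce((sum, f) => sum + f.delayMs, 0);
            let t = ((currentTime - this._startTime) * 1000) % total;
            let frame = this._frames[0];
            for (const f of this._frames) {
                if (t < f.delayMs) { frame = f; break; }
                t -= f.delayMs;
            }
            this._canvasCtx.putImageData(frame.imageData, 0, 0);
        }
        return super._update(currentTime);
    }
}

//Hypothetical usage, mirroring the HLS example:
const gifNode = ctx.customSourceNode(GIFNode, "loop.gif");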

EffectNode

An EffectNode is the simplest form of processing node. It's built from a definition object, which is a combination of fragment shader code, vertex shader code, input descriptions, and property descriptions. There are a number of common operations available as node descriptions accessible as static properties on the VideoContext at VideoContext.DEFINITIONS.

The vertex and fragment shader code is GLSL code which gets compiled to produce the shader program. The input description tells the VideoContext how many ports there are to connect to and the name of the image associated with each port within the shader code. Inputs are always renderable textures (i.e. images, videos, canvases). The property descriptions tell the VideoContext what controls to attach to the EffectNode and the name, type, and default value of each control within the shader code.

The following is an example of a simple shader description used to describe a monochrome effect. It has one input (the image to be processed) and two modifiable properties to control the color RGB mix for the processing result.

View on CodeSandbox.

var monochromeDescription = {
    title:"Monochrome",
    description: "Change images to a single chroma (e.g can be used to make a black & white filter). Input color mix and output color mix can be adjusted.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image;
        uniform vec3 inputMix;
        uniform vec3 outputMix;
        varying vec2 v_texCoord;
        varying float v_mix;
        void main(){
            vec4 color = texture2D(u_image, v_texCoord);
            float mono = color[0]*inputMix[0] + color[1]*inputMix[1] + color[2]*inputMix[2];
            color[0] = mono * outputMix[0];
            color[1] = mono * outputMix[1];
            color[2] = mono * outputMix[2];
            gl_FragColor = color;
        }`,
    properties:{
        "inputMix":{type:"uniform", value:[0.4,0.6,0.2]},
        "outputMix":{type:"uniform", value:[1.0,1.0,1.0]}
    },
    inputs:["u_image"]
};

Here's an example of how the above node description might be used to apply a sepia-like effect to a video.

//Setup the video context.
var canvas = document.getElementById("canvas");
var ctx = new VideoContext(canvas);

//Create a video node and play it for 60 seconds.
var videoNode = ctx.video("./video.mp4");
videoNode.start(0);
videoNode.stop(60);

//Create the sepia effect node (from the above Monochrome effect description).
var sepiaEffect = ctx.effect(monochromeDescription);

//Give a sepia tint to the monochrome output (note how shader description properties are automatically bound to the JavaScript object).
sepiaEffect.outputMix = [1.25,1.18,0.9];

//Set-up the processing chain.
videoNode.connect(sepiaEffect);
sepiaEffect.connect(ctx.destination);

//start playback.
ctx.play();

TransitionNode

Transition nodes are a type of effect node which allow the automatic modification/tweening of properties in relation to the VideoContext's notion of time. In every other respect they are the same as an effect node, except they have a "transition" function which can be used to cue the transitioning of a shader property from one value to another.

You can use them to perform a video transition effect (such as cross-fades, wipes, etc) by creating a definition with two inputs and having a property which controls the mix of the two inputs in the output buffer.

The following is an example of a simple cross-fade shader.

View on CodeSandbox.

var crossfadeDescription = {
    title:"Cross-Fade",
    description: "A cross-fade effect. Typically used as a transistion.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image_a;
        uniform sampler2D u_image_b;
        uniform float mix;
        varying vec2 v_texCoord;
        varying float v_mix;
        void main(){
            vec4 color_a = texture2D(u_image_a, v_texCoord);
            vec4 color_b = texture2D(u_image_b, v_texCoord);
            color_a[0] *= mix;
            color_a[1] *= mix;
            color_a[2] *= mix;
            color_a[3] *= mix;
            color_b[0] *= (1.0 - mix);
            color_b[1] *= (1.0 - mix);
            color_b[2] *= (1.0 - mix);
            color_b[3] *= (1.0 - mix);
            gl_FragColor = color_a + color_b;
        }`,
    properties:{
        "mix":{type:"uniform", value:0.0}
    },
    inputs:["u_image_a","u_image_b"]

The shader has two inputs and a mix property.

//Setup the video context.
var canvas = document.getElementById("canvas");
var ctx = new VideoContext(canvas);

//Create a video node that plays for 10 seconds from time=0.
var videoNode1 = ctx.video("./video1.mp4");
videoNode1.start(0);
videoNode1.stop(10);

//Create a video node that plays for 10 seconds from time=8, overlapping videoNode1 by two seconds.
var videoNode2 = ctx.video("./video2.mp4");
videoNode2.start(8);
videoNode2.stop(18);

//Create the cross-fade transition node (from the above cross-fade description).
var crossfadeEffect = ctx.transition(crossfadeDescription);

//Setup the transition. This will change the "mix" property of the cross-fade node from 0.0 to 1.0,
//starting at time=8 and ending at time=10 (a period of 2 seconds).
crossfadeEffect.transition(8.0, 10.0, 0.0, 1.0, "mix");


//Set-up the processing chain.
videoNode1.connect(crossfadeEffect); //this will connect videoNode1 to the "image_a" input of the processing node
videoNode2.connect(crossfadeEffect); //this will connect videoNode2 to the "image_b" input of the processing node


// NOTE: There are multiple ways to connect a node to a specific input of a processing node; the
// following are all equivalent.
//
// Default behavior:
// videoNode1.connect(crossfadeEffect);
// videoNode2.connect(crossfadeEffect);
//
// By named input port:
// videoNode1.connect(crossfadeEffect, "image_a");
// videoNode2.connect(crossfadeEffect, "image_b");
//
// By input port index:
// videoNode1.connect(crossfadeEffect, 0);
// videoNode2.connect(crossfadeEffect, 1);


crossfadeEffect.connect(ctx.destination);

//start playback.
ctx.play();

CompositingNode

Compositing nodes are different from regular effect nodes as they can have an infinite number of nodes connected to them. They operate by running their effect shader on each connected input in turn and rendering the output to the same texture. This makes them particularly suitable for layering inputs which have alpha channels.

When compositing nodes are run, they map each input in turn to the first input in the definition. This means compositing node definitions typically only have a single input defined. It's also worth noting that an effect node definition with a single input can also be used as a compositing shader with no additional modifications.

A common use for compositing nodes is to collect a series of source nodes which exist at distinct points on a timeline into a single connection for passing onto further processing. This effectively makes the sources into a single video track.

Here's a really simple shader which renders all the inputs to the same output.

View on CodeSandbox.

var combineDescription = {
    title:"Combine",
    description: "A basic effect which renders the input to the output. Typically used as a combine node for layering up media with alpha transparency.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image;
        varying vec2 v_texCoord;
        varying float v_mix;
        void main(){
            vec4 color = texture2D(u_image, v_texCoord);
            gl_FragColor = color;
        }`,
    properties:{
    },
    inputs:["u_image"]
};

And here's an example of how it can be used.

//Setup the video context.
var canvas = document.getElementById("canvas");
var ctx = new VideoContext(canvas);

//Create a video node that plays for 10 seconds from time=0.
var videoNode1 = ctx.video("./video1.mp4");
videoNode1.start(0);
videoNode1.stop(10);

//Create a video node that plays for 5 seconds from time=10.
var videoNode2 = ctx.video("./video2.mp4");
videoNode2.start(10);
videoNode2.stop(15);

//Create a video node that plays for 12 seconds from time=15.
var videoNode3 = ctx.video("./video3.mp4");
videoNode3.start(15);
videoNode3.stop(27);

//Create the combine compositing node (from the above Combine effect description).
var combineEffect = ctx.compositor(combineDescription);

//Connect all the videos to the combine effect, collecting them into a single point which can be connected to further nodes in the graph (making something logically equivalent to a track).
videoNode1.connect(combineEffect);
videoNode2.connect(combineEffect);
videoNode3.connect(combineEffect);

//Connect the combined output to the destination.
combineEffect.connect(ctx.destination);

//start playback.
ctx.play();

Writing Custom Effect Definitions

Making custom effect shaders for the VideoContext is fairly simple. The best starting point is to take one of the built in effects and modify it. It's very useful to have an understanding of how shaders work and some experience writing shaders in GLSL.

var effectDefinition ={
    title:"",               //A title for the effect.
    description: "",        //A textual description of what the effect does.
    vertexShader : "",      //The vertex shader
    fragmentShader : "",    //The fragment shader
    properties:{            //An object containing uniforms from the fragment shader for mapping onto the effect node.
    },
    inputs:["u_image"]      //the names of the uniform sampler2D's in the fragment shader which represent the texture inputs to the effect.
};
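As a worked example, here is a minimal definition that inverts the colors of its input. It is not one of the built-in DEFINITIONS; it simply follows the same shape as the monochrome example above:

var invertDescription = {
    title:"Invert",
    description: "Inverts the RGB channels of the input.",
    vertexShader : `
        attribute vec2 a_position;
        attribute vec2 a_texCoord;
        varying vec2 v_texCoord;
        void main() {
            gl_Position = vec4(vec2(2.0,2.0)*a_position-vec2(1.0, 1.0), 0.0, 1.0);
            v_texCoord = a_texCoord;
        }`,
    fragmentShader : `
        precision mediump float;
        uniform sampler2D u_image;
        varying vec2 v_texCoord;
        void main(){
            vec4 color = texture2D(u_image, v_texCoord);
            gl_FragColor = vec4(1.0 - color.rgb, color.a);
        }`,
    properties:{},
    inputs:["u_image"]
};

//Use it like any other effect:
var invertEffect = ctx.effect(invertDescription);
videoNode.connect(invertEffect);
invertEffect.connect(ctx.destination);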

Advanced Examples

You can view more advanced usage examples here.

Development

VideoContext has a pretty standard package.json:

# install build and development dependencies
yarn install

# run a dev server with automatic reload
yarn dev

# watch unit and integration tests
yarn test-watch

# run the end-to-end regression tests in a headless browser
yarn cypress

For more information on writing, running and debugging the end-to-end Cypress tests, see ./test/cypress#readme.

For an overview of all testing, see ./test#readme.

Gitflow

VideoContext uses the gitflow branching model. To contribute, raise a pull request against the develop branch.

Releases

Releases are prepared in release branches. When the release is ready, run one of:

yarn release:major
yarn release:minor
yarn release:patch

These scripts build and commit the docs and the changelog, update the package.json version number, and push to the current branch with tags.

CI will publish to npm when the release branch has been merged into master.

Release step-by-step

  1. git checkout develop
  2. git pull
  3. git checkout -b release-xxx
  4. tag and push using script
    • yarn release:patch|minor|major
  5. open pull request against master
  6. merge when tests have passed
  7. merge master back into develop:
    • git checkout master
    • git pull
    • git checkout develop
    • git merge master
    • git push

There is one housekeeping task (this will be automated at some point):

  1. update the CodeSandbox examples to use the latest release

CI

VideoContext uses the BBC's public Travis account to run all tests and publish to npm. All tests must pass before PRs can be merged.

Other options

yarn build     # build dist packages
yarn doc       # create documentation
yarn build_all # do all of the above

The library is written in ES6 and cross-compiled using Babel.


Issues

VideoContext state description error in API

http://bbc.github.io/VideoContext/api/VideoContext.html
The docs describe VideoContext.STATE.PLAYING as "all sources are active"; I think "all sources" should be changed to "current sources", which is more precise. I set a valid URL from 0s to 10s and then a broken URL from 15s to 20s, and VideoContext.state becomes STALLED after playing past 15s, so it reflects the state of the current source.
I would also like to know whether there is a marker or status that tells me the VideoContext has no sources (empty content) between 10s and 15s.

The frame does not change after setting currentTime

The frame does not change when the VideoContext is paused and currentTime is set. For example, the VideoContext is paused at 5s and the frame is frozen at 5s; I then change currentTime to 10s, but the frame still shows 5s.
Is that expected? Or what do I need to do to update the picture?

Expose the duration of the videoNode

Hi,

I need to add 2 clips in a row on a timeline. But I don't know in advance the durations.

videoNode1 = videoCtx.video(url1);
videoNode1.startAt(0);
videoNode1.stopAt(???); // video1FullDuration
videoNode1.connect(videoCtx.destination);
videoNode2 = videoCtx.video(url2);
videoNode2.startAt(???); // video1FullDuration
videoNode2.stopAt(???); // video1FullDuration + video2FullDuration
videoNode2.connect(videoCtx.destination);

How can I query a VideoNode's full duration (ideally even when it is not connected to any context)?
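One workaround sketch (VideoContext itself does not expose this): probe the file with a throwaway <video> element, which is plain DOM and independent of the library, and schedule the nodes once its metadata has loaded:

var probe = document.createElement("video");
probe.preload = "metadata";
probe.src = url1;
probe.addEventListener("loadedmetadata", function () {
    var video1FullDuration = probe.duration;
    videoNode1.startAt(0);
    videoNode1.stopAt(video1FullDuration);
    //...probe url2 the same way and schedule videoNode2 relative to video1FullDuration
});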

Negative PlaybackRate

A negative playbackRate should either play the video in reverse or be rejected as an invalid value.

fast rendering?

I wrote my own timeline, which has a cursor to move the playhead. When the cursor changes, I call ctx.currentTime = newTime. I have enabled visualiseVideoContextTimeline and can see that the cursor in the visualization matches my own, so the sync is good. What lags behind is the video. Some lag is expected on the web, yet the video basically only changes when I stop moving the cursor or slow my cursor movement drastically. I'm running on a local machine, so there is no bandwidth limit for me at the moment. I was wondering how to speed up rendering the video in time-critical situations.

There was a talk at JSConf EU this year addressing this issue: https://youtu.be/NpKLt_YO3o8?list=PL37ZVnwpeshFmAPr65sU2O5WMs7_CGjs_&t=722 I think this is a solution, although there is the drawback of using lots of RAM. Maybe as an opt-in option? Ref #58

can this do some anti-aliasing ?

When I use this library to play a video, it looks like this:
[screenshot]
When I go to https://www.shadertoy.com/view/Mdf3zr and play the same video (commenting out line 39 of the GLSL script), it looks like this:
[screenshot]

Can VideoContext do some anti-aliasing to make the video look smoother?

[shadertoy.com uses piLibsJS (http://www.iquilezles.org/code/piLibsJS/piLibsJS.htm) for rendering, but it's too complex for me... 😭]

ImageNode does not play after seek

var canvasElement = document.getElementById("canvas");
this.ctx = new VideoContext(canvasElement);
var imageNode = this.ctx.image("image.png");
imageNode.startAt(0);
imageNode.stopAt(15);
imageNode.connect(this.ctx.destination);

btn.click(function(){
    ctx.currentTime = 10;
    ctx.play();
});

After I click the button, the context does not play, and ctx.currentTime stays at 10s. But if I change the image to a video, it plays fine.

How can I check that a video src is valid? (When the src is wrong, the VideoContext stalls; can I get a prompt?)

this.ctx = new VideoContext();
var videoNode = this.ctx.createVideoSourceNode('errorvideo.mp4');
videoNode.start(0);
videoNode.stop(10);
var videoNode2 = ctx.createVideoSourceNode('errorvideo.mp4');
videoNode2.start(10);
videoNode2.stop(20);

The first videoNode will keep loading forever, so how can I check that the src is valid? Or can you give me a prompt when it is wrong, like registerCallback('nocontent', ...)?
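One workaround sketch until such a callback exists: validate the src first with a throwaway <video> element, whose standard error event fires when a source cannot be loaded, and only schedule the node on success:

var probe = document.createElement("video");
probe.preload = "metadata";
probe.src = "errorvideo.mp4";
probe.addEventListener("error", function () {
    //probe.error is a standard MediaError; code 4 means the src is unsupported or unreachable.
    console.warn("source failed to load:", probe.error && probe.error.code);
});
probe.addEventListener("loadedmetadata", function () {
    //The src is loadable; safe to schedule the node now.
    var videoNode = ctx.video(probe.src);
    videoNode.start(0);
    videoNode.stop(10);
});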

Consider changing preserveDrawingBuffer to false for the WebGL context

I plan to use the library not only for desktop, but also for mobile phones, thus performance is important to me.

I've read that low-performance devices in particular would benefit from a preserveDrawingBuffer: false setup, where the graphics driver can swap rather than copy the frame buffer. At the moment, the VideoContext constructor always uses preserveDrawingBuffer: true.

I want to open this issue to make sure we can check (later on) whether this is possible for the implementation and how big the performance speedup is on low-end devices.

Some links for reference:
http://stackoverflow.com/questions/27746091/preservedrawingbuffer-false-is-it-worth-the-effort
https://www.khronos.org/webgl/public-mailing-list/archives/1110/msg00003.php

Georg

source empty

this.ctx = new VideoContext();
var videoNode = this.ctx.createVideoSourceNode('video.mp4');
videoNode.start(0);
videoNode.stop(10);
var videoNode2 = ctx.createVideoSourceNode('');
videoNode2.start(10);
videoNode2.stop(20);

[screenshot]
In this code this.ctx.duration is 20s.
I want to show a 20s video where the last 10s has no content (and during 10s-20s I want to listen for the nocontent callback), but it looks like something goes wrong and the VideoContext does not continue past 10s.

Canvas goes black on switch tab

Sometimes when I switch away from and then come back to the tab with the VideoContext, the canvas goes black.

In the snippet below, the video is paused after 2 seconds by setting stretchPaused = true. Once it has paused, if you move to a different tab for more than 8 seconds (the video's stop time of 10 seconds minus its current time of 2 seconds), the canvas goes black when you return to the tab. The issue doesn't seem to occur if the stop time is not set.

var canvas = document.getElementById("canvas");

var videoCtx = new VideoContext(canvas);
var node = videoCtx.createVideoSourceNode("./video.mp4");

node.start(0);
node.stop(10);
node.connect(videoCtx.destination);

videoCtx.play();

setTimeout(function() {
    node.stretchPaused = true;
}, 2000);

VideoElementCache immediately fills when not on localhost

When using our application on a non-localhost domain, we immediately see
No available video element in the cache, creating a new one. This may break mobile, make your initial cache larger.
in the logs, which is sometimes followed immediately by the video (canvas) playback stopping.

On localhost we never experience the message or the video playback problem. We're using hash-based routing, which might be the culprit, but we're using that on localhost too.

There's a line of code that inspects the URL of the window, so I suspect that might be the problem ¯\_(ツ)_/¯

if ((element.src === "" || element.src === undefined || element.src === stripHash(window.location)) && element.srcObject == null )return element;

Doing something in the VideoContext 'ended' callback

this.ctx.registerCallback('ended', this.handleEnded);
handleEnded = () => {
    console.log('====> Playback ended');
    this.ctx.pause();
    this.ctx.currentTime = 0;
}

The console.log shows, but this.ctx keeps playing from 0s for a few seconds, and no frame is shown; I can see the videoNode's registerCallback is still loading.

Video transition example not working in Chrome for Android

Hi,

I've noticed that the video transition example does not work on Chrome for Android. It blends the videos as expected, but the character is not moving. It looks like two still images, instead of videos, get transitioned.

I used the Chrome developer tools to inspect the error log and came to this message:
videocontext.js:1623 Uncaught (in promise) DOMException: play() can only be initiated by a user gesture

The problem is that the play() call is not within the stack trace of the click() handler for the play button, so Chrome does not allow it to play.

I've read some comments in the Chrome issue tracker and there is a workaround that seems to work for audio tags. It may also work for video (I haven't had time to test it yet): create an empty <video> element from within the user gesture handler.

By the way: on Firefox for Android it works as expected. But I've read that Mobile Safari has the same restrictions as Chrome for Android. Is there somebody with an iPhone to test this?

Georg

Initial load race condition "cannot set property 'playbackRate' of undefined"

About 60% of the time, when we instantiate multiple VideoContext instances at the same time and some of them have multiple source nodes, we see the following error:

Uncaught TypeError: Cannot set property 'playbackRate' of undefined
    at VideoNode._update (videocontext.js:1561)
    at VideoContext._update (videocontext.js:945)
    at UpdateablesManager._update (videocontext.js:3074)
    at UpdateablesManager._updateRAFTime (videocontext.js:3066)

which is followed by the VideoContext instances failing. Based on the traceback above, there seems to be some situation where a SourceNode can have no this._element while this._state is SOURCENODESTATE.playing.

We've not seen this issue occur when loading VideoContext instances that each have just a single source node.

I've looked for any glaring error conditions at our end and can't find any. We're not passing any options to the VideoContext constructor, just the canvas, so it could perhaps have something to do with the video element cache.

Warning: "WebGL: INVALID_VALUE: texImage2D: no video"

I get this warning fairly often, filling my console: WebGL: INVALID_VALUE: texImage2D: no video

I tracked it down to this function:

export function updateTexture(gl, texture, element){
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, element);
}

Background: parts of my VideoContext are black (there is nothing to show at that time). I also see this while the video is loading.

Managing z-layer

The z-layers of my tracks change, and I would like to be able to change them programmatically, yet I haven't found a way.

As far as I can see, the z-layer is determined at the time a source node is connected to the VideoContext. Is changing the z-layer later on impossible? Do I have to kill all source nodes first and then add them again in the new order? What about a sourceNode.layer = value? ("layer" is just the best name I could come up with, though I'm pretty sure you already have some internal value here.)
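Rendering order appears to follow connection order, so one workaround sketch, until a real layer property exists, is to disconnect all the sources and reconnect them in the desired bottom-to-top order (disconnect() is part of the graph node API, but treat this as an untested sketch):

function setLayerOrder(sourceNodes, destination) {
    //Disconnect everything, then reconnect bottom-to-top;
    //later connections render on top of earlier ones.
    sourceNodes.forEach(function (node) {
        node.disconnect();
    });
    sourceNodes.forEach(function (node) {
        node.connect(destination);
    });
}

//e.g. bring overlayNode above backgroundNode:
setLayerOrder([backgroundNode, overlayNode], videoCtx.destination);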

frame does not change but ctx.currentTime advances

const canvas = document.getElementById('canvas');
this.ctx = new VideoContext(canvas);
const node = this.ctx.video('video.mp4');
node.startAt(0);
node.stopAt(15);
node.connect(this.ctx.destination);

function play () {
    this.ctx.play();
}

function pause () {
    this.ctx.pause();
}

Sometimes on the first play() the VideoContext does not update the frame, but ctx.currentTime keeps advancing; then after clicking pause and play again, the frame updates but currentTime is not right.

  1. Click play(): ctx.currentTime advances; the frame does not change.
  2. Click pause(): ctx.currentTime is 10s, the ctx is paused.
  3. Click play(): ctx.currentTime advances from 10s, but the video plays from 0s; after 5s the ctx stops (because I set node.stopAt(15)).

It only happens sometimes, not every time.

demo is not working in the mobile Chrome browser

hi,
I am checking your example, and it doesn't work in mobile Chrome. Here is the error in the console:

Uncaught TypeError: Array.from is not a function
85 videocontext.js:641 Uncaught TypeError: Array.from is not a function

Am I missing something? Or is mobile not supported yet?

thank you !

changing the source of a source node

To change the src of a node, I used node.element.src = newSrc, which of course works but doesn't align with the given ctx.video(src) API. In video(), src can be of different types, while node.element.src is explicitly a string containing a URL. It would be more dev-friendly to offer:

class VideoNode {
    set src(value) {}
}

and then process the input accordingly.

Use with array playlist as input?

This is more of a question than an issue. I'd been using the now-deprecated BBC HTML5 Video Compositor, and I really liked being able to pass in a playlist consisting of an array of clips, as an EDL of some sort.

Is there a way to do the same in VideoContext?

Playing a WebM results in the final frame remaining visible until VideoContext is restarted.

Hi,

We have been testing WebM videos with alpha in VideoContext, overlaying on top of another video. It works well except for when the video ends, the final frame is still visible on the canvas until VideoContext is restarted.

We have checked the source node with the WebM video and it definitely looks like it ends correctly and is unloaded by VideoContext.

Is there any other reason we would be seeing this behaviour?

Thanks,

John

CrossFade values are opacity or transparency?

The transition function is:
transition(startTime, endTime, currentValue, targetValue, propertyName)

Do currentValue and targetValue define opacity or transparency?

Is 1.0 100% opaque or 100% transparent?

It currently behaves like 100% transparency, but for a CrossFade transition I think it should indicate the opposite.

Does this make sense to you?

VideoNode crashes if node._startTime === node._currentTime on first update

I have a videocontext graph that sometimes produces this error on loading

Uncaught TypeError: Cannot set property 'playbackRate' of undefined

This error occurs in videonode.js if, on loading, this._startTime === this._currentTime. This prevents this._load() from being called before this._state is set to playing in sourcenode.js.

this._load is only called if this._startTime - this._currentTime < this._preloadTime (https://github.com/bbc/VideoContext/blob/master/src/SourceNodes/videonode.js#L172), so this can be fixed by changing the condition to "less than or equal to": this._startTime - this._currentTime <= this._preloadTime

Or, if the bug is that the playing state is set prematurely, you could update the condition that sets the playing state (https://github.com/bbc/VideoContext/blob/master/src/SourceNodes/sourcenode.js#L354) to be strictly "less than".

Best way to clean

Hi,

What is the best way to clean up a VideoContext timeline? I mean removing all the videos and effects (or destroying the context and creating it again)?

Thanks

Easier set volume

I could not find a volume control on the VideoContext itself; instead, each source has its own volume. I, and I guess most others, treat the whole composition as one video with one volume control. So when I build a UI around VideoContext, setting the volume means cycling through all source nodes and setting it on each. I was wondering whether there could be a default volume that is applied to all source nodes when they are added, and to all present source nodes whenever it is set.

In pseudo code:

VideoContext {
    get volume(); // returns the default volume
    set volume(value); // sets the default volume and applies it to all present source nodes
}

Would be wonderful to have that.
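In the meantime, a minimal helper sketch, assuming you keep your own array of the source nodes you create (the library does not expose a public list of them) and that each node exposes a volume setter as VideoNode does:

var sourceNodes = [];

function addVideo(src) {
    var node = videoCtx.video(src);
    sourceNodes.push(node); //track every node we create
    return node;
}

function setMasterVolume(value) {
    //Apply one "master" volume to every tracked source node.
    sourceNodes.forEach(function (node) {
        node.volume = value;
    });
}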

video is black, sound is playing

I made the switch from desktop to mobile development, using cordova.

Cordova and mobile platforms are kind of weird when it comes to paths and URLs for files. Cordova mainly offers two URL schemes, file:// and cdvfile://, to access videos on the mobile device. The app itself is served via file://android_asset/www/index.html (production build) or via http:// (dev build). There are also many restrictions that need to be adjusted to load locally available videos (CORS and mobile device access whitelisting). Using a standard <video> element with one of the URLs above as element.src works quite nicely, but setting the crossorigin attribute to anonymous (which is a must to draw the video onto a canvas) can make the video fail to load. I was able to load a video in a production build (served as file://) from file:// with anonymous crossorigin. At least I could instruct the VideoContext to play; however, the screen was black even though the audio played.

From MDN video element reference:

crossorigin This enumerated attribute indicates whether to use CORS to fetch the related image. CORS-enabled resources can be reused in the element without being tainted

Since I set CORS to whitelist my video origin, which finally made the VideoContext play (but with sound only), I'm wondering whether you have encountered this before and have any good advice for making this library usable on mobile devices (using Cordova).

Thanks

playbackRate only updates audio/video after changing currentTime

When I run
ctx.playbackRate++;
the current position increases properly, but the audio/video does not speed up.

Workaround: the playbackRate only changes the video speed after
ctx.currentTime = ctx.currentTime;

PS: tried with the video Big_Buck_Bunny_4K.webm.160p.webm

Chromecast?

Hey guys,

I am really impressed by your library and am considering using it. I was wondering whether, technically speaking, the rendered video (as it is drawn to the canvas) can be transmitted to a Chromecast?

Answer: Yes, it is possible. See comment below.

thanks

Changing playbackRate does not affect the video rendering

I have a very strange situation. I developed a video player around VideoContext, which is great. It runs in Electron, which sounds like a pretty stable environment for running a web app. However, video behaves differently across systems. I have 4 buttons to change the playback rate: 0.5x, 1x, 2x and 5x. It works well for me, yet for some of my colleagues, when they select 5x the time updates accordingly but the frames are still rendered at 1x speed. It's difficult to narrow down what's responsible. What might cause VideoContext to render frames at 1x speed while the rate is actually 5x?

Thanks a lot

Pixelated video and images

So I'm working on a live-coding language to wrap around VideoContext; the language is going well, but playback is off. Videos and images are extremely pixelated. It may be a problem with my implementation, but Ian mentioned having had the problem internally at the BBC before. Any ideas? I've attached examples below.

vid-example-bad

vid-example-good

replaying video with transitions has some errors

const node = this.ctx.video('mp4');
node.startAt(0);
node.stopAt(5);

const node2 = this.ctx.video('mp4');
node2.startAt(2);
node2.stopAt(10);

const node3 = this.ctx.video('mp4');
node3.startAt(7);
node3.stopAt(15);

const node4 = this.ctx.video('mp4');
node4.startAt(15);
node4.stopAt(20);

const node5 = this.ctx.video('mp4');
node5.startAt(20);
node5.stopAt(27);

const crossFade = this.ctx.transition(VideoContext.DEFINITIONS.HORIZONTAL_WIPE);
const crossFade2 = this.ctx.transition(VideoContext.DEFINITIONS.CROSSFADE); // fade in
const crossFade3 = this.ctx.transition(VideoContext.DEFINITIONS.CROSSFADE); // fade in
const crossFade4 = this.ctx.transition(VideoContext.DEFINITIONS.CROSSFADE); // fade in

// 1 & 2: horizontal wipe
node.connect(crossFade);
node2.connect(crossFade);
crossFade.transition(2, 5, 0.0, 1.0);

// 2 & 3: fade in
crossFade.connect(crossFade2); // connect to input of crossfade2 rather than destination.
node3.connect(crossFade2);
crossFade2.connect(this.ctx.destination);
crossFade2.transition(7, 10, 0.0, 1.0);

// 4: no effect
node4.connect(this.ctx.destination);

// 5: start
node5.connect(crossFade3);
crossFade3.connect(this.ctx.destination);
crossFade3.transition(20, 23, 1.0, 0.0);

The code is like this. The first playthrough is all right, but when I replay it after the first run ends, some nodes report no content via this.ctx.registerCallback('nocontent', function () { console.log('nocontent'); }). Is something wrong with the library, or did I make a mistake?

video scaling and quality

I gave the library a try and it made all my code a lot easier (since I was working around the same issues you had: no fixed currentTime, etc.). I followed the basic example: create a video context, connect a video node, and play. Nothing fancy.

Before that, I had a <div> container in which I placed many <video> elements, making one of them active (and hiding the others). The <video> elements took up all the space of the surrounding <div>, and videos were automatically scaled to their own aspect ratio (some were 16:9, others 4:3, but the <video> element was clever enough to arrange it quite well). Now, instead of the video elements, I have my canvas, which also fits into the surrounding div; however, the videos fill the whole canvas and become skewed. So I query the dimensions of the videos first and set the width and height of the canvas to fit their ratio.

I feel like I'm doing it wrong, as I was expecting this to be handled automatically.

Second (which I thought was a consequence of the bad scaling), the quality of the video is lost. I have a 1080p video, but when rendered on the canvas it looks horrible (see screenshot below).

videocontext-bad-quality

How do I get the original quality with automatic video scaling?

Thanks a lot.

single videoNode before and after adding a transition does not work

this.ctx = new VideoContext(canvas)
const node = this.ctx.video('mp4');
node.startAt(0);
node.stopAt(10);

const node2 = this.ctx.video('mp4');
node2.startAt(5);
node2.stopAt(20);

const node3 = this.ctx.video('mp4');
node3.startAt(15);
node3.stopAt(25);

const crossFade = this.ctx.transition(VideoContext.DEFINITIONS.HORIZONTAL_WIPE);
const crossFade2 = this.ctx.transition(VideoContext.DEFINITIONS.CROSSFADE);

node.connect(crossFade);
node2.connect(crossFade);
crossFade.connect(this.ctx.destination);
crossFade.transition(5, 10, 0.0, 1.0);

node3.connect(crossFade2);
crossFade2.connect(this.ctx.destination);
crossFade2.transition(15, 20, 0.0, 1.0);
