gre / gl-react

gl-react – React library to write and compose WebGL shaders

Home Page: https://gl-react-cookbook.surge.sh

License: MIT License

JavaScript 96.31% HTML 0.15% CSS 2.41% Shell 0.29% GLSL 0.84%
react webgl universal react-native glsl

gl-react's Introduction


gl-react (v4)

gl-react is a React library to write and compose WebGL shaders. Implement complex effects by composing React components.

This universal library must be coupled with one of the concrete implementations: gl-react-dom (React DOM), gl-react-expo (React Native via Expo GLView), gl-react-native (React Native), or gl-react-headless (Node.js).

Links

References

They use gl-react 🙂

Gist

import React from "react";
import { Shaders, Node, GLSL } from "gl-react";
const shaders = Shaders.create({
  helloBlue: {
    frag: GLSL`
precision highp float;
varying vec2 uv;
uniform float blue;
void main() {
  gl_FragColor = vec4(uv.x, uv.y, blue, 1.0);
}`
  }
});
class HelloBlue extends React.Component {
  render() {
    const { blue } = this.props;
    return <Node shader={shaders.helloBlue} uniforms={{ blue }} />;
  }
}

import the correct implementation,

import { Surface } from "gl-react-dom"; // for React DOM
import { Surface } from "gl-react-expo"; // for React Native via Expo GLView
import { Surface } from "gl-react-native"; // for React Native
import { Surface } from "gl-react-headless"; // for Node.js!

and this code...

<Surface width={300} height={300}>
  <HelloBlue blue={0.5} />
</Surface>

...renders:

Features

  • React, VDOM and the immutable paradigm: OpenGL is a low-level, imperative and mutable API. This library takes the best of it and exposes it in an immutable, descriptive way with React.
  • The React lifecycle allows partial GL re-rendering: only a React component update triggers a redraw. Each Node holds a framebuffer state that gets redrawn when the component updates, scheduling a Surface reflow.
  • Developer experience
    • React DevTools works like it does on the DOM, and lets you inspect and debug your stack of effects.
  • Uniform bindings: bindings from JavaScript values to OpenGL GLSL types (bool, int, float, vec2, vec3, vec4, mat2, mat3, mat4, sampler2D...)
  • An extensible texture loader that lets you support any content usable in the shader as a sampler2D texture.
    • support for images
    • support for videos (currently gl-react-dom)
    • support for canvas (gl-react-dom)
  • flowtype support.
  • Modular, composable, sharable. Write shaders once into components that you re-use everywhere! In the end, users don't even need to write shaders.
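The uniform bindings above can be pictured as a mapping from JS values to GLSL types. Here is a rough, hypothetical sketch of such an inference (not the library's actual code; the real bindings also rely on the types declared in the shader):

```javascript
// Hypothetical sketch: infer a GLSL uniform type from a plain JS value.
// NB: a 4-element array is ambiguous (vec4 vs mat2); we pick vec4 here.
function inferGLSLType(value) {
  if (typeof value === "boolean") return "bool";
  if (typeof value === "number") return "float";
  if (Array.isArray(value)) {
    const n = value.length;
    if (n >= 2 && n <= 4) return "vec" + n; // [x, y] -> vec2, ...
    if (n === 9) return "mat3";             // 3x3 matrix, column-major
    if (n === 16) return "mat4";            // 4x4 matrix, column-major
  }
  return "sampler2D"; // anything else is treated as texture content
}
```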

Atom nice GLSL highlighting

If you are using the Atom editor, you can have JS-inlined GLSL syntax highlighted.

To configure this:

  • add the language-babel package.
  • configure language-babel by adding GLSL:source.glsl to the "JavaScript Tagged Template Literal Grammar Extensions" setting.
  • (Bonus) Add this CSS to your Atom > Stylesheet:
/* language-babel blocks */
atom-text-editor::shadow .line .ttl-grammar {
  /* NB: designed for dark theme. can be customized */
  background-color: rgba(0, 0, 0, 0.3);
}
atom-text-editor::shadow .line .ttl-grammar:first-child:last-child {
  display: block; /* force background to take full width only if ttl-grammar is alone in the line. */
}

gl-react's People

Contributors

alessbell, brentvatne, dependabot[bot], erl-j, gre, inokawa, juanfrnz, jussikinnula, kelset, kesha-antonov, wcandillon


gl-react's Issues

Support for vertex shader / custom buffer data

The vertex shader and buffer data should NOT be hard-coded by implementations but provided by gl-react-core (and defaulted if none is provided).

Something like:

var shaders = GL.Shaders.create({ shaderName: { frag, vert } });
// and then
<GL.View shader={shaders.shaderName} buffer={[ 1, 2, 3, 4, 5, /* whatever*/ ]} ...  />

This would allow more advanced use-cases (like 3D shapes).

Now, giving more power means we have to be more generic and expose much more of OpenGL.
This might be challenging, and we will have to identify a subset of use cases to start with.

<GL.Surface> wrapper => make gl-react more "universal"

There is an important API design decision to make, and this is something I've been thinking about for months now.

This issue depends on React 0.14+ being a dependency of react-dom and react-native ( facebook/react-native#2985 ), and therefore will only work with those upcoming versions. This will be a major semver release anyway.

Gist

Split <GL.View> in two: <GL.Node> (an abstract component that doesn't actually render() anything) and a new <GL.Surface> (which contains the tree of GL.Node / GL components and renders the concrete impl's DOM canvas / GLKView).

For instance, instead of doing:

<HelloGL width={511} height={341} blue={0.5} />

we will have to do:

<GL.Surface width={511} height={341}>
  <HelloGL blue={0.5} />
</GL.Surface>

This makes things less magical and more explicit (final users of GL libraries have the responsibility to "instantiate" a pipeline of effects with GL.Surface). It will also simplify the GL.View logic. Finally, this is the only way we can make GL.View fully universal (GL.Node is render-less, so it doesn't depend on the platform specifics implemented by GL.Surface).

How current GL.View works

Currently, GL.View is a unified component that can either sit at the top level of the tree (where it renders into a <canvas/>) or inside another GL.View (where it is not rendered; we only use its props to build a data tree). We can use GL.View directly in the VDOM, and it will be rendered into what we internally call a GLCanvas. But if you use a GL.View inside a GL.View, it is used differently: we merge it with its parents to make a single tree of effects (resulting in only one rendering).

pros:

  • it lets you define components that can be used without thinking about whether they are contextual to a GL.View or not. e.g. you just do <Blur><Blur>...</Blur></Blur> directly, without wrapping it into anything.

cons:

  • complexity of the current code because of the unification
  • bad separation of concerns. The semantics are unclear, and GL.View is harder to explain.
  • some props only make sense for the root node (the canvas).
  • The current design prevents us from creating truly universal components: GL.View is provided by the implementers (gl-react or gl-react-native) and not by the shared lib (gl-react-core). The reason is just that, at the root level, GL.View has to render into the concrete implementation; that is not true for the GL.View children nodes, which merely describe data.

Proposal 1

Split the current GL.View into two pieces: GL.Surface and GL.Node. A bit like in react-canvas, where you have Surface and View.

API changes

GL.Node

  • Props: { shader, uniforms?, width?, height?, preload? }
  • Methods: none
  • render() throws an exception saying something like "You need to wrap your GL components / GL.Node into a GL.Surface".

GL.Surface

  • Props: { width, height, children, opaque?, preload?, onLoad?, onProgress?, autoRedraw?, eventsThrough?, visibleContent? }
    • children must be a GL.Node or a GL Component.
    • NB: width and height only control the View / canvas style, not the actual drawing buffer size, which is determined by the root GL.Node (and optionally inherited)
  • Methods: all current methods that are on GL.View

Library responsibility

  • gl-react-core exposes { Node, Shaders, Uniform, createComponent }
    • Shaders will need an event system so implementers can stay in sync (the RN impl will list() and then on("create", ...) to sync shaders and upload them to the Obj-C side)
  • gl-react and gl-react-native expose { Surface }. They will still use gl-react-core helpers to create a Surface with minimal effort.

Library renaming

  • Rename gl-react lib to gl-react-dom
  • Rename gl-react-core to gl-react.
    That way we are consistent with React naming. We will rename the repositories at the very end so we keep the history.

improve how fbo get reused

We can improve a bit how FBOs get reused.
At the moment, a bunch of reuse cases are supported across multiple children, but we don't support this classic "pipeline" swap:

(screenshot of the pipeline swap, 2016-02-07)

which is an important use-case

`eventsThrough` feature

There is a use case where I want events to go through the GLView (as if it were transparent) so that they get handled by the content being rasterized; we also want that content to be "responsible" for those events.

In terms of API, I'm not sure how to declare which content is responsible for the events, though. Maybe forbid multiple contents in that context?

For the implementations: in gl-react this is just the pointer-events: none CSS rule, and for gl-react-native, iOS has view.userInteractionEnabled = NO

Add unit tests

We need unit tests to avoid regressions when adding features / refactoring (this has happened a few times recently). We should try https://facebook.github.io/jest/ (as used in React).

I think this should wait for the next major release (2.0.0).

Configure the linear interpolation of any sampler2D uniform

Linear interpolation can be configured on almost everything: textures, content and FBOs.
So we should be able to set a disableLinearInterpolation: true prop on a GL.Uniform, or attach it in a {value, opts: { disableLinearInterpolation: true }} object, like is already possible for ndarray in the DOM impl: https://github.com/ProjectSeptemberInc/gl-react-dom/blob/601180581a461760f541ee459552df9b8d2c52ef/Examples/Simple/Colorify.js#L36 / https://github.com/ProjectSeptemberInc/gl-react-dom/blob/6e4b9f4a8eb0045977c643a4920651082cb36fcb/Examples/Tests/index.js#L102-L111

Use case:
you have a 2×2 HelloGL and you want to scale it up to 200×200, but without linear interpolation, so you preserve the 4 distinct pixels in the display
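To illustrate the use case in plain JS (a sketch independent of the library): nearest-neighbor sampling, which is what disabling linear interpolation gives you, keeps the 4 source pixels flat when scaling up, while linear interpolation would blend them into a gradient.

```javascript
// Sketch: sample a 2x2 single-channel "texture" at normalized coords (u, v)
// with nearest-neighbor filtering (linear interpolation disabled).
const tex2x2 = [
  [0, 255],
  [255, 0],
];

function sampleNearest(tex, u, v) {
  // pick the closest texel instead of blending the 4 neighbors
  const x = Math.min(tex[0].length - 1, Math.floor(u * tex[0].length));
  const y = Math.min(tex.length - 1, Math.floor(v * tex.length));
  return tex[y][x];
}
```

Scaling to 200×200 with this sampling yields 4 flat 100×100 squares instead of a smooth gradient.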

How to smartly know that a content must be rasterized again or not

The current strategy is to re-draw every time the VDOM gets re-rendered.

But if you are just rasterizing, say, an <img>, this should be done once, after load.

First idea:
We could introduce a shouldUpdate prop?
OR
Would implementing the component's shouldUpdate fix this?

Support for multi-pass

Use case: 2-pass blur.

Implementing an efficient blur in GLSL is tricky: for best efficiency it requires more than one GLSL pass.
A simple way is to apply a Gaussian blur on X and then on Y.

Here is an implementation of a one-dimensional Gaussian blur:

const React = require("react");
const GL = require("gl-react");

const shaders = GL.Shaders.create({
  blur1D: {
    frag: `
precision highp float;
varying vec2 uv;
uniform sampler2D image;
uniform vec2 resolution;
uniform vec2 direction;

// from https://github.com/Jam3/glsl-fast-gaussian-blur
vec4 blur13(sampler2D image, vec2 uv, vec2 resolution, vec2 direction) {
  vec4 color = vec4(0.0);
  vec2 off1 = vec2(1.411764705882353) * direction;
  vec2 off2 = vec2(3.2941176470588234) * direction;
  vec2 off3 = vec2(5.176470588235294) * direction;
  color += texture2D(image, uv) * 0.1964825501511404;
  color += texture2D(image, uv + (off1 / resolution)) * 0.2969069646728344;
  color += texture2D(image, uv - (off1 / resolution)) * 0.2969069646728344;
  color += texture2D(image, uv + (off2 / resolution)) * 0.09447039785044732;
  color += texture2D(image, uv - (off2 / resolution)) * 0.09447039785044732;
  color += texture2D(image, uv + (off3 / resolution)) * 0.010381362401148057;
  color += texture2D(image, uv - (off3 / resolution)) * 0.010381362401148057;
  return color;
}

void main () {
  gl_FragColor = blur13(image, uv, resolution, direction);
}
    `
  }
});

class Blur1D extends React.Component {
  render () {
    const { width, height, direction, children } = this.props;
    return <GL.View
      shader={shaders.blur1D}
      width={width}
      height={height}
      uniforms={{
        direction,
        resolution: [ width, height ]
      }}>
      <GL.Target uniform="image">{children}</GL.Target>
    </GL.View>;
  }
}

And here is how we could combine it to make a real blur effect:

class Blur extends React.Component {
  render () {
    const { width, height, factor, children } = this.props;
    return <Blur1D width={width} height={height} direction={[ factor, 0 ]}>
      <Blur1D width={width} height={height} direction={[ 0, factor ]}>
        {children}
      </Blur1D>
    </Blur1D>;
  }
}

In the end we have a nice component:

<Blur width={200} height={100} factor={1}> ...whatever... </Blur>
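As a side note, the blur13 kernel above can be sanity-checked in plain JS: each non-center weight is applied twice (at +offset and -offset), and the total should sum to 1 so the blur preserves overall brightness.

```javascript
// Weights from the blur13 kernel above (center tap + 3 mirrored taps).
const center = 0.1964825501511404;
const taps = [0.2969069646728344, 0.09447039785044732, 0.010381362401148057];

// Each tap weight is used twice in the shader: uv + off and uv - off.
const total = center + 2 * taps.reduce((sum, w) => sum + w, 0);
console.log(total); // sums to 1 (up to floating-point rounding)
```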

Technical thoughts

In the current implementation this multi-pass code is likely not to work (the passes won't render in the same "rendering frame", so the first draw will probably be black) and it will not be efficient at all (it would use one WebGL canvas per pass and draw one canvas into the other).

We need some more work to optimize the specific case where, basically, a GL.View is put inside a GL.Target.

We need to use framebuffers so the effects pipeline is drawn in one go, using only one WebGL context.

We need to linearize the VTree: put all targets at the same level; targets that are GL.Views are a special case (add a new prop) and will be rendered via framebuffers.

Inspiration: https://github.com/rackt/react-router/blob/0.13.x/modules/createRoutesFromReactChildren.js (this is how we can traverse the tree to locate the sub GL.Views)

(PS: These are raw ideas, we need more thoughts on this.)
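The linearization idea can be sketched in plain JS with a hypothetical node shape (not the actual internal format): walk the children depth-first and collect the nested GL nodes into a flat list, so each becomes one framebuffer pass, inner passes first.

```javascript
// Hypothetical node shape: { isGL: bool, name: string, children: [...] }.
// Collect GL nodes depth-first so inner passes are drawn before outer ones.
function linearize(node, out = []) {
  (node.children || []).forEach((child) => linearize(child, out));
  if (node.isGL) out.push(node);
  return out;
}

const tree = {
  isGL: true, name: "blurY",
  children: [
    { isGL: true, name: "blurX", children: [{ isGL: false, name: "img" }] },
  ],
};
linearize(tree).map((n) => n.name); // inner pass first: ["blurX", "blurY"]
```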

Code polishes

GL.Target children content

To allow more composability in this multi-pass pattern, we will also need to allow a GL.Target to contain just an image URL as its children.
That way, we could do:

<Blur ...>http://greweb.fr/me.jpg</Blur>

// as well as

<Blur ...><AnotherComponent /></Blur>

factorize some code across gl-react and gl-react-native?

flag to continuously re-draw

For use cases like the Video example, or if you want to apply an effect over an interface that updates continuously and is not observable,
we could simply introduce an "autoRedraw" prop. This is better than the current approach of running a requestAnimationFrame loop on the JS side (especially for gl-react-native, where we would handle the render loop in native code).

TODO 2.1.x

  • write doc
  • gre/gl-react-dom-v2@caef65a#commitcomment-15227730
  • make it work by depending directly on React, for the upcoming React Native dependency
    • remove all references to the previous React-injection hack from the documentation
    • update all examples to fix all warnings
    • release new version of gl-react libs
  • update versions and deps
  • write release notes
  • test all RN example on ios and android

TODOs for 1.1.0

  • #11
  • #13
  • #17
  • refactor gl-react-core to simplify the complexity of {type:"image"} and split it into more types
  • implement the texture array feature in gl-react-native & port the Simple & Tests examples <= huge work, not for now; also we can't send a typed buffer from JS to RN, so this would be complex & maybe not so performant
  • make sure a null value is allowed for textures; this renders the default GL color
  • #15 : optimize the gl-react-core shared-search algorithm
  • in gl-react-native, there seems to be a regression with retina
  • synchronize the documentation with the latest features
  • update the Simple example documentation to add the new examples
  • test all examples in gl-react & gl-react-native

[2.0.0] Update documentation

Also, because GitBook can only show one doc version at a time, the summary should clearly state the version, and the main README should point to the 1.x branch doc.

  • examples should link directly to the implementations

GL.Target->GL.Uniform & Sharing computation of that GL.Uniform content

N.B. I've put these two concerns together because the second depends on the first. As the API gets renamed and the internal format will change a bit, this will be a minor release (not a patch).

First point is about renaming <GL.Target uniform> to <GL.Uniform name>.

GL.Target is nothing more than a link to a uniform.

  • rename <GL.Target uniform={name}> to <GL.Uniform name={name}>.
  • allow anything put inside a <GL.Uniform> to be handled as if it were passed in the uniforms prop of the GL.View.

The second point is about sharing the computation of the GL.Uniform content (the view to rasterize / the sub GL.View).

If two virtual DOM elements are identical by reference, we can share the same computation (the same FBO / the same rasterization).

Example 1

If you write something like...

const content = <canvas ... />; // or <View> in gl-react-native
return <GL.View ...>
  <GL.Uniform name="foo">{content}</GL.Uniform>
  <GL.Uniform name="bar">{content}</GL.Uniform>
</GL.View>

...currently, we generate the content twice and rasterize it twice. This is a waste of computation, because we can trust the Virtual DOM to be referentially transparent.
We can optimize this down to one computation just by checking for duplicate Virtual DOM references.
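A plain JS sketch of that optimization (hypothetical shapes, not the actual internals): keying rasterizations by VDOM reference with a Map collapses duplicates into one computation.

```javascript
// Sketch: share one rasterization per distinct VDOM reference.
function rasterizeAll(contents, rasterize) {
  const cache = new Map(); // VDOM reference -> rasterized result
  return contents.map((vdom) => {
    if (!cache.has(vdom)) cache.set(vdom, rasterize(vdom));
    return cache.get(vdom);
  });
}

let calls = 0;
const content = { type: "canvas" };           // the same reference, used twice
const results = rasterizeAll([content, content], () => ++calls);
// calls === 1: the shared reference is rasterized only once
```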

Example 2

This Blur implementation is a 4-pass blur that uses Blur1D four times. Blur1D takes two texture uniforms: the texture to blur, and a "blur depth map" texture.

As a Blur user, we can either pass static content, as in the example:

<Blur width={w} height={h} blurMap="http://i.imgur.com/SzbbUvX.png">
  http://i.imgur.com/NjbLHx2.jpg
</Blur>

but the more interesting part is passing dynamic content:

<Blur width={w} height={h} blurMap={ <ComponentThatGeneratesTheBlurMap /> }>
  <ComponentThatGeneratesSomethingToBlur />
</Blur>

This doesn't change the API; it just means that since the user passes a single blurMap vdom (which gets passed to the 4 Blur1Ds), we only have to render it once. And the Blur implementer has nothing to do to make this sharing possible.

Implementation notes

A big part of the implementation will happen in gl-react-core; however, there might be some rework to do in the underlying implementations, because the data format probably won't allow expressing the contextual requirement of depending on a shared framebuffer (we need "framebuffer from context" as an extension of the current "framebuffer from children"). Note that for a "content to rasterize" type, the format would already allow this.

In the data node:

  • add a "contexts" array, which is like children but rendered before the children, using FBOs that are contextual to these children (as well as to the current node)
  • move the "fbo mapping" logic from the impls to gl-react-core so we can share it. The id in the texture pointers will reference the framebuffer id, no longer the child index: { type: "framebuffer", id: 42 }. The interesting algorithm to work on will be to keep minimizing the number of FBOs.

To summarize, implementers walking down the data tree will do:

function draw (data) {
  data.contexts.forEach(draw); // draw all contexts
  data.children.forEach(draw); // draw all children
  drawNode(data); // finally draw the current node
}
// N.B.: implementers may continue to "enhance" the data into "renderData", which has the shaders/textures objects resolved

InvalidStateError while switching/seeking through videos

So I am using gl-react to render two seekable videos into one WebGL canvas. The gl-react code looks like this:

<Surface
  autoRedraw
  width={width}
  height={height}>
  <GL.Node
    shader={this.getActiveTransition()}
    width={width}
    height={height}>
    <GL.Uniform name="from">
      <GL.Node
      shader={rawShaders[ADJUST]}>
        <GL.Uniform name="backgroundTexture">
          {this.renderVideo()}
        </GL.Uniform>
      </GL.Node>
    </GL.Uniform>
    <GL.Uniform name="to">
      <GL.Node
      shader={rawShaders[ADJUST]}>
        <GL.Uniform name="backgroundTexture">
          {this.renderVideo()}
        </GL.Uniform>
      </GL.Node>
    </GL.Uniform>
  </GL.Node>
</Surface>

The thing is, when I change a video element's current position (with readyState in mind), or change their sources quickly, in Firefox 44 I get an error:
InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable
This should happen when you try to set the current time of an element when it's not ready (e.g. doesn't have metadata yet)

The error occurs in the following places:
https://github.com/stackgl/gl-shader/blob/master/lib/create-uniforms.js#L10
https://github.com/stackgl/gl-shader/blob/master/lib/create-uniforms.js#L26
https://github.com/stackgl/gl-shader/blob/master/lib/create-uniforms.js#L106
https://github.com/stackgl/gl-shader/blob/master/lib/create-attributes.js#L85

I know it's possibly not an issue with gl-react itself, but could someone point me in the direction of what could cause this bug? I'm always checking the ready state of the element before changing its props.

infer width and height from parent/style

It would be perfect if gl-react / gl-react-native could automatically determine width and height from the parent/style (for instance when using flex: 1).

I think we'll wait for 2.0.0 to land for this.

Undefined _findGLNodeInGLComponent

Hey, I'm trying to use gl-react with React Native. I wrote a test application based on your examples, and it throws this error:
undefined is not an object(evaluating '_findGLNodeInGLCompon.via')

add more checks on numbers

  • check for NaNs, because the RN impl will not be happy about them. We should detect this everywhere: uniforms, number props.
  • check that values are strictly positive where required: width, height, pixelRatio
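A minimal sketch of what such checks could look like (hypothetical helper names, not the library's API):

```javascript
// Sketch: reject NaNs and non-positive sizes before they reach the native impl.
function checkNumber(name, value) {
  if (typeof value !== "number" || Number.isNaN(value)) {
    throw new Error(name + " must be a number, got " + value);
  }
}

function checkStrictlyPositive(name, value) {
  checkNumber(name, value);
  if (value <= 0) throw new Error(name + " must be > 0, got " + value);
}
```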

inject previous buffer as a texture

We need a way to inject the previous buffer as a texture. Use case: persistence effects (motion blur)

This is a solution to address gre/gl-react-native-v2#19

API

First idea: <GL.View ... uniforms={{ backbuffer: GL.DrawingBuffer }} />

Second idea (more generic): <GL.View saveBuffer="foo" ... uniforms={{ backbuffer: GL.fromBuffer("foo") }} />

GL.View: onLoad event

I need to know when the GL.View has "loaded".
Use case: you want to wait until 2 GL.Views are loaded before showing them.

This can be implemented with #3, and maybe it works well with the "waitImages" solution

[idea] use a "copy" shader to solve some cases

Here are the cases:

  • #33 surface size differs from the root node size
  • #27 if you cache the root node, we need to keep the rendering from one frame to the next

We might be able to do this from gl-react. Also, after this, the width and height of the root node should probably drive the GL Surface size more (if that's not already the case).

Such a "copy" shader is probably not the most efficient idea, but I don't see another way in OpenGL/WebGL. We need to make sure it is only enabled when required.

GL.View.isGLView to detect a GL.View

The current method is to check the class reference; this does not support multiple instances of the lib, which React 0.14 allows, so we will just use duck-typing with isGLView instead.

this should probably go with #22

Need to find a solution for the alpha issue; not using blending

(issue to follow gre/gl-react-native-v2#12 )

Problem

The problem we are facing is trying to solve this:

This is the result of a pipeline: a chain of N "Copy" shader framebuffers (each just copies a texture), applied to an image with a gradient of opacity values on top of another image. What we observe is a reduction of the alpha values (as if alpha is multiplied by itself: 80% × 80% = 64%, ...).

This result occurs when using the classical:

gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
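The alpha reduction can be reproduced with the blend equation itself in plain JS (one alpha channel, transparent destination): the classical (SRC_ALPHA, ONE_MINUS_SRC_ALPHA) factors multiply alpha by itself on every copy pass, while the premultiplied (ONE, ONE_MINUS_SRC_ALPHA) factors preserve it.

```javascript
// dstA' = srcA * srcFactor + dstA * dstFactor, over a transparent destination.
function classicBlend(srcA, dstA) {
  // (SRC_ALPHA, ONE_MINUS_SRC_ALPHA): the source factor is srcA itself
  return srcA * srcA + dstA * (1 - srcA);
}

function premultipliedBlend(srcA, dstA) {
  // (ONE, ONE_MINUS_SRC_ALPHA): the source is taken as-is (premultiplied)
  return srcA + dstA * (1 - srcA);
}

classicBlend(0.8, 0);       // ~0.64: alpha shrinks on every copy pass
premultipliedBlend(0.8, 0); // 0.8: alpha is preserved
```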

First solution attempted

A solution is explained here: http://stackoverflow.com/a/18497511/343892 (answering this same problem).
It involves doing gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA); with premultiplied textures.

I've tried to implement it. It works, but there are two big downsides:

1. it is not reliable at all in the WebGL world...

Because to make it possible, I need to create the WebGL context with the { premultipliedAlpha: false } option (which is not the default), and it does not work everywhere... As mentioned here, premultiplied alpha is enabled by default in WebGL, and as mentioned here, the premultipliedAlpha WebGL option is not stable across browser implementations since "this option is optional and it is not feature-detectable".

2. shaders have to be premultiplied

This method relies on using premultiplied-alpha textures, i.e. textures whose RGB components are multiplied by their A component.
This is also true for "fbo textures" that have been rendered by a shader.

With this "non-standard" blending method, any "generic" shader has to adapt itself to be "premultiplied" (if using transparency).

A solution I've found is to not change anything on the shader side, but to introduce a new prop (option) that makes a specific pipeline effect use the classical gl.SRC_ALPHA instead.

This makes everything more complicated now...

Help wanted

So we are back at the beginning with this problem. I'd like to deviate as little as possible from the default WebGL settings, so I wish to find a solution to the initial problem that doesn't involve esoteric blending modes.

PS: the solution introduced here "works" in the React Native iOS implementation, but I'm not so sure it works across all devices (since blending is generally a more advanced OpenGL feature), and I'm not sure it will also work in a future Android version. So I wish to move to a more standard approach there too.

`visibleContent` props

false by default.
If false, then:

  • no content is visible if the canvas is not opaque (on RN we can offset the content so it is moved away from the container)
  • the content is responsible for catching events IF eventsThrough is on

Better error message for wrong import

Hi there,

I just began using your lib and it seems pretty great so far. One piece of feedback: the error message mentioned in #32 and displayed here was not very helpful at first. I had to search for this error in the project; I think using that explanation as the error message would have helped me understand even faster what I did wrong.

Hope your project stays as cool as it is, Happy Holidays!

Surface on load callback not firing when texture url changes

Hey,

I am changing the texture URL via props; while everything works fine, the callbacks do not fire. Both onLoad and onProgress only fire on the initial load. I was going to use that callback to show a loading indicator when loading a new texture. Is that something you can fix, or maybe there is a way to work around it?

Thanks!

Support for inline shader props

<GL.View
  shader={{ frag: "..." }}
  ... />

Use-case: shader editor

To do this, we need to introduce .remove(id) on GL.Shaders and implement it in the implementations.

gl-react might do most of the work by handling the create / remove and providing the shader id to the Surface implementations. remove is called when a previous inline shader is gone and when the Surface is destroyed (we somehow need to hook into this from the gl-react side).

Expose gl.viewport

This is a great project, saved me so much time and effort.

It would be nice to have a way to control the viewport. A good use case is a canvas that fills the whole page: you set the canvas to the window width and height, then change the viewport to make sure the image fills the frame and keeps its ratio.

Thanks!

Capture view as svg

This would only be applicable to 3D models rendered as hidden-line views. Doing it with Phong renderings, for example, is too much work.

There is a common pattern where you need to capture a screen, but you also want to print. To print you need a vector representation.

I know this is a tough one, and I am asking for advice of how to do it, so I can try.

[width] and [height] should be optional for sub GL.View

and by default it would take its parent's width/height if those props are not provided.
The root still needs width/height and should throw if they are not provided.

Also I'm wondering if we should make the root a special component, something like react-canvas's Surface

Surface: contentOver

It should be possible to put content over the canvas instead of under it, while still using that view for rasterization.
Example use case: text shadow.

Should be quite straightforward to implement. My only concern is how to re-order the elements: at the moment the canvas is appended after the content, so does this mean it would be prepended? And how to fix the implementations without race conditions?

Somehow, I'm wondering if we shouldn't even allow any arbitrary external content (by tag id / node handle). For the moment, let's forget about this.

more format for captureFrame

We should be able to allow the following options (second parameter):

  • { type: "png" } 
  • { type: "jpg", quality: 0.9 }
  • { type: "png", quality: 0.9, format: "base64" }
  • { type: "png", quality: 0.9, format: "blob" } // React DOM
  • { type: "png", quality: 0.9, format: "file" } // React Native
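A sketch of how such options could be normalized with defaults (the defaults here are assumptions for illustration, not the actual API):

```javascript
// Sketch: fill in defaults for hypothetical captureFrame options.
function normalizeCaptureOptions(opts = {}) {
  return {
    type: opts.type || "png",                          // "png" | "jpg"
    quality: opts.quality === undefined ? 1 : opts.quality, // 0..1
    format: opts.format || "base64",                   // "base64" | "blob" | "file"
  };
}
```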

Not working with electron-react-boilerplate

Hi, first, thank you for your awesome project 👍

I've been experimenting with gl-react and electron-react-boilerplate (https://github.com/chentsulin/electron-react-boilerplate).
See this little example: https://github.com/morukutsu/electron-react-boilerplate/blob/master/app/containers/App.js

Running this code produces an exception in events.js (a Node.js internal file, I think), in the addListener function at line 220:

++this._eventsCount;
Uncaught TypeError: Cannot assign to read only property '_eventsCount' of #<Object>

The bug is caused by https://github.com/ProjectSeptemberInc/gl-react/blob/master/src/Shaders.js#L182 which exports a frozen (immutable) Shaders object.
Then, here https://github.com/ProjectSeptemberInc/gl-react-dom/blob/master/src/compileShaders.js#L17 , an event listener is added to this object.

To quick-fix it, I just removed the call to Object.freeze and it worked very well.

I don't know what the JavaScript spec says about adding an event listener to a frozen object, but I guess it violates the immutability assertion. Also, why does the error happen only in this environment? I have not tested it in the browser yet. What do you think?

Thanks
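The failure mode is easy to reproduce in isolation: in strict mode, assigning to a property of a frozen object throws a TypeError (outside strict mode it fails silently), which is exactly what EventEmitter's _eventsCount bookkeeping runs into.

```javascript
// Reproduce the crash: EventEmitter-style bookkeeping on a frozen object.
function addListenerToFrozen() {
  "use strict";
  const shaders = Object.freeze({ _eventsCount: 0 });
  try {
    shaders._eventsCount++; // what events.js does in addListener
    return null;
  } catch (e) {
    return e; // TypeError: cannot assign to a read-only property
  }
}

addListenerToFrozen() instanceof TypeError; // true
```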

Better image load experience

Problem

Currently, as soon as an image src disappears from the uniforms (e.g. moving from uniforms { t: "foo.png" } to { t: "bar.png" }), the image/texture allocated to "foo.png" instantly gets cleaned up, which results in blinks; this is especially annoying when you loop over a limited set of images.

We need a way to tell the GL.View to retain some textures, to avoid glitches during loading steps (see for instance the Blur example: switching textures creates a blink).

Solutions

Caching: images prop

We might want to preload images and ensure they are retained by passing a preload={[ array of images ]} prop.

Rendering mode: preload prop

Now, that raises another interesting use case: a user may also want to freeze the rendering until all the uniforms are loaded.

Implementation:
gl-react-core will resolve the array of images to wait for and pass it to the implementers. We might handle it at multiple levels of the rendering tree.

LRU cache

We might implement an LRU cache strategy that guesses which images are likely to be re-used (this is advanced, and I would be in favor of giving more control to users).
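A tiny LRU of the kind suggested could be sketched like this (names are hypothetical; the `dispose` hook stands in for wherever the underlying GL texture would actually be destroyed on eviction):

```javascript
// Minimal LRU cache keyed by image src. Relies on Map preserving
// insertion order: the first key is always the least recently used.
class TextureLRU {
  constructor(maxSize, dispose = () => {}) {
    this.maxSize = maxSize;
    this.dispose = dispose; // hypothetical hook: destroy the GL texture
    this.map = new Map();
  }
  get(src) {
    if (!this.map.has(src)) return undefined;
    const tex = this.map.get(src);
    this.map.delete(src); // move to most-recently-used position
    this.map.set(src, tex);
    return tex;
  }
  set(src, texture) {
    if (this.map.has(src)) this.map.delete(src);
    this.map.set(src, texture);
    if (this.map.size > this.maxSize) {
      const oldest = this.map.keys().next().value;
      this.dispose(this.map.get(oldest));
      this.map.delete(oldest);
    }
  }
}
```

With such a cache, switching from "foo.png" to "bar.png" and back would hit the cache instead of re-allocating the texture, avoiding the blink.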

workaround for releasing 2.0.0

As React Native depending directly on React is taking a while to become a reality, we'll still release 2.0.0 with the following workaround:

The final user will have to do a require("gl-react/react-native") or require("gl-react/react") in the entry point, before all other imports, so we can internally store the React instance and the lib can work with it.
Internally in the library, we'll replace require("react") with something like require("./react-runtime"). That "./react-runtime.js" file would throw an explicit exception in case the React instance was not set.
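A minimal sketch of what that "./react-runtime.js" indirection could look like (all names hypothetical):

```javascript
// react-runtime.js (sketch): holds the React instance injected by the
// entry-point require("gl-react/react") / require("gl-react/react-native").
let React = null;

// called by gl-react/react and gl-react/react-native
function injectReact(instance) {
  React = instance;
}

// called everywhere the library previously did require("react")
function getReact() {
  if (!React) {
    throw new Error(
      "gl-react: you must require('gl-react/react') or " +
      "require('gl-react/react-native') before any other import."
    );
  }
  return React;
}
```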

That way, we can still make all the libraries work, and we can remove the need for that require() once React Native depends on React directly.

Cache a node: GL.Node saveBuffer+cached props

We should be able to cache a node (i.e. a part of the rendering tree) using a previously saved buffer. We should also be able to pipe the previous framebuffer back in as a texture for more advanced use cases like cellular automata or simple persistence effects (also making gl-react more compatible with shaders from shadertoy.com and glslsandbox.com). Combined with discard in GLSL, this can be used to re-render only part of a buffer (preserving the rest of the previous buffer), allowing interesting things like drawing with a brush (see gre/gl-react-native-v2#19).

API Draft

  • <GL.Node shouldUpdate={false} saveBuffer="foo">: a way to cache a node's render computation.
    • saveBuffer: keeps the rendered content for the next frame (using a framebuffer). It's a bit like React's 'key' semantics, but global across the whole tree.
    • shouldUpdate: if true (the default), the GL.Node renders normally; if false, rendering is short-circuited and the previous framebuffer is re-used instead.
  • special symbols that can be passed to uniforms:
    • GL.PreviousBuffer: the previously rendered buffer.
    • GL.HasPreviousBuffer: a boolean telling whether GL.PreviousBuffer is available (if false, PreviousBuffer will render black).

Implementation

The gl-react-native and gl-react-dom implementations will have to use a new framebuffer collection that is preserved after each render. This collection is resolved by gl-react. There are two categories: "shared" fbos (shared between surface instances where the implementation supports it, e.g. iOS) and "preserved" fbos, which belong exclusively to each surface instance. Preserved fbos are identified by a user-provided id (the saveBuffer prop), so preservedFbos will be a Map / dictionary / plain JS object from that id to the fbo object.
We need a new internal format to describe an fbo: { type: "shared", id: 1 } and { type: "preserved", id: "foo" }.
Everywhere we used an fboId, it should adopt this new format (e.g. in texture types).
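The descriptor lookup described above could be sketched as (names hypothetical):

```javascript
// Resolve an fbo descriptor against the two collections described above:
// shared fbos (per GL context) and preserved fbos (per surface instance).
function resolveFbo(descriptor, sharedFbos, preservedFbos) {
  if (descriptor.type === "shared") return sharedFbos[descriptor.id];
  if (descriptor.type === "preserved") return preservedFbos[descriptor.id];
  throw new Error("unknown fbo descriptor type: " + descriptor.type);
}
```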

in draw()

in draw() method of implementations we need to do this:

  • save glcanvas' preservedFbos into previousPreservedFbos.
  • create a newPreservedFbos.
  • every time a uniform references a preserved fbo, we pick it from the previousPreservedFbos map.
  • every time a node uses saveBuffer, we still use the shared fboId to store the rendering, but at the end of that rendering we move that shared fboId into newPreservedFbos and create a new fbo under the shared fboId (this is the safest approach, but we should try to limit this recreation; I can think of a smarter way, but that might be premature optimization). – I need to rethink this; I'm not 100% sure it works or that it's the simplest way
  • set newPreservedFbos into glcanvas' preservedFbos for the next draw().

gl-react

there must be an invariant check that there are no duplicate saveBuffer ids in the tree.
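That invariant could be as simple as (sketch):

```javascript
// Throw if the same saveBuffer id appears more than once in the tree.
function checkNoDuplicateSaveBuffers(saveBufferIds) {
  const seen = new Set();
  for (const id of saveBufferIds) {
    if (seen.has(id)) {
      throw new Error("gl-react: duplicate saveBuffer id in tree: " + id);
    }
    seen.add(id);
  }
}
```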

This feature would have to be used carefully, because storing part of the tree costs one more framebuffer kept in memory (per view instance); but in some cases we may want to cache a complex rendering that rarely changes. It's a tradeoff between GPU computation and memory.


<GL.Node shader={...} saveBuffer="foo" cached={true || false} />

  • saveBuffer, because I think we need a unique id to locate the node if the tree changes a lot; this also makes it consistent with #25
  • cached is a boolean indicating that the lib should use a previously saved buffer.

also, a saved buffer can be used in uniforms:

<GL.Node shader={...}
  uniforms={{
    backbuffer: GL.fromBuffer("foo")
  }}
/>

GL.fromBuffer references a buffer saved with the saveBuffer prop in the previous rendering frame; it can be the node itself (allowing persistence effects – in that case, the implementation will need a trick: either switching between 2 fbos or copying one fbo into another).

Support for grabbing the output from GL

I'm trying to give users of my app a button to download the output of a gl canvas. I'm doing this by grabbing the canvas DOM element and calling canvas.toDataURL(). However, this results in a blank image (even though the canvas is not blank).

The reason for this, as well as two possible fixes, is explained in this SO question. Basically, either set preserveDrawingBuffer: true when you create the GL context, or grab the image using canvas.toDataURL() immediately after drawing, before control is returned to the browser.

It would be nice if there were either an option to set preserveDrawingBuffer (i.e. on GL.View, similar to the opaque option), or a way to tell gl-react to save a copy of the output somewhere after every render so that you can grab it later. Another possibility would be an optional callback invoked directly after every render, so that this can be done manually.
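In the meantime, the second fix from the SO answer can be wrapped as a small helper: run the draw and read the canvas back synchronously, before control returns to the browser (the `draw` callback here stands in for whatever triggers the GL render; this is a sketch, not a gl-react API):

```javascript
// Capture a canvas as a data URL immediately after drawing, so the
// drawing buffer hasn't been cleared yet. This works without
// preserveDrawingBuffer as long as draw() renders synchronously.
function drawAndCapture(canvas, draw, type = "image/png") {
  draw();
  return canvas.toDataURL(type);
}
```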

optimize performance of the gl-react-core search algorithm

childVDOMs() might be a current bottleneck (in the JS/React part; it's negligible compared with the GL part, which is much more intensive)

a simple way to optimize this is to compute the "children vdoms" once for each data node and expose it as an accessor,
because currently, as we go down the tree, we recompute everything again
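The proposed once-per-data-node computation is a classic memoized accessor; a sketch (where `computeChildVDOMs` stands in for the current recursive computation):

```javascript
// Compute the children vdoms at most once per data node, then serve
// the cached result from an accessor on subsequent tree walks.
function makeChildVDOMsAccessor(computeChildVDOMs) {
  const cache = new WeakMap(); // keyed by data node identity
  return function childVDOMs(dataNode) {
    if (!cache.has(dataNode)) cache.set(dataNode, computeChildVDOMs(dataNode));
    return cache.get(dataNode);
  };
}
```

A WeakMap keyed on the node object means the cache entry is dropped automatically when the data node itself is garbage-collected.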

Creating textures from arrays

I have a shader which I'm trying to adapt to work with gl-react, but I've run into a problem. My shader uses a texture effectively as a gaussian function lookup table. In my previous implementation, the gaussian function was calculated beforehand as an array of values which was then used to create a GL texture to be used in the shader.

However, the only way I can see of creating textures with gl-react is by specifying an image source path. Is there any way to use an array of precomputed values instead? Or any workaround?
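If gl-react only accepts image sources, one workaround is to bake the gaussian into a typed array yourself and upload it as a size×1 texture with raw WebGL. A sketch (the upload part is browser-only and commented out; whether such a texture can then be handed to gl-react depends on the implementation):

```javascript
// Precompute a gaussian lookup table over [0, 1] as bytes.
function gaussianTable(size, sigma) {
  const data = new Uint8Array(size);
  for (let i = 0; i < size; i++) {
    const x = i / (size - 1);
    data[i] = Math.round(255 * Math.exp(-(x * x) / (2 * sigma * sigma)));
  }
  return data;
}

// Browser-only: upload the table as a 256×1 LUMINANCE texture, then
// sample it in the shader with texture2D(lut, vec2(x, 0.5)).r
// const gl = canvas.getContext("webgl");
// const tex = gl.createTexture();
// gl.bindTexture(gl.TEXTURE_2D, tex);
// gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
// gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, 256, 1, 0,
//               gl.LUMINANCE, gl.UNSIGNED_BYTE, gaussianTable(256, 0.2));
```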
