gpujs / gpu.js
GPU Accelerated JavaScript
Home Page: https://gpu.rocks
License: MIT License
What if, along with the function inputs, we could pass a string containing a WebGL shader that has access to all the inputs? That would make it more reliable, since the compiler is not perfect yet.
Just a suggestion
A website like html5rocks.com but for parallel programming. Such easily accessible educational material would be novel for this field.
We want to investigate how to implement the int type. This would mean emitting different code depending on which mode the compiler runs in.
Use the Web Worker API to run the kernel on CPU cores.
Challenges:
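A sketch of one piece of this, assuming a hypothetical chunkRanges helper (not part of gpu.js): partition the kernel's output range so each worker computes one slice.

```javascript
// Hypothetical helper: split a kernel's output range into near-equal
// chunks, one per Web Worker / CPU core.
function chunkRanges(length, workers) {
  const chunks = [];
  const base = Math.floor(length / workers);
  let start = 0;
  for (let i = 0; i < workers; i++) {
    // Spread the remainder over the first (length % workers) chunks.
    const size = base + (i < length % workers ? 1 : 0);
    chunks.push({ start: start, end: start + size }); // end is exclusive
    start += size;
  }
  return chunks;
}

// Each worker would then run the kernel body only for indices in its
// range (e.g. worker.postMessage({ start, end, args })) and write its
// slice into a shared output buffer.
console.log(chunkRanges(10, 3)); // ranges covering 0..4, 4..7, 7..10
```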
Some basic compile-time optimisations would be nice.
Already implemented:
Possible optimisations:
decode32(encode32(x)) can easily be replaced with x
if (x == 1) y += 1; can become y += x * 1; (branchless; only valid when x is known to be 0 or 1)
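As an illustration, a toy peephole pass for the first rewrite could look like this (string-based for brevity; a real pass would operate on the AST):

```javascript
// Toy peephole optimisation: fold decode32(encode32(x)) down to x.
// String-level only, as a sketch; the real compiler would match the
// pattern on AST nodes, not source text.
function peephole(src) {
  return src.replace(/decode32\(encode32\(([A-Za-z_$][\w$]*)\)\)/g, '$1');
}

console.log(peephole('y = decode32(encode32(x)) + 1;')); // "y = x + 1;"
```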
Currently all APIs are blocking and will freeze the main thread if the kernel is computationally expensive.
Challenges:
gpu.js ?
@fuzzie360: Frankly speaking, Steam chat is kind of bad; sometimes messages go into a black hole when my computer auto-updates, lol.
As reported by @staceytay in #31
Hello. How can I pass JS functions through (i.e. so they can be universal) in gpu.js?
Also, how can I use variants of the grid values?
I want to make a graphical color picker for HSV, HSL, and HCG.
Currently argument arrays are read-only. However, this limits the kinds of operations that are possible.
For instance, it is not possible to define FFT-like algorithms, as you have to map to the array output, necessitating an N^2 operation.
That is, it is not possible to separate the size and shape of the kernel from the size and shape of the output/modified array.
Would it be possible to support this mode of operation? What would be needed in gpu_core.js to achieve this?
Vectors are very useful for graphical computations, so it would be very useful to support vec variables, function arguments, and function returns.
When it's implemented it could look like:
function kernel() {
  var v = this.vec2(1, 2);
  var mag = Math.sqrt(v.x * v.x + v.y * v.y);
}
The "this" context variable can provide the vec2 function in fallback mode, just like this.thread.x etc.
Challenges:
/*vec3*/ var v = ...
which might look very ugly.
We want to support full type inference for all variables and function parameters. To do that, however, we need to ensure that userland functions obey the language rules of GPU.js so that the type inference can work.
Types we need to support:
float
vec2/3/4
array of
Types we plan to support:
int
object
Conclusion of #24
Hey there guys,
I'm currently working on shifting big calculations from the CPU to the GPU using GPU.js to increase the performance of my app.
I tried the following:
I created a GPU kernel which should multiply each element of ArrayA with the corresponding item of ArrayB:
var gpuMultiplication = gpu.createKernel(function (ArrayA, ArrayB) {
  return ArrayA[this.thread.x] * ArrayB[this.thread.x];
}).dimensions([50]);
This code works just fine, but only when I'm calling the function with newly created arrays as the parameters, e.g. gpuMultiplication([1, 2, 3, 4, ...], [..., 4, 3, 2, 1]);
Once I try to pass arrays as variables into the function (gpuMultiplication(ArrayX, ArrayY);), gpu.js crashes with the following exception:
gpu.js:44 An error occurred compiling the shaders:
ERROR: 0:135: 'user_ArrayA' : undeclared identifier
ERROR: 0:135: 'user_ArrayASize' : undeclared identifier
ERROR: 0:135: 'user_ArrayASize' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayASize' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayADim' : undeclared identifier
ERROR: 0:135: 'user_ArrayADim' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayADim' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayADim' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'get' : no matching overloaded function found
ERROR: 0:135: 'user_ArrayB' : undeclared identifier
ERROR: 0:135: 'user_ArrayBSize' : undeclared identifier
ERROR: 0:135: 'user_ArrayBSize' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayBSize' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayBDim' : undeclared identifier
ERROR: 0:135: 'user_ArrayBDim' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayBDim' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'user_ArrayBDim' : left of '[' is not of type array, matrix, or vector
ERROR: 0:135: 'get' : no matching overloaded function found
Both Arrays are initialized as Float32Array(50) and filled with random values. (Values are only numeric and without any commas)
Am I doing something wrong? Or is it not possible to input arrays like that?
I created my code according to the following example from your website: http://gpu.rocks/examples/
Regards,
Michael
Kudos to @staceytay for doing up a ray tracer demo
http://staceytay.com/raytracer/
Sadly for the GPU.js team, it also shows that we may have a canvas-related bug in rendering mode (floating-point accuracy?), specifically for Lambertian reflectance.
Maybe it's the demo's code, or our code (most probably ours), but yeah =(
@todo: investigate and fix
To replicate, switch between CPU and GPU with Lambertian reflectance on, and pay close attention to the shadow shading.
We want to support fixed-size arrays. To ensure that, we check for modifications to arrays beyond initialization. Not very JS-like, but we need to either initialise or annotate the array var declaration for the type inference to work.
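As an illustration of the check (a sketch only, not gpu.js's actual mechanism), CPU fallback mode could wrap arrays in a Proxy that rejects writes beyond the initialised bounds:

```javascript
// Sketch: freeze an array's size after initialisation by rejecting
// writes past the initial bounds (and to `length` itself).
function fixedSize(arr) {
  const size = arr.length;
  return new Proxy(arr, {
    set(target, prop, value) {
      if (prop === 'length') {
        throw new RangeError('fixed-size array: cannot resize');
      }
      const idx = typeof prop === 'string' ? Number(prop) : NaN;
      if (Number.isInteger(idx) && (idx < 0 || idx >= size)) {
        throw new RangeError('fixed-size array: index ' + prop + ' out of bounds');
      }
      target[prop] = value;
      return true;
    }
  });
}

const a = fixedSize([1, 2, 3]);
a[1] = 9;    // fine: in bounds
// a[3] = 4; // would throw RangeError: out of bounds
```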
I was curious how decimal precision is obtained in gpu.js: 64, 32, 24 bits, or less? Can we have that put into the documentation as well?
Because seriously, for small data sets the CPU is just better.
We need to "objectively" show the speed-up as the data set gets larger (lol).
This would be useful for image processing such as convolution filters. It might require #7 to store RGBA, but it's possible to use partial support before vec types are implemented, as it's a very specific case for output.
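To illustrate the workload, here is the per-pixel arithmetic such a convolution kernel would perform, written as a plain-JS helper (hypothetical, with clamped borders) rather than an actual gpu.js kernel; each gpu.js thread would run this body for its own (this.thread.x, this.thread.y):

```javascript
// 3x3 convolution on a flat grayscale image: the inner arithmetic is
// the work each thread would do for its (x, y) coordinate.
function convolvePixel(img, width, height, weights, x, y) {
  let sum = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      // Clamp sample coordinates at the image border.
      const sx = Math.min(width - 1, Math.max(0, x + dx));
      const sy = Math.min(height - 1, Math.max(0, y + dy));
      sum += img[sy * width + sx] * weights[(dy + 1) * 3 + (dx + 1)];
    }
  }
  return sum;
}

// The identity kernel leaves the pixel untouched:
const identity = [0, 0, 0, 0, 1, 0, 0, 0, 0];
console.log(convolvePixel([1, 2, 3, 4], 2, 2, identity, 1, 0)); // 2
```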
=== is the recommended way to compare values in JavaScript, but it throws a compiler error in, for example:
var y = gpu.createKernel(function() {
  if (1 === 1) {
  }
});
We should strive for human-friendly compiler errors like the ones produced by Clang/LLVM.
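As a sketch of what a friendlier message could look like (hypothetical helper, not gpu.js's current error path), the codegen could special-case known-unsupported operators like the === above and suggest a fix:

```javascript
// Sketch: turn an unsupported-token failure into an actionable message.
function unsupportedOperator(op, line) {
  if (op === '===' || op === '!==') {
    return 'line ' + line + ": '" + op + "' is not supported in kernel code; " +
           "use '" + op.slice(0, 2) + "' instead (GLSL has no strict equality)";
  }
  return 'line ' + line + ": unsupported operator '" + op + "'";
}

console.log(unsupportedOperator('===', 2));
```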
A long-term practical application that may have commercial usage.
Have you ever considered adding GPU.JS to NPM?
An error occurred compiling the shaders:
ERROR: 0:135: 'constants' : undeclared identifier
ERROR: 0:135: 'constants_pixelMultiplier' : undeclared identifier
ERROR: 0:135: 'constants_scale' : undeclared identifier
ERROR: 0:135: '' : methods supported in GLSL ES 3.00 and above only
ERROR: 0:135: 'func' : invalid method
ERROR: 0:135: 'assign' : cannot convert from 'const int' to 'highp float'
Here is my code.
let gpu = new GPU();
function generateArrayWithFunction(w, h, scale, f) {
  /*let output = [];
  for (let x = 0; x < w * pixelMultiplier; x++) {
    output.push([]);
    for (let y = 0; y < h * pixelMultiplier; y++) {
      output[x][y] = f(x / pixelMultiplier * scale, y / pixelMultiplier * scale, 0);
    }
  }
  return output;*/
  let g = gpu.createKernel(function() {
    return this.constants.func(this.thread.x / this.constants.pixelMultiplier * this.constants.scale, this.thread.y / this.constants.pixelMultiplier * this.constants.scale, 0);
  }).dimensions({dimensions: [w, h], constants: {func: f, pixelMultiplier: pixelMultiplier, scale: scale}});
  return g();
}
This: http://gpu.rocks/getting-started/
is over 9000 times better than
this: http://gpujs.github.io/dev-docs/
The problem is that the former is manually written while the latter is generated.
And so far, every JS documentation engine I've used has quirks or something off about it. NaturalDocs, for all its old-fashioned looks, works.
Basically we need to replace this or fix it.
We want to support calling functions declared in the environment outside the kernel. This means we will use reflection to import the function from the environment when we detect a call in the code generator, and compile that function automatically.
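A minimal sketch of the reflection step, assuming a hypothetical importFunction helper: grab the source with Function.prototype.toString() and scan it for called identifiers, so those functions can be pulled in and compiled as well.

```javascript
// Sketch: reflect on a JS function to get its source and the names of
// the functions it calls (string scan for brevity; a real implementation
// would walk the parsed AST).
function importFunction(fn) {
  const src = fn.toString();
  const calls = new Set();
  const re = /([A-Za-z_$][\w$]*)\s*\(/g;
  let m;
  while ((m = re.exec(src)) !== null) {
    calls.add(m[1]);
  }
  calls.delete(fn.name); // drop the match from the declaration itself
  return { source: src, calls: Array.from(calls) };
}

function half(x) { return x / 2; }
function demo(x) { return half(x) + half(x); }
console.log(importFunction(demo).calls); // [ 'half' ]
```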
Hi,
When trying to execute a GPU function on a large array (1920x1080x3, which represents a 1920x1080 picture with 3 color components), it fails with the following error:
"RangeError: too many arguments provided for a function call" on Firefox
"RangeError: Maximum call stack size exceeded" on Chrome
See the attached example, which has a no-op function to exhibit the issue: https://drive.google.com/file/d/0BxAjgoe3PaRwaTFHMUsxWG1xV2c/view?usp=sharing
It works on smaller arrays, but fails on large ones.
I'm not sure how common such big arrays are in JavaScript (I feel like Firefox and Chrome have issues handling them anyway), but arrays like this don't sound that big to me, particularly for taking full advantage of GPU operations by working on a big data set.
In the demo, coloring a canvas is done using this.color(r, g, b, a) for each individual pixel.
First, something like var i = 0, and then doing this.color(0, i, 0, 1); i++; in the kernel function, gives errors.
Is the only way to style a pixel with fixed numeric values? Can we use variables, or equate pixels to other pixels from the browser, for example?
The docs say that the outputToTexture flag can be used to get a Texture object instead of an array, and that it can be fed as an input to a new kernel function to avoid the round-trip penalty between kernels. Is there an example of two kernels working this way?
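I can't point to a runnable gpu.js example, but the shape of the pipeline can be sketched in plain JS, with a Float32Array standing in for the Texture object (in real gpu.js, the first kernel would be created with outputToTexture enabled):

```javascript
// Stand-in for the texture round trip: kernelA produces a "texture"
// (Float32Array) that kernelB consumes directly; conversion back to a
// normal Array happens only once, at the end of the pipeline.
function kernelA(n) {                 // imagine: { outputToTexture: true }
  const tex = new Float32Array(n);
  for (let i = 0; i < n; i++) tex[i] = i * i;
  return tex;                         // stays in its "on-GPU" form
}

function kernelB(tex) {
  const out = new Float32Array(tex.length);
  for (let i = 0; i < tex.length; i++) out[i] = tex[i] + 1;
  return out;
}

const result = Array.from(kernelB(kernelA(4))); // single readback
console.log(result); // [ 1, 2, 5, 10 ]
```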
Use Travis CI? Or my own personal Jenkins slave?
Support for multiple functions, allowing nested calls, etc.
So we can make changes and do regression testing more easily.
@Drulac would be interested. If we get for loops with variables done =P
See: https://gitter.im/gpujs/gpu.js muhaha
When I try to run on safari I get:
WebGL: INVALID_ENUM: readPixels: invalid type
Ever since we fixed computations on Intel and AMD GPUs, calculations are not as fast as they could be. I'm going to explore how to improve/optimize this.
While I have yet to test it,
I managed to reduce the whole polyfill down to 884 bytes:
https://github.com/picoded/small_promise.js/blob/master/bin/small_promise.polyfill.min.js
So we could actually squeeze it in.
for example:
var y = gpu.createKernel(function() {
  for (var i = 0; i < 10; i++) {
    break;
  }
}, opt);
It seems the ast_BreakStatement and ast_ContinueStatement functions are not implemented in functionnode_webgl.js
FOR MORE JS POWER!
Is it wise? I'm not sure how the concurrent access will impact our performance.
Are there any other JS language features that we want to support?
Referencing an old (commented-out) failing test case inside if_else.js:
function booleanBranch( mode ) {
  var f = GPU(function() {
    var ret = 0.0;
    if (true) {
      ret = 4.0;
    } else {
      ret = 2.0;
    }
    return ret;
  }, {
    thread : [1],
    block : [1],
    mode : mode
  });
  QUnit.ok( f !== null, "function generated test");
  QUnit.close(f(), 42.0, 0.01, "basic return function test");
}
QUnit.test( "booleanBranch (auto)", function() {
  booleanBranch(null);
});
QUnit.test( "booleanBranch (GPU)", function() {
  booleanBranch("gpu");
});
QUnit.test( "booleanBranch (CPU)", function() {
  booleanBranch("cpu");
});
This currently fails, presumably because our entire system expects an argument for data input.
Soooo..... should this be fixed?
To replicate: just comment those test cases back in; they're inside test/src/features/if_else.js
Two options:
1. Prefix all the functions with a "user_" voodoo prefix.
2. Have a blacklist of function names, and throw an exception on the GPU.addFunction call if the function name is illegal.
Personally I vote for 2. It makes the shader code have less voodoo.
Generally, the methodology behind it can be found here
Where this...
if (x == 0) {
  y += 5;
}
Becomes the more efficient...
y += 5 * when_eq(x, 0);
However, is this really needed? Or does WebGL already do this internally?
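For reference, the branchless comparison can be built from sign and abs; here is a JS sketch (a GLSL version would use the built-in sign() and abs() the same way):

```javascript
// Branchless equality: returns 1 when x == y, else 0.
// sign(x - y) is 0 only when x == y; abs folds -1 and 1 together.
function when_eq(x, y) {
  return 1 - Math.abs(Math.sign(x - y));
}

// So `if (x == 0) y += 5;` becomes:
let x = 0, y = 0;
y += 5 * when_eq(x, 0);
console.log(y); // 5
```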
This project needs to adopt semver 2.0.0. So:
Let's do a quick once over the following advertised functionality:
Basic
Settings
Syntax
Platforms:
We should implement function-level scope for all variables, for more JS-like scoping. What this means is that in codegen we emit all the variable declarations at the top of the function first. This could also mean that we need to decompose all for loops into while loops, because:
function func() {
  for (var a = 0...) {}
  for (var a = 0...) {}
}
is now invalid code.
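To illustrate, the hoisted output for the example above could look like this sketch (declarations emitted once at the top, for loops decomposed into while loops):

```javascript
// What the generated code could look like after the transform: a single
// hoisted declaration, with each for-loop rewritten as init + while.
function func() {
  var a;          // hoisted once, shared by both loops
  var total = 0;

  a = 0;
  while (a < 3) { total += 1; a += 1; }

  a = 0;          // the second loop reuses the same variable
  while (a < 3) { total += 1; a += 1; }

  return total;
}

console.log(func()); // 6
```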
Node.js can use this at the server side: https://github.com/unbornchikken/NOOOCL. I really like the transpiler idea, but I want a server-side runtime. Is it possible? Is there an example? An async API?
Note: yes, I understand that OpenCL != WebGL.