
js-reactivity-benchmark's People

Contributors

alxhub, erjanmx, fabiospampinato, modderme123, nin-jin, pkozlowski-opensource


js-reactivity-benchmark's Issues

How to run the benchmark locally

Hi, I cloned the repository locally so that I can try out the benchmark on my machine, and I have a few questions:

  • What are the steps to install everything? Just npm install?
  • How do I apply the Angular patch? git apply ./patches/.....patch? Does it matter which working directory the command is run from?
  • How do I run the benchmark after installing? node ./src/index.ts? npx tsx ./src/index.ts? When I try, I only see "Assertion failed" messages in my console.
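
For reference, here is my best guess at the full sequence; the patch filename is a placeholder and none of this is verified against the repository:

cd js-reactivity-benchmark              # run everything from the repository root (assumption)
npm install                             # install dependencies
git apply ./patches/<patch-file>.patch  # placeholder name – the exact Angular patch file isn't listed here
npm run test                            # correctness checks via vitest (script shown in the issue below)
npm run bench                           # run the benchmark suite (script shown in the issue below)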

thx

Various possible improvements

While working on adding GC tests in #8, a few other improvement ideas came to mind, so I'll suggest them all here. I'd be happy to make PRs for them at some point if you'd like.

  • Isolated benchmarks: One thing I noticed when benchmarking my own experiments is that a memory leak or a heavy task scheduled during one benchmark would affect the results of the next benchmark and of the other libraries. It would be great if each benchmark case ran in its own isolated process and then returned its timings to the main process (see the sketch after this list). This is similar to how js-framework-benchmark behaves.

  • App benchmarks: Libraries with their own unique APIs can be forced onto non-optimal paths in order to conform to the reduced ReactiveFramework API. It would be good to see something like an implementation of js-framework-benchmark without the DOM parts, where libraries have the freedom to implement a ReactiveApp API however they see fit.

  • Reporting improvements:

    • Logs and assertions are collected and reported separately, to a log file
    • A new column indicating whether there were any assertion failures on a bench case
    • Crashes are reported in the table as a benchmark failure (isolated benchmarks would allow this)
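
To make the isolated-benchmarks idea concrete, here is a minimal sketch using Node's child_process.fork, assuming a hypothetical benchWorker.js module that runs a single case and posts its timing back over IPC; the module path, message shape, and function names are illustrative, not part of the repository:

// Sketch: run each benchmark case in a forked child process and collect timings over IPC.
import { fork } from "node:child_process";

interface BenchResult {
  framework: string;
  test: string;
  timeMs: number;
}

function runIsolated(framework: string, test: string): Promise<BenchResult> {
  return new Promise((resolve, reject) => {
    // "./benchWorker.js" is hypothetical: it runs one case and calls process.send(result).
    const child = fork("./benchWorker.js", [framework, test]);
    child.on("message", (result) => resolve(result as BenchResult));
    child.on("error", reject);
    child.on("exit", (code) => {
      if (code !== 0) reject(new Error(`${framework} | ${test} crashed (exit code ${code})`));
    });
  });
}

// Run cases one at a time so GC pressure or leaks in one case cannot skew the next.
async function runAll(cases: Array<[framework: string, test: string]>): Promise<void> {
  for (const [framework, test] of cases) {
    try {
      const { timeMs } = await runIsolated(framework, test);
      console.log(`${framework} | ${test}: ${timeMs.toFixed(2)} ms`);
    } catch (err) {
      console.log(`${framework} | ${test}: FAILED (${(err as Error).message})`);
    }
  }
}

A crash in the child then surfaces as a rejected promise rather than taking down the whole run, which is what would make reporting crashes as a benchmark failure possible.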

Unable to execute the benchmark with NPM

I can't seem to execute the benchmark: neither the test nor the bench script works for me after a clean clone and an npm install.

~/Code/fabiospampinato/js-reactivity-benchmark ❯ npm run test               

> [email protected] test
> vitest


 DEV  v0.29.7 /Users/fabio/Code/fabiospampinato/js-reactivity-benchmark

 ❯ src/frameworks.test.ts (40)
   ✓ @angular/signals | simple dependency executes
   × @angular/signals | static graph
   × @angular/signals | static graph, read 2/3 of leaves
   × @angular/signals | dynamic graph
   ✓ Compostate | simple dependency executes
   ✓ Compostate | static graph
   ✓ Compostate | static graph, read 2/3 of leaves
   ✓ Compostate | dynamic graph
   ✓ $mol_wire_atom | simple dependency executes
   ✓ $mol_wire_atom | static graph
   ✓ $mol_wire_atom | static graph, read 2/3 of leaves
   ✓ $mol_wire_atom | dynamic graph
   ✓ Preact Signals | simple dependency executes
   ✓ Preact Signals | static graph
   ✓ Preact Signals | static graph, read 2/3 of leaves
   ✓ Preact Signals | dynamic graph
   ✓ @reactively | simple dependency executes
   ✓ @reactively | static graph
   ✓ @reactively | static graph, read 2/3 of leaves
   ✓ @reactively | dynamic graph
   ✓ s-js | simple dependency executes
   ✓ s-js | static graph
   ✓ s-js | static graph, read 2/3 of leaves
   ✓ s-js | dynamic graph
   ✓ SolidJS | simple dependency executes
   ✓ SolidJS | static graph
   ✓ SolidJS | static graph, read 2/3 of leaves
   ✓ SolidJS | dynamic graph
   ✓ uSignal | simple dependency executes
   ✓ uSignal | static graph
   ✓ uSignal | static graph, read 2/3 of leaves
   ✓ uSignal | dynamic graph
   ✓ Vue | simple dependency executes
   ✓ Vue | static graph
   ✓ Vue | static graph, read 2/3 of leaves
   ✓ Vue | dynamic graph
   ✓ x-reactivity | simple dependency executes
   ✓ x-reactivity | static graph
   ✓ x-reactivity | static graph, read 2/3 of leaves
   ✓ x-reactivity | dynamic graph

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ Failed Tests 3 ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

 FAIL  src/frameworks.test.ts > @angular/signals | static graph
TypeError: runWatchQueue is not a function
 ❯ Object.withBatch src/frameworks/angularSignals.ts:22:5
     20|   withBatch: (fn) => {
     21|     fn();
     22|     runWatchQueue();
       |     ^
     23|   },
     24|   withBuild: (fn) => fn(),
 ❯ Module.runGraph src/util/dependencyGraph.ts:64:15
 ❯ src/frameworks.test.ts:38:17

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[1/3]⎯

 FAIL  src/frameworks.test.ts > @angular/signals | static graph, read 2/3 of leaves
TypeError: runWatchQueue is not a function
 ❯ Object.withBatch src/frameworks/angularSignals.ts:22:5
     20|   withBatch: (fn) => {
     21|     fn();
     22|     runWatchQueue();
       |     ^
     23|   },
     24|   withBuild: (fn) => fn(),
 ❯ Module.runGraph src/util/dependencyGraph.ts:64:15
 ❯ src/frameworks.test.ts:48:17

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[2/3]⎯

 FAIL  src/frameworks.test.ts > @angular/signals | dynamic graph
TypeError: runWatchQueue is not a function
 ❯ Object.withBatch src/frameworks/angularSignals.ts:22:5
     20|   withBatch: (fn) => {
     21|     fn();
     22|     runWatchQueue();
       |     ^
     23|   },
     24|   withBuild: (fn) => fn(),
 ❯ Module.runGraph src/util/dependencyGraph.ts:64:15
 ❯ src/frameworks.test.ts:62:17

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[3/3]⎯

 Test Files  1 failed (1)
      Tests  3 failed | 37 passed (40)
   Start at  21:36:43
   Duration  333ms (transform 88ms, setup 0ms, collect 147ms, tests 16ms)


 FAIL  Tests failed. Watching for file changes...
       press h to show help, press q to quit
~/Code/fabiospampinato/js-reactivity-benchmark ❯ npm run bench

> [email protected] bench
> esbuild src/index.ts --external:v8-natives --bundle --format=cjs --platform=node | node --allow-natives-syntax

✘ [ERROR] No matching export in "node_modules/@angular/core/fesm2015/core.mjs" for import "runWatchQueue"

    src/frameworks/angularSignals.ts:2:35:
      2 │ import { signal, computed, effect, runWatchQueue } from "@angular/core";
        ╵                                    ~~~~~~~~~~~~~

1 error
~/Code/fabiospampinato/js-reactivity-benchmark ❯ 

Am I doing something wrong?

Incorrect use of v8 intrinsics

v8.optimizeFunctionOnNextCall(iter);

optimizeFunctionOnNextCall optimises the function on the next call, but in this case we have never called it before, so there is no type feedback. The generated code is likely to be suboptimal and may then trigger a deopt.

The correct way to use this intrinsic is to call the function a couple of times (which triggers feedback collection) and then force compilation with the intrinsic.

Here is an example from the v8 testsuite:


function f(x, y) { return x + y; }

%PrepareFunctionForOptimization(f); // tells v8 to collect feedback
assertEquals(1, f(0, 1));  // feedback is collected
assertEquals(5, f(2, 3)); // feedback is collected
%OptimizeFunctionOnNextCall(f); // tells v8 to optimise based on the collected feedback
assertEquals(9, f(4, 5)); // compilation with type feedback happens
assertOptimized(f);

Separately, I'm not entirely sure why the benchmark wrapper itself needs to be optimised; ideally it is the signal library that should be compiled. I would just remove the intrinsics and call the benchmark function in a small warm-up loop, which lets the engine collect feedback and optimise everything the benchmark calls.
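
A minimal sketch of that warm-up approach, assuming a bench iteration function iter; the names, the warm-up count, and the use of performance.now are illustrative, not the benchmark's actual code:

// Warm up instead of using v8 intrinsics: a few plain calls let the engine collect type
// feedback and tier up on its own before the measured run.
function benchWithWarmup(iter: () => void, warmupRuns = 10): number {
  for (let i = 0; i < warmupRuns; i++) {
    iter(); // warm-up calls: feedback collection and JIT tiering happen naturally
  }
  const start = performance.now();
  iter(); // measured run, after the engine has had a chance to optimise iter and its callees
  return performance.now() - start;
}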
