
tracerbench / tracerbench


Automated Chrome tracing for benchmarking.

License: BSD 2-Clause "Simplified" License

JavaScript 2.86% TypeScript 60.45% R 2.17% Batchfile 0.01% Handlebars 1.87% CSS 32.60% Shell 0.04%
Topics: benchmark, chrome


tracerbench's People

Contributors

chadhietala, chancancode, chriseppstein, chriskrycho, dependabot-preview[bot], dependabot[bot], efx, gitstevenpham, krisselden, lifeart, lynchbomb, makepanic, mbutki, melmerp, mike-north, mixonic, pzuraq, romex91, rondale-sc, runspired, rwjblue, scalvert, stefanpenner, tomdale, tsubomii


tracerbench's Issues

How to use compare for raw .cpuprofile and trace.json?

Hello,
Is it possible to use your compare tool on an already recorded series of .cpuprofile and tracing profiles?

Is there a way to convert the raw profile data into your Trace-Results data models and run compare over it?

Is it possible to use tracerbench as a trace processor only?

question: what is the format of the cookies file?

I am attempting to use tracerbench record-har but run into an error:

 tracerbench record-har --url http://localhost:8080 --cookiespath scripts/perf/cookies.json
[0106/143338.035276:WARNING:dns_config_service_posix.cc(184)] dns_config has unhandled options!
[0106/143338.105746:WARNING:dns_config_service_posix.cc(184)] dns_config has unhandled options!
[0106/143338.106824:WARNING:address_sorter_posix.cc(388)] FromSockAddr failed on netmask
[0106/143338.106903:WARNING:address_sorter_posix.cc(388)] FromSockAddr failed on netmask
Error: Network Request could not be captured. TypeError: Cannot read property 'length' of undefined
    at Object.recordHARClient (~/.volta/tools/image/packages/tracerbench/3.0.11/node_modules/@tracerbench/core/dist/trace/archive_trace.js:80:15)

What is the expected format of the cookies.json file? I'm using dummy contents since my application does not use cookies for authentication:

{
	"nothing": "foo"
}

diagnostics

I installed tracerbench with volta.

 tracerbench --version && node --version
tracerbench/3.0.11 darwin-x64 node-v12.14.0
v12.13.0

Slightly related to #96; it would be nice to not have to specify cookies.
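For what it's worth, the spike code later on this page filters cookies down to name/value/domain before passing them to Network.setCookie, so a plausible (unverified) shape for cookies.json is an array of cookie objects mirroring the DevTools Network.setCookie parameters:

```json
[
  { "name": "session", "value": "dummy", "domain": "localhost" }
]
```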

Compare cmd callback tick

Add a callback timestamp tick to compare for the total duration of the test, with as little added noise and overhead as possible.

[enhancement] how do I specify custom HTTP headers?

3 cheers for low variance!

As a developer I would like to use tracerbench to isolate a performance issue in an Ember application. The application is authenticated and requires custom headers to authenticate against the API server. Lighthouse provides the --extra-headers flag, which lets me sample my application without changing any of its functionality. I would like similar functionality in tracerbench.

what I bring to the open source table

  • no TS skills 😦
  • beginner Rust, but this is the cli not the server 😄
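For reference, Lighthouse's --extra-headers flag accepts a JSON map of header names to values; a similar shape could work here (the tracerbench flag below is hypothetical and does not exist yet):

```
# Lighthouse's existing flag, with inline JSON:
lighthouse https://myapp.example --extra-headers '{"Authorization": "Bearer <token>"}'

# A hypothetical tracerbench equivalent:
# tracerbench record-har --url http://localhost:8080 --extra-headers ./headers.json
```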

Could not find network emulation

When trying to use tracerbench profile with the --network option, I'm getting the following error (after calling tracerbench record-har --url=https://www.tracerbench.com --headless):

tracerbench profile ./tracerbench.har --network=3g
√ Reading cookies json
√ Preparing results folder
√ Validating & Cloning HAR file
× Recording the live trace
→ Error: Live Trace could not be captured. Error: Could not find network emulation "[object Object]"
Setting trace events
» Error: Error: Error: Live Trace could not be captured. Error: Could not find network emulation "[object Object]"

Sometimes it just keeps spinning. I find it a bit odd, since I think conditions.network should probably be just a string, so that the lookup

const networkCondition = networkConditions[conditions.network];

works?
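A minimal sketch of the suspected failure mode (names are illustrative, not TracerBench's actual internals): when an object rather than a string key reaches the lookup, property access coerces it to "[object Object]" and the lookup comes back undefined.

```javascript
// Illustrative repro: indexing a plain object with a non-string key
// coerces the key via toString().
const networkConditions = {
  '3g': { latencyMs: 300 },
};

const conditions = { network: { rttMs: 300 } }; // an object, not the key '3g'
const found = networkConditions[conditions.network];

console.log(String(conditions.network)); // "[object Object]"
console.log(found);                      // undefined
```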

Could not find Paint event in trace after last mark

This issue is not regular, but it appears quite often on the same HAR file when we have a lot of samples, and it causes the whole compare run to fail.


I'm wondering: is there a way to enforce a paint event inside tracerbench, or to ignore this assert?

Or to rerun the exact experiment, because it's quite painful to catch this event on attempt №99 (of 100).

It looks like the closest logic to this landed in #200, especially https://github.com/TracerBench/tracerbench/pull/200/files#diff-0f54ca8abebf7489fe5914256d43426b584a7cbca147eeb2bb25c506c62a4299R41

In theory, we could enforce a Paint event, like Modernizr does: https://github.com/Modernizr/Modernizr/blob/master/test/browser/src/mq.js#L29

function enforcePaintEvent() {
    const docElem = document.documentElement;
    const refNode = docElem.firstElementChild || docElem.firstChild;
    const fakeBody = document.createElement('body');
    const div = document.createElement('div');

    div.id = 'mq-test-1';
    div.style.cssText = 'position:absolute;top:-100em';
    fakeBody.style.background = 'none';
    fakeBody.appendChild(div);
    div.innerHTML = '&shy;<style> #mq-test-1 { width: 42px; }</style>';
    docElem.insertBefore(fakeBody, refNode);

    try {
        return div.offsetWidth === 42;
    } finally {
        fakeBody.removeChild(div);
        docElem.removeChild(fakeBody);
    }
}

requestAnimationFrame(() => {
    enforcePaintEvent();
    requestIdleCallback(() => {
        __tracerbench.stop();
    });
});

Profile Workflow from documentation fails with error

The documentation example says to try

tracerbench record-har --url=https://www.tracerbench.com --headless && tracerbench profile ./tracerbench.har

but on Windows (tracerbench/3.1.1 win32-x64 node-v12.16.3, installed globally through npm) it seems that the command tracerbench record-har --url=https://www.tracerbench.com --headless requires a package.json to be present?

The output is

tracerbench record-har --url=https://www.tracerbench.com --headless
Error: ENOENT: no such file or directory, open 'package.json'
Code: ENOENT

I did not see any mention of this in the API reference either.

Is the package.json file mandatory? If so, what is it used for?

Generating a HAR file

I am creating a HAR file with tracerbench create-archive --url https://www.google.com/, but I noticed that the HAR file doesn't have as much information as when I download a HAR file manually from a browser. Is there a way to get more info using tracerbench create-archive?

unhashed phase name can lead to invalid report output

Hi,

I noticed that marker labels are used directly in the generated JavaScript.

const canvasFor{{#toCamel analysisForPhase.identifierHash}}{{/toCamel}} = document.getElementById('{{analysisForPhase.identifierHash}}-chart').getContext('2d');

This means a label containing dots or other special characters will generate a broken report page.
E.g., a label random.words will generate the following JavaScript:

 const canvasForrandom.words = document.getElementById('random.words-chart').getContext('2d');

It seems like the value used is intended to be something hash-like.

Would you be OK if I open a PR that hashes the label to generate an id?
For example:

identifierHash = `id${md5sum(label)}`

Alternatively, one could also normalize the label when setting it as identifierHash, or create a new helper that does the normalization.

tracerbench compare --report

When running the tracerbench compare command with the --report boolean flag, the stdout message still reads "To generate a pdf report run "tracerbench report"".

The command should detect when it is run with the --report flag and update the messaging to be context aware.

tracerbench compare --config

When running the tracerbench compare command with both the --config PATH and --report flags, the stdout message reads "A TracerBench config undefined was not found. The config file is not required, however is strongly recommended".

It appears the config is still picked up by TracerBench, which is correctly configured with the passed tbconfig.json file. More than likely this is simply a stdout messaging issue, which again is only reproducible with the --report flag.

Firefox support?

Find a smoother way to handle HierarchyNode nodes without children

The HierarchyNode class has its children property typed as either an array or undefined. I'd rather it always be defined (but possibly empty) to make it easier to work with. We should either find a good way to force it to be defined and have that be the TS type used, or explore using helper functions that take care of the undefined check automatically when handling nodes.

Below is Mike North's suggestion for the second idea:

Rather than using the non-null assertion operator all over, you could write a few array helper functions that are more tolerant of undefined values.

Instead of children!.find(...)

You'd end up doing find(children, ...)

This is the approach TS takes internally to allow handling of optional values without potentially unsafe casting or defensive programming.

Originally posted by @mike-north in #24 (comment)
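A minimal sketch of the helper-function approach (names illustrative, not TracerBench's actual API): instead of children!.find(...), the helper itself tolerates undefined children.

```javascript
// Helper that absorbs the undefined check so call sites need no assertions.
function find(items, predicate) {
  return items === undefined ? undefined : items.find(predicate);
}

const leaf = { name: 'leaf' };                       // children is undefined
const parent = { name: 'parent', children: [leaf] };

console.log(find(parent.children, n => n.name === 'leaf') === leaf); // true
console.log(find(leaf.children, () => true));                        // undefined
```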

[enhancement] improved compare cli output

Motivation

When running larger test suites, the CLI output can be very verbose:

Finished 1 of 50 runs.
Finished 2 of 50 runs.
Finished 3 of 50 runs.
Finished 4 of 50 runs.
Finished 5 of 50 runs.
...
Finished 33 of 50 runs.
...

If we could consolidate the output to the following, it would be a lot easier to gauge the status of the runs:

Running 1 of 50 (~15 minutes 20 seconds remaining)

duration phase estimated difference -247ms [-292ms to -201ms]
load phase no difference
boot phase estimated difference -191ms [-199ms to -183ms]
transition phase estimated difference -38ms [-57ms to -18ms]
render phase estimated difference -18ms [-33ms to -3ms]
paint phase no difference

The above output would be rendered using something like https://github.com/vadimdemedes/ink to allow the display to update in place based on the current run.
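A minimal sketch of the single-line status message (the format is copied from the example above; the helper name and ETA math are illustrative):

```javascript
// Build a one-line status string; a library like ink (or a plain carriage
// return) could re-render this in place instead of appending a line per run.
function renderProgress(current, total, etaSeconds) {
  const minutes = Math.floor(etaSeconds / 60);
  const seconds = etaSeconds % 60;
  return `Running ${current} of ${total} (~${minutes} minutes ${seconds} seconds remaining)`;
}

// Overwrite the same terminal line on each update via '\r'.
process.stdout.write(`${renderProgress(1, 50, 920)}\r`);
```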

tracerbench report

Often in the tracerbench PDF report, when the confidence interval is [0, 1] the result is flagged as 0 rather than (no difference), whereas when the confidence interval is [0, <0.5] the result is flagged as (no difference).

The expected result would be (no difference) in both scenarios.
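A minimal sketch of the expected classification, assuming any confidence interval that contains 0 should report (no difference) regardless of its width (the function name is illustrative):

```javascript
// An interval [low, high] that straddles or touches 0 is "no difference";
// otherwise report the estimated range.
function classify([low, high]) {
  return low <= 0 && high >= 0 ? 'no difference' : `${low}ms to ${high}ms`;
}

console.log(classify([0, 1]));       // 'no difference'
console.log(classify([0, 0.4]));     // 'no difference'
console.log(classify([-292, -201])); // '-292ms to -201ms'
```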

Issue tracing with Chrome 90

The latest build of TracerBench fails benchmarking against Chrome 90. The failure comes from core tracing, with:

Error: Could not find mark "navigationStart" in trace.

The team is currently investigating this issue as a top priority.

CLI documentation improvements

CLI docs look like they're coming along!

Let's make sure that, as the primary documentation for this tool, the first reviewable draft includes:

  • CLI output (files, data that can be piped to other commands, etc.) described in terms of format and contents; mainly we need the trace.json schema documented somewhere
  • Installation/setup instructions, including any dependencies
  • What this tool does, and why I should care (will be present in top-level documentation)
  • Tips for using the tool successfully (e.g., minimizing noise on the machine that's being used to measure)

Errors with "tracerbench trace"

Currently I'm running version 3.0.1. I ran the following:

tracerbench trace --url http://localhost:4200/?tracerbench=true --harpath=bin/relationship-performance-tracking/src/trace.har --cookiespath tracerbench-cookies.json --insights

and I get this error:

(screenshot of the error output omitted)

Is there something I am missing?

CLI report include marker screenshots for control and experiment

Within the PDF/HTML reports, include screenshots for every performance marker, similar to:

(example screenshot: perf-metrics-load-timeline)

Spike code implementation credit: @krisselden @chadhietala

const { writeFileSync, readFileSync } = require('fs');
const { spawnChrome } = require('chrome-debugging-client');

function filterObjectByKeys(obj, keyArray) {
  const o = Object.assign({}, obj);
  const k = Object.keys(o);
  k.forEach(c => {
    if (!keyArray.includes(c)) {
      delete o[c];
    }
  });

  return o;
}

async function setCookies(page, cookies) {
  for (let i = 0; i < cookies.length; i++) {
    const cookie = filterObjectByKeys(cookies[i], ['name', 'value', 'domain']);
    await page.send('Network.setCookie', cookie);
  }
}

async function screenShot(url, file) {
  const chrome = spawnChrome({ headless: false });
  try {
    const browser = chrome.connection;

    await browser.send('Security.enable');
    await browser.send('Security.setIgnoreCertificateErrors', { ignore: true });

    const { targetId } = await browser.send('Target.createTarget', {
      url: 'about:blank',
    });
    const page = await browser.attachToTarget(targetId);

    // enable events for Page domain
    await page.send('Page.enable');
    await setCookies(page, JSON.parse(readFileSync('cookies.json', 'utf8')));
    await page.send('Page.setLifecycleEventsEnabled', { enabled: true });

    const result = await page.send('Page.getFrameTree');
    const mainFrameId = result.frameTree.frame.id;

    await page.send('Tracing.start', {
      traceConfig: {
        includedCategories: ['blink.user_timing', 'disabled-by-default-devtools.screenshot'],
      },
    });

    let events = [];
    page.on('Tracing.dataCollected', ({ value }) => {
      events = events.concat(value);
    });
    // concurrently wait until load and navigate
    await Promise.all([
      page.until('Page.loadEventFired'),
      page.send('Page.navigate', { url }),
    ]);

    await page.until('Page.lifecycleEvent', event => {
      return event.name === 'networkAlmostIdle' && event.frameId === mainFrameId;
    });

    await new Promise(resolve => setTimeout(resolve, 4000));

    await Promise.all([
      page.until('Tracing.tracingComplete'),
      page.send('Tracing.end'),
    ]);

    writeFileSync(file, JSON.stringify(events, null, 2), 'utf8');

    events.sort((a, b) => {
      return a.ts - b.ts;
    });

    // find the screenshots immediately before and after the FMP_* marker
    let before;
    let after;
    let fmpMarker;
    for (let i = 0; i < events.length; i++) {
      const evt = events[i];
      const type = evt.ph;

      if (type === 'O' && evt.name === 'Screenshot' && fmpMarker === undefined) {
        before = evt;
        continue;
      }

      if (type === 'R' && evt.name.startsWith('FMP_')) {
        fmpMarker = evt;
      }

      if (type === 'O' && evt.name === 'Screenshot' && fmpMarker !== undefined) {
        after = evt;
        break;
      }
    }

    writeFileSync('before.png', before.args.snapshot, 'base64');
    writeFileSync('after.png', after.args.snapshot, 'base64');
    console.log('Delta: ', fmpMarker.ts - before.ts, after.ts - fmpMarker.ts);
    // attempt graceful close
    await chrome.close();
  } finally {
    // kill process if hasn't exited
    await chrome.dispose();
  }

  console.log(`${url} written to ${file}`);
}

if (process.argv.length < 4) {
  console.log(`usage: screenshot.js url file`);
  console.log(
    `example: screenshot.js https://en.wikipedia.org/wiki/Binomial_coefficient trace.json`
  );
  process.exit(1);
}

screenShot(process.argv[2], process.argv[3]);
