https://www.tracerbench.com/docs/guide/stats-primer
https://www.tracerbench.com/docs/guide/reports
https://www.npmjs.com/package/@tracerbench/stats
https://github.com/TracerBench/tracerbench/tree/master/packages/stats
Automated Chrome tracing for benchmarking.
License: BSD 2-Clause "Simplified" License
Hello,
Is it possible to use your compare tool on an already recorded series of .cpuprofile and tracing profiles?
Is there perhaps a way to convert raw profile data into your Trace-Results data models and run compare over it?
In other words, is it possible to use tracerbench purely as a trace processor?
An issue has been flagged in a certain use case where a network request is recorded with the wrong method, GET, when POST is expected.
This has been confirmed by manually recording a HAR with DevTools and inspecting the request in question, which does have the expected request method, POST.
I am attempting to use tracerbench record-har but run into an error:
tracerbench record-har --url http://localhost:8080 --cookiespath scripts/perf/cookies.json
[0106/143338.035276:WARNING:dns_config_service_posix.cc(184)] dns_config has unhandled options!
[0106/143338.105746:WARNING:dns_config_service_posix.cc(184)] dns_config has unhandled options!
[0106/143338.106824:WARNING:address_sorter_posix.cc(388)] FromSockAddr failed on netmask
[0106/143338.106903:WARNING:address_sorter_posix.cc(388)] FromSockAddr failed on netmask
Error: Network Request could not be captured. TypeError: Cannot read property 'length' of undefined
at Object.recordHARClient (~/.volta/tools/image/packages/tracerbench/3.0.11/node_modules/@tracerbench/core/dist/trace/archive_trace.js:80:15)
What is the expected format of the cookies.json file? I'm using dummy contents since my application does not use cookies for authentication:
{
"nothing": "foo"
}
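Judging from the stack trace (recordHARClient reads a length off the parsed value), the cookies file appears to be expected to be a JSON array of cookie objects rather than a plain object. A guess at a minimal placeholder file — the field names mirror the name/value/domain keys that Chrome's Network.setCookie accepts, and the values here are invented:

```json
[
  { "name": "session", "value": "dummy", "domain": "localhost" }
]
```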
I installed tracerbench with volta.
tracerbench --version && node --version
tracerbench/3.0.11 darwin-x64 node-v12.14.0
v12.13.0
Slightly related to #96; it would be nice to not have to specify cookies.
Add a callback timestamp tick to compare for the total duration of the test, with as little noisy overhead added as possible.
Three cheers for low variance!
As a developer I would like to use tracerbench to isolate a performance issue in an Ember application. The application is authenticated and requires custom headers to authenticate against the API server. Lighthouse provides an --extra-headers
flag for this, which lets me sample my application without changing any of its functionality. I would like similar functionality in tracerbench.
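For illustration, a sketch of how such a flag's value might be parsed before being forwarded to the browser; parseExtraHeaders is a hypothetical helper, not an existing tracerbench or Lighthouse API:

```javascript
// Hypothetical sketch: parse an `--extra-headers` flag value (a JSON string,
// mirroring Lighthouse's flag) into a plain headers object, rejecting
// non-string values early with a readable error.
function parseExtraHeaders(flagValue) {
  const headers = JSON.parse(flagValue);
  for (const [name, value] of Object.entries(headers)) {
    if (typeof value !== 'string') {
      throw new TypeError(`Header "${name}" must be a string`);
    }
  }
  return headers;
}

const headers = parseExtraHeaders('{"Authorization":"Bearer abc123"}');
console.log(headers.Authorization); // "Bearer abc123"
```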
When trying to use tracerbench profile with the --network
option I'm getting the following error:
(after calling tracerbench record-har --url=https://www.tracerbench.com --headless)
tracerbench profile ./tracerbench.har --network=3g
✔ Reading cookies json
✔ Preparing results folder
✔ Validating & Cloning HAR file
✖ Recording the live trace
› Error: Live Trace could not be captured. Error: Could not find network emulation "[object Object]"
Setting trace events
» Error: Error: Error: Live Trace could not be captured. Error: Could not find network emulation "[object Object]"
Sometimes it keeps spinning. I find it a bit odd, since I think conditions.network
should probably just be a string, so that the lookup works at
tracerbench/packages/core/src/trace/utils.ts
Line 178 in d3d5320
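For illustration only (not the actual utils.ts code): a string-keyed lookup along those lines, with an error message that surfaces the bad key instead of "[object Object]". The preset values are invented for the example:

```javascript
// Invented preset table keyed by name, so a non-string key fails loudly
// instead of being stringified into "[object Object]".
const networkPresets = {
  '3g': { latency: 150, downloadThroughput: (1.6 * 1024 * 1024) / 8 },
  offline: { latency: 0, downloadThroughput: 0 },
};

function getNetworkConditions(key) {
  if (typeof key !== 'string') {
    throw new TypeError(`Expected a network preset name, got ${typeof key}`);
  }
  const preset = networkPresets[key];
  if (preset === undefined) {
    throw new Error(`Could not find network emulation "${key}"`);
  }
  return preset;
}

console.log(getNetworkConditions('3g').latency); // 150
```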
I think it probably makes sense to include things like Dropbox, Backblaze, Carbonite, CrashPlan, etc. as well. Even without specific steps to disable them, it would be good to mention them.
In my understanding, tracerbench should be able to serve prerecorded HAR files for the har and experiment cases, but there are no examples or documentation for it.
This issue is not regular, but it appears quite often on the same HAR file if we have a lot of samples, and it causes the whole compare run to fail.
I am wondering: is there a way to enforce a paint event inside tracerbench, or to ignore this assert?
Or to rerun the exact experiment? It's quite painful to catch this event on attempt №99 (of 100).
It looks like the closest logic to this landed in #200, especially https://github.com/TracerBench/tracerbench/pull/200/files#diff-0f54ca8abebf7489fe5914256d43426b584a7cbca147eeb2bb25c506c62a4299R41
In theory, we could enforce a Paint event, like Modernizr does: https://github.com/Modernizr/Modernizr/blob/master/test/browser/src/mq.js#L29
function enforcePaintEvent() {
  const docElem = document.documentElement;
  const refNode = docElem.firstElementChild || docElem.firstChild;
  const fakeBody = document.createElement('body');
  const div = document.createElement('div');

  div.id = 'mq-test-1';
  div.style.cssText = 'position:absolute;top:-100em';
  fakeBody.style.background = 'none';
  fakeBody.appendChild(div);
  div.innerHTML = '&shy;<style> #mq-test-1 { width: 42px; }</style>';
  docElem.insertBefore(fakeBody, refNode);

  try {
    return div.offsetWidth === 42;
  } finally {
    fakeBody.removeChild(div);
    docElem.removeChild(fakeBody);
  }
}

requestAnimationFrame(() => {
  enforcePaintEvent();
  requestIdleCallback(() => { __tracerbench.stop(); });
});
CLI: Compare command currently getEmulateDeviceSettingForKeyAndOrientation
The documentation example says to try
tracerbench record-har --url=https://www.tracerbench.com --headless && tracerbench profile ./tracerbench.har
but on Windows (tracerbench/3.1.1 win32-x64 node-v12.16.3, installed globally through npm) it seems that the command tracerbench record-har --url=https://www.tracerbench.com --headless
requires a package.json to be provided.
The output is
tracerbench record-har --url=https://www.tracerbench.com --headless
Error: ENOENT: no such file or directory, open 'package.json'
Code: ENOENT
I did not see any mention of this in the API reference either.
Is the package.json file mandatory? If so, what is it used for?
I am creating a HAR file with tracerbench create-archive --url https://www.google.com/, but I noticed that the HAR file doesn't have as much information as a HAR file downloaded manually from a browser. Is there a way to get more info using tracerbench create-archive?
Hi,
I noted that marker labels are used directly in the generated JavaScript.
This means a label containing dots or other special characters will generate a broken report page.
E.g. a label random.words will generate the following JavaScript:
const canvasForrandom.words = document.getElementById('random.words-chart').getContext('2d');
It seems like the value used is intended to be something hash-like. Would you be OK if I open a PR that hashes the label to generate an id?
For example:
identifierHash = `id${md5sum(label)}`
Alternatively, one could normalize the label when setting it as identifierHash, or create a new helper that does the normalization.
tracerbench record-har --url="https://googe.com"
tracerbench/6.1.0 darwin-x64 node-v14.16.0
Also, there is some confusion in the documentation at https://www.tracerbench.com/docs/api/record-har:
some options are marked as "required", yet a default value is also listed for them.
When running the tracerbench compare command with the --report boolean flag, the stdout message still reads "To generate a pdf report run "tracerbench report"". The command should detect that it is running with the --report flag and update the messaging to be context aware.
When running the tracerbench compare command with both the --config PATH and --report flags, the stdout message reads "A TracerBench config undefined was not found. The config file is not required, however is strongly recommended". It appears the config is still picked up by TracerBench, which is correctly configured via the passed tbconfig.json file. More than likely this is simply a stdout messaging issue, which again is only reproducible with the --report flag.
Firefox has a pretty good profiler.
Examples:
I've run into an issue with a Chrome update, so I wanted to try a different browser, but haven't found out how.
I see there are
But it all seems to use chrome for everything.
could not find matching B phase for E phase event
coming from
is thrown when using the runner with Chrome Canary but not with stable:
canary: Version 80.0.3964.0 (Official Build) canary (64-bit)
stable: Version 78.0.3904.97 (Official Build) (64-bit)
The HierarchyNode class types its children array as either an array or undefined. I'd rather it just be defined but empty, to make it easier to work with. We should either find a good way to force it to be defined and have that be the TS schema used, or explore using helper functions when handling nodes which take care of the check automatically.
Below is Mike North's suggestion for the second idea:
Rather than using the non-null assertion operator all over, you could write a few array helper functions that are more tolerant of undefined values.
Instead of children!.find(...), you'd end up doing find(children, ...).
This is the approach TS takes internally to allow handling of optional values without potentially unsafe casting or defensive programming.
Originally posted by @mike-north in #24 (comment)
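The suggestion above can be sketched in plain JavaScript; the helper names find and map are illustrative, not existing tracerbench utilities:

```javascript
// Undefined-tolerant wrappers, so call sites can write `find(node.children, ...)`
// instead of `node.children!.find(...)`-style non-null assertions.
function find(arr, predicate) {
  return arr === undefined ? undefined : arr.find(predicate);
}

function map(arr, fn) {
  return arr === undefined ? [] : arr.map(fn);
}

const leaf = { name: 'leaf', children: undefined };
const root = { name: 'root', children: [leaf] };

console.log(find(root.children, c => c.name === 'leaf') === leaf); // true
console.log(find(leaf.children, () => true)); // undefined, no crash
console.log(map(leaf.children, c => c.name)); // []
```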
I took the "hello world" example in the README and created a simple usage example: https://github.com/asakusuma/chrome-tracing-example
benchmark.run() seems to resolve before Chrome even boots, and the resolved value is undefined.
When running larger test suites, the CLI output can be very verbose:
Finished 1 of 50 runs.
Finished 2 of 50 runs.
Finished 3 of 50 runs.
Finished 4 of 50 runs.
Finished 5 of 50 runs.
Finished 6 of 50 runs.
...
If we could consolidate the output to the following, it would make it a lot easier to gauge the status of the runs:
Running 1 of 50 (~15 minutes 20 seconds remaining)
duration phase estimated difference -247ms [-292ms to -201ms]
load phase no difference
boot phase estimated difference -191ms [-199ms to -183ms]
transition phase estimated difference -38ms [-57ms to -18ms]
render phase estimated difference -18ms [-33ms to -3ms]
paint phase no difference
The above output would be rendered using something like https://github.com/vadimdemedes/ink to allow it to update continuously based on the current run.
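A minimal sketch of how the remaining-time estimate could be derived from the durations of completed runs; progressLine is a hypothetical helper, not part of tracerbench:

```javascript
// Project the time remaining from the running average of completed run durations.
function progressLine(completedDurationsMs, totalRuns) {
  const done = completedDurationsMs.length;
  const avg = completedDurationsMs.reduce((a, b) => a + b, 0) / done;
  const remainingMs = avg * (totalRuns - done);
  const minutes = Math.floor(remainingMs / 60000);
  const seconds = Math.round((remainingMs % 60000) / 1000);
  return `Running ${done + 1} of ${totalRuns} (~${minutes} minutes ${seconds} seconds remaining)`;
}

// Two runs of 20s each are done; 48 runs at ~20s remain.
console.log(progressLine([20000, 20000], 50)); // "Running 3 of 50 (~16 minutes 0 seconds remaining)"
```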
Often in the tracerbench PDF report, when the confidence interval is [0, 1] the result is flagged as 0 rather than (no difference), whereas when the confidence interval is [0, <0.5] the result is flagged as (no difference).
The expected resolution would be (no difference) in both scenarios.
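A sketch of the expected resolution rule, assuming any interval that contains (or touches) zero should read as no difference; resolveEstimate is illustrative, not the report's actual code:

```javascript
// Flag any confidence interval containing zero as "no difference",
// independent of the interval's width; otherwise report the midpoint.
function resolveEstimate([low, high]) {
  return low <= 0 && high >= 0 ? 'no difference' : `${Math.round((low + high) / 2)}ms`;
}

console.log(resolveEstimate([0, 1]));       // "no difference"
console.log(resolveEstimate([0, 0.4]));     // "no difference"
console.log(resolveEstimate([-200, -100])); // "-150ms"
```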
As a third-party developer I would benefit from reading and adhering to a contributor's guide. I find they help synchronize expectations.
The latest build of TracerBench fails benchmarking against Chrome 90. The failure comes from core tracing, with:
Error: Could not find mark "navigationStart" in trace
The team is currently investigating this as a top priority.
If an error prevents the page load from completing, the run will hang forever. We should add a configurable timeout to prevent this.
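A minimal sketch of such a timeout, assuming the page load is represented as a promise; withTimeout is a hypothetical helper, not part of tracerbench:

```javascript
// Race the load promise against a timer so a navigation that never completes
// rejects after `ms` milliseconds instead of hanging the run forever.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Page load timed out after ${ms}ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage: a load that never settles now fails fast.
withTimeout(new Promise(() => {}), 50).catch(err => console.log(err.message));
```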
CLI docs look like they're coming along!
Let's make sure that, as the primary documentation for this tool, the first reviewable draft includes
Within the PDF/HTML reports include screenshots for every performance marker. Similar to:
Spike code implementation credit @krisselden @chadhietala
const { writeFileSync } = require('fs');
const { spawnChrome } = require('chrome-debugging-client');
const fs = require('fs');

// Return a shallow copy of `obj` containing only the keys in `keyArray`.
function filterObjectByKeys(obj, keyArray) {
  const o = Object.assign({}, obj);
  const k = Object.keys(o);
  k.forEach(c => {
    if (!keyArray.includes(c)) {
      delete o[c];
    }
  });

  return o;
}

async function setCookies(page, cookies) {
  for (let i = 0; i < cookies.length; i++) {
    const cookie = filterObjectByKeys(cookies[i], ['name', 'value', 'domain']);
    await page.send('Network.setCookie', cookie);
  }
}

async function screenShot(url, file) {
  const chrome = spawnChrome({ headless: false });
  try {
    const browser = chrome.connection;

    await browser.send('Security.enable');
    await browser.send('Security.setIgnoreCertificateErrors', { ignore: true });

    const { targetId } = await browser.send('Target.createTarget', {
      url: 'about:blank',
    });
    const page = await browser.attachToTarget(targetId);

    // enable events for Page domain
    await page.send('Page.enable');
    await setCookies(page, JSON.parse(fs.readFileSync('cookies.json', 'utf8')));
    await page.send('Page.setLifecycleEventsEnabled', { enabled: true });

    const result = await page.send('Page.getFrameTree');
    const mainFrameId = result.frameTree.frame.id;

    await page.send('Tracing.start', {
      traceConfig: {
        includedCategories: ['blink.user_timing', 'disabled-by-default-devtools.screenshot']
      }
    });

    let events = [];
    page.on('Tracing.dataCollected', ({ value }) => {
      events = events.concat(value);
    });
    // concurrently wait until load and navigate
    await Promise.all([
      page.until('Page.loadEventFired'),
      page.send('Page.navigate', { url }),
    ]);

    await page.until('Page.lifecycleEvent', event => {
      return event.name === 'networkAlmostIdle' && event.frameId === mainFrameId;
    });

    await new Promise(resolve => setTimeout(resolve, 4000));

    await Promise.all([
      page.until('Tracing.tracingComplete'),
      page.send('Tracing.end')
    ]);

    writeFileSync(file, JSON.stringify(events, null, 2), 'utf8');

    events.sort((a, b) => {
      return a.ts - b.ts;
    });

    // Find the screenshots immediately before and after the FMP_* user-timing marker.
    let before;
    let after;
    let fmpMarker;
    for (let i = 0; i < events.length; i++) {
      let evt = events[i];
      let type = evt.ph;

      if (type === 'O' && evt.name === 'Screenshot' && fmpMarker === undefined) {
        before = evt;
        continue;
      }

      if (type === 'R' && evt.name.startsWith('FMP_')) {
        fmpMarker = evt;
      }

      if (type === 'O' && evt.name === 'Screenshot' && fmpMarker !== undefined) {
        after = evt;
        break;
      }
    }

    fs.writeFileSync('before.png', before.args.snapshot, 'base64');
    fs.writeFileSync('after.png', after.args.snapshot, 'base64');
    console.log('Delta: ', fmpMarker.ts - before.ts, after.ts - fmpMarker.ts);
    // attempt graceful close
    await chrome.close();
  } finally {
    // kill process if hasn't exited
    await chrome.dispose();
  }

  console.log(`${url} written to ${file}`);
}

if (process.argv.length < 4) {
  console.log(`usage: screenShot.js url file`);
  console.log(
    `example: screenShot.js https://en.wikipedia.org/wiki/Binomial_coefficient trace.json`
  );
  process.exit(1);
}

screenShot(process.argv[2], process.argv[3]);