zizzamia / perfume.js
Web performance library for measuring all performance vitals metrics
Home Page: https://zizzamia.github.io/perfume/
License: MIT License
Hi @Zizzamia, what about a configurable analytics function to use perfume.js with any platform?
Changing the sendTiming function in this way should be enough:
private sendTiming(metricName: string, duration: number) {
if (!this.config.analyticsLogger) {
return;
}
const durationInteger = Math.round(duration);
this.config.analyticsLogger(metricName, durationInteger);
}
const perfume = new Perfume({
analyticsLogger: (metric, duration) => {
// ga('send', 'timing', timingCategory, timingVar, timingValue)
window.ga('send', 'timing', 'performance', metric, duration);
}
});
const perfume = new Perfume({
analyticsLogger: (metric, duration) => {
_paq.push(['trackEvent', 'performance', 'timing', metric, duration])
}
});
This way, the library would work with any analytics platform, not just Google Analytics.
P.S. Great talk at Codemotion Rome!
import Perfume from 'perfume.js';
results in
Uncaught SyntaxError: The requested module '../../tti-polyfill/tti-polyfill.js' does not provide an export named 'default'
In Perfume's default options, add a "warning" setting; this should allow turning console.warn on and off.
Does this run client-side, or in Node.js only? E.g.
<script src="node_modules/perfume.js/dist/perfume.js"></script>
What does this library do at this point?
Thank you for writing this @Zizzamia. I am looking at leveraging this to get paint performance information for my app. From all the documentation I've read, one should register the PerformanceObserver in the <head> before any CSS loads to capture FP and FCP:
Important: you must ensure your PerformanceObserver is registered in the <head> of your document before any stylesheets, so it runs before FP/FCP happens.
This is needed until Level 2 of the spec ships the 'buffered' flag, which is in the editor's draft right now. How is Perfume capturing paint entries without registering the observer in the <head>?
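For context, once the Level 2 'buffered' flag lands, the observer would no longer have to be registered in the <head>, because entries recorded before observe() is called get replayed. A minimal sketch of that future usage (assumptions: the Level 2 `type`/`buffered` syntax from the editor's draft; `observePaint` is a hypothetical helper name):

```javascript
function observePaint(onEntry) {
  // No-op outside the browser (e.g. server-side) or without support.
  if (typeof window === 'undefined' || typeof PerformanceObserver === 'undefined') {
    return null;
  }
  const observer = new PerformanceObserver((list) => {
    list.getEntries().forEach(onEntry);
  });
  // `buffered: true` replays 'paint' entries recorded before this call,
  // so the observer can be registered late in the page lifecycle.
  observer.observe({ type: 'paint', buffered: true });
  return observer;
}
```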
The current endPaint API only allows a metric name to be passed in, which limits the potential to report additional properties, especially when using custom analytics tracking.
The proposed syntax will accept an object and surface it in logging statements and analytics tracking, e.g. on the tracking side:
perfume.endPaint(METRIC_NAME, { page: '/login' });
and on the reporting side:
export const perfumeOptions = {
firstContentfulPaint: true,
firstInputDelay: true,
analyticsTracker: (metricName: string, duration: number, browser?: any, customProperties?: { [key: string]: string }) => {
track(`custom_tracking_${metricName}`, {
duration,
...customProperties
});
},
logging: false,
};
export const perfume = new Perfume(perfumeOptions);
I'll be following up this issue with a PR soon.
In case the FP or FCP doesn't happen, let's make sure to expose the debugging logs.
Add Idle Until Urgent strategy for logging and analytics.
https://github.com/GoogleChromeLabs/idlize
https://philipwalton.com/articles/idle-until-urgent/
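The idea can be sketched without any library: defer a computation to an idle period, but compute it synchronously if the value is needed before the browser went idle. This is a minimal illustration of the pattern, not the idlize API:

```javascript
class IdleValue {
  constructor(init) {
    this._init = init;
    this._done = false;
    // Fall back to setTimeout where requestIdleCallback is unavailable.
    const ric = typeof requestIdleCallback === 'function'
      ? requestIdleCallback
      : (cb) => setTimeout(cb, 0);
    ric(() => {
      if (!this._done) {
        this._value = this._init();
        this._done = true;
      }
    });
  }
  getValue() {
    // Urgent path: compute immediately instead of waiting for idle time.
    if (!this._done) {
      this._value = this._init();
      this._done = true;
    }
    return this._value;
  }
}
```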
We're loading GA via GTM which is added as an async script tag. As such, Perfume is logging a warning that GA hasn't been loaded in time. I think my scenario is pretty common, so can we address this as part of perfume?
Implementation thoughts
I guess we'd need to maintain a queue of ga calls in perfume, with a setInterval polling for GA every second up to a configurable maximum.
So we could introduce two more settings, something like a poll interval and a maximum number of attempts. Then:
- Add some private vars for the queue and the poller handle.
- In the trackTiming function, detect window.ga as it does now, but if it's not defined, add the current timing request to the queue and start gaPoller as a setInterval (if not already running) with the new settings.
- Create a gaPoller/setInterval callback that flushes the queue once window.ga appears, or gives up after the configured maximum number of attempts.
It's not ideal, but I can't see any other way around it. Alternative would be to define ga function ourselves, like they do in their snippet before loading the library, but that seems dodgy if they ever change their implementation.
What do you think? Happy to implement it myself and submit a PR if you agree with the above.
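A rough sketch of that queue-and-poll idea (assumption: the option names `gaPollInterval` and `gaMaxPollAttempts` and the helper `createGaQueue` are illustrative, not existing perfume.js settings):

```javascript
function createGaQueue(config) {
  const queue = [];
  let poller = null;
  let attempts = 0;

  function gaReady() {
    return typeof window !== 'undefined' && typeof window.ga === 'function';
  }

  function flush() {
    // Send every queued timing once GA is available.
    while (queue.length) {
      const [metricName, duration] = queue.shift();
      window.ga('send', 'timing', 'performance', metricName, duration);
    }
  }

  return {
    trackTiming(metricName, duration) {
      queue.push([metricName, duration]);
      if (gaReady()) {
        flush();
        return;
      }
      if (!poller) {
        poller = setInterval(() => {
          attempts += 1;
          if (gaReady()) {
            flush();
            clearInterval(poller);
          } else if (attempts >= config.gaMaxPollAttempts) {
            clearInterval(poller); // give up and drop queued timings
          }
        }, config.gaPollInterval);
      }
    },
  };
}
```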
One of the main reasons we've historically optimized for metrics like load and DOMContentLoaded is because they're exposed as events in the browser and easy to measure on real users.
By contrast, a lot of other metrics have been historically very hard to measure. For example, this code is a hack we often see developers use to detect long tasks:
(function detectLongFrame(lastFrameTime) {
requestAnimationFrame(function() {
var currentFrameTime = Date.now();
if (currentFrameTime - lastFrameTime > 50) {
// Report long frame here...
}
detectLongFrame(currentFrameTime);
});
}(Date.now()));
This code starts an infinite requestAnimationFrame loop and records the time on each iteration. If the current time is more than 50 milliseconds after the previous time, it assumes it was the result of a long task. While this code mostly works, it has a lot of downsides:
It adds overhead to every frame.
It prevents idle blocks.
It's terrible for battery life.
The most important rule of performance measurement code is that it shouldn't make performance worse.
Services like Lighthouse and Web Page Test have offered some of these new metrics for a while now (and in general they're great tools for testing performance on features prior to releasing them), but these tools don't run on your user's devices, so they don't reflect the actual performance experience of your users.
Luckily, with the addition of a few new browser APIs, measuring these metrics on real devices is finally possible without a lot of hacks or workarounds that can make performance worse.
These new APIs are PerformanceObserver, PerformanceEntry, and DOMHighResTimeStamp. To show some code with these new APIs in action, the following code example creates a new PerformanceObserver instance and subscribes to be notified about paint entries (e.g. FP and FCP) as well as any long tasks that occur:
const observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
// `entry` is a PerformanceEntry instance.
console.log(entry.entryType);
console.log(entry.startTime); // DOMHighResTimeStamp
console.log(entry.duration); // DOMHighResTimeStamp
}
});
// Start observing the entry types you care about.
observer.observe({entryTypes: ['paint', 'longtask']});
What PerformanceObserver gives us that we've never had before is the ability to subscribe to performance events as they happen and respond to them in an asynchronous fashion. This replaces the older PerformanceTiming interface, which often required polling to see when the data was available.
Possible final code
const observer = new PerformanceObserver((list) => {
for (const entry of list.getEntries()) {
// `name` will be either 'first-paint' or 'first-contentful-paint'.
const metricName = entry.name;
const time = Math.round(entry.startTime + entry.duration);
// Report `metricName` and `time` to your analytics here.
}
});
observer.observe({entryTypes: ['paint']});
https://developer.mozilla.org/en-US/docs/Web/API/PerformanceObserver
In case any metric is bigger than 18 seconds, let's drop the result.
const perfumeConfig = {
maxMeasureTime: 18000
};
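The check itself could be as small as the following sketch (assumption: `shouldReport` is a hypothetical helper run right before reporting):

```javascript
// Drop outliers, e.g. a metric inflated by a tab left in the background.
function shouldReport(duration, config) {
  return duration <= config.maxMeasureTime;
}
```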
Add info like browser Name and OS
firstContentfulPaint.chrome.MacOS
I would like to also measure metrics using window.performance and report those to google analytics. Would it be possible to make the sendTiming method public?
Using this nifty library, I found the way timingVar is defined during instantiation to be a little counter-intuitive. In my mind, a user timing variable is used to label the event that is being measured and might differ between calls to perfume.log().
How would you feel about a tiny PR to add a third parameter to .log() for explicitly passing in a new timingVar value?
The logging should all be handled through the proper log() function that is already available. There's also a logSuffix variable that is not used in the log() function, and it is used as a prefix, not as a suffix.
After Default Options create a new section for Perfume Observe.
Describe the values and methods:
Just added this into the <head> of my site. Doing an F5 with 'Disable cache' checked in Chrome DevTools, nothing gets written to the console. Unchecked, I see the PerfumeJS messages with timings. Does the script have to load before something else?
I haven't got it on production yet, but will soon-ish, so will be able to send you the link to repro. Just thought I'd ask.
Hi, first of all, thank you for a great library! I'm wondering what's the best practice in terms of measuring async code. Let's say you measure the time for some async API requests, you run those requests simultaneously, and you want to use just one event name.
Example code:
async function apiExample(url) {
perfume.start('api-example')
const response = await fetch(url)
perfume.end('api-example')
return response.json()
}
const first = apiExample('http://example.com/movies.json')
const second = apiExample('http://example.com/another.json')
Those two calls interfere with each other. What's the correct way of doing this?
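One workaround (an assumption on my side, not a built-in perfume.js feature) is to derive a unique event name per call so concurrent measurements can't collide:

```javascript
let requestId = 0;

// Hypothetical helper: append a counter to keep event names unique.
function uniqueEventName(base) {
  requestId += 1;
  return `${base}-${requestId}`;
}

async function apiExample(url) {
  const eventName = uniqueEventName('api-example');
  perfume.start(eventName);
  const response = await fetch(url);
  perfume.end(eventName);
  return response.json();
}
```

The downside is that each request now reports under its own name, so the analytics side has to aggregate them back together.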
Firstly, thanks for the awesome, simple library. Works a treat!
I'm using your library to add a few extra metrics to my existing analytics service. I'd like to hide the console logs that are shown by default.
I've had an issue where I can't seem to suppress the FP, FCP & TTI console logs, even with logging set to false in the config.
e.g.
this.perfume = new Perfume({
logging: false,
firstPaint: true,
firstContentfulPaint: true,
timeToInteractive: true,
});
Unsure if this is intentional or I've missed something, but I think it's due to the following two lines:
Line 203 in 3d3a201
Line 215 in 3d3a201
as neither of those callback functions seems to check the config logging status.
For the moment I've had to resort to overriding the prototype for the log function to suppress the console logs. Happy to raise a small PR to fix this (if you'd like?).
I'm just leaving this here for now until it's shipped: https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/LN92hkaKCzw/Y-ryVxu5AgAJ
Is the TTI polyfill head script required? I was noticing that events weren't firing every page load. Once I added the head script from the TTI readme it seemed to be firing 100% of the time. If this is required should it be added to the docs somewhere?
This is the script I'm referring to from https://github.com/GoogleChromeLabs/tti-polyfill
<script>
!function(){if('PerformanceLongTaskTiming' in window){var g=window.__tti={e:[]};
g.o=new PerformanceObserver(function(l){g.e=g.e.concat(l.getEntries())});
g.o.observe({entryTypes:['longtask']})}}();
</script>
Server side rendering ends up with following error:
ReferenceError: addEventListener is not defined
at eval (webpack:///./node_modules/first-input-delay/src/first-input-delay.js?:179:4)
at Object../node_modules/first-input-delay/src/first-input-delay.js (<anonymous>:12488:1)
at __webpack_require__ (<anonymous>:20:30)
at Module.eval (webpack:///./node_modules/perfume.js/dist/perfume.es5.min.js?:2:122)
Could you please provide some example configs showing how to get it running on the client side only?
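One client-only pattern that avoids the error is to import perfume.js lazily behind a window check (sketch; assumes a bundler that understands dynamic `import()`, such as webpack):

```javascript
let perfume;

if (typeof window !== 'undefined') {
  // Only reached in the browser, so first-input-delay's addEventListener
  // call never runs on the server.
  import('perfume.js').then(({ default: Perfume }) => {
    perfume = new Perfume({ firstContentfulPaint: true });
  });
}
```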
Give both an async and sync option to retrieve the basic metrics.
Based on https://developers.google.com/web/updates/2018/05/first-input-delay, it is advised to calculate TTI during Lab Measurements and use FID for RUM.
TTI is a metric that can be measured without any users present, which means it's ideal for lab environments like Lighthouse or WebPageTest. Unfortunately, lab metrics, by their very nature, cannot measure real user pain.
FID, on the other hand, directly represents user pain: every single FID measurement is an instance of a user having to wait for the browser to respond to an event. And when that wait time is long, users get frustrated and often leave.
For these reasons we recommend both metrics, but we recommend you measure TTI in lab and you measure FID in the wild, with your analytics tool.
Explain better the value of each public API method.
In case the tab is hidden, metrics can return wrong values; a solution might be to cancel all the sendTiming callbacks.
Another possible solution is to stop calculating all the metrics as soon as the tab becomes hidden.
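The second idea can be sketched with the Page Visibility API (the `onHidden` callback, e.g. one that clears pending sendTiming timers, and the `watchVisibility` helper name are stand-ins):

```javascript
function watchVisibility(doc, onHidden) {
  const handler = () => {
    if (doc.visibilityState === 'hidden') {
      onHidden(); // e.g. cancel pending sendTiming callbacks
    }
  };
  doc.addEventListener('visibilitychange', handler);
  // Return an unsubscribe function for cleanup.
  return () => doc.removeEventListener('visibilitychange', handler);
}
```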
I can see this library tries to polyfill the Paint Timing API but not sure if the implementation is accurate.
Here's the code I'm referring to: https://github.com/Zizzamia/perfume.js/blob/master/src/emulated-performance.ts#L33-L37
Would it be better to simply not return FP and FCP if the browser doesn't support Paint Timing? Maybe I'm missing something, but how can a simple setTimeout capture anything related to paint timing?
Worth mentioning, since it isn't brought up in the documentation: if a site is doing prerenders, then FCP and FP do not fire. Once I emulate LTE or slower speeds, the prerendered content lands in FCP before the Angular packages have finished downloading and, by extension, before PerfumeJS is in play. As a result, those two metrics never fire.
I would imagine this may also be the case for people who do shell content within their index.html as recommended by many Angular guides out there.
Not sure how you would easily resolve this without having a separate bundle that is HEAD DOM-blocking specifically for FCP and FP. May want to mention it in the documentation, though.