googlechromelabs / lighthousebot

Run Lighthouse in CI, as a web service, using Docker. Pass/Fail GH pull requests.

License: Apache License 2.0

Shell 4.53% JavaScript 69.75% CSS 15.10% HTML 7.19% Dockerfile 3.43%
lighthouse ci pwa testing headless-chrome travis docker

lighthousebot's Introduction

Lighthouse Bot (deprecated)

Update: LighthouseBot has been deprecated. We now recommend using the official Lighthouse CI project to automate running Lighthouse for every commit, view the changes over time, and prevent regressions.

Historical README below

This repo contained the frontend and backend for running Lighthouse in CI, integrated with GitHub pull requests. An example web service is hosted for demo purposes.

Auditing GitHub Pull Requests

Please note: This drop-in service is considered beta. There are no SLAs or uptime guarantees. If you're interested in running your own CI server in a Docker container, check out Running your own CI server.

Lighthouse can be set up as part of your CI on Travis (Travis is currently the only supported CI). As new pull requests come in, the Lighthouse Bot tests the changes and reports back the new score.

Run Lighthouse on Github PRs

To audit pull requests, do the following:

1. Initial setup

Add the lighthousebot to your repo

First, add lighthousebot as a collaborator on your repo. Lighthouse CI uses an OAuth token scoped to the repo permission in order to update the status of your PRs and post comments on the issue as the little Lighthouse icon.

* Until lighthousebot accepts your invitation to collaborate (currently a lengthy manual process), it does not have permission to update the status of your PRs. It will, however, still post a comment on your PR.

Get an API Key

Request an API Key. API keys will eventually be enforced and are necessary so we can contact you when there are changes to the CI system.

Once you have a key, update your Travis settings by adding a LIGHTHOUSE_API_KEY environment variable with your key:

Travis LIGHTHOUSE_API_KEY env variable

The lighthousebot script will include your key in requests made to the CI server.

2. Deploy the PR

We recommend deploying your PR to a real staging server instead of running a local server on Travis. A staging environment will produce realistic performance numbers that are more representative of your production setup. The Lighthouse report will be more accurate.

In .travis.yml, add an after_success that deploys the PR's changes to a staging server.

after_success:
  - ./deploy.sh # TODO(you): deploy the PR changes to your staging server.

Since every hosting environment has different deployment setups, the implementation of deploy.sh is left to the reader.

Tip: Using Google App Engine? Check out deploy_pr_gae.sh which shows how to install the GAE SDK and deploy PR changes programmatically.
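
A minimal deploy.sh sketch is shown below. Everything in it is an assumption about your setup: the pr-&lt;N&gt; staging-version naming is made up, and the commented gcloud command only applies to App Engine.

```shell
#!/bin/bash
# Hypothetical deploy.sh sketch. Travis sets TRAVIS_PULL_REQUEST to the
# PR number, or to the string "false" on non-PR builds.

pr_staging_version() {
  # Map a PR number to a per-PR staging version name, e.g. 14 -> "pr-14".
  local pr="${1:-false}"
  if [ "$pr" = "false" ]; then
    echo ""
  else
    echo "pr-${pr}"
  fi
}

version="$(pr_staging_version "${TRAVIS_PULL_REQUEST:-false}")"
if [ -n "$version" ]; then
  echo "Deploying staging version: $version"
  # TODO(you): replace with your host's deploy command, e.g. for App Engine:
  # gcloud app deploy --version "$version" --no-promote
else
  echo "Not a pull request; skipping staging deploy."
fi
```

Deploying each PR to its own version (rather than overwriting a single staging URL) keeps concurrent PR builds from auditing each other's changes.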

3. Call lighthousebot

Install the script:

npm i --save-dev https://github.com/GoogleChromeLabs/lighthousebot

Add an NPM script to your package.json:

"scripts": {
  "lh": "lighthousebot"
}

Next, in .travis.yml call npm run lh as the last step in after_success:

install:
  - npm install # make sure to install the deps when Travis runs.
after_success:
  - ./deploy.sh # TODO(you): deploy the PR changes to your staging server.
  - npm run lh -- https://staging.example.com

When Lighthouse is done auditing the URL, the bot will post a comment to the pull request containing the updated scores:

Lighthouse Github comment

You can also opt-out of the comment by using the --no-comment flag.

Failing a PR when it drops your Lighthouse score

Lighthouse CI can prevent PRs from being merged when one of the scores falls below a specified value. Just include one or more of --pwa, --perf, --seo, --a11y, or --bp:

after_success:
  - ./deploy.sh # TODO(you): deploy the PR changes to your staging server.
  - npm run lh -- --perf=96 --pwa=100 https://staging.example.com
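
Conceptually, the pass/fail step is just a per-category comparison against the minimum you passed in. A sketch of that check (not lighthousebot's actual code; check_score is a made-up name):

```shell
# check_score CATEGORY SCORE MIN: prints PASS/FAIL and returns nonzero on FAIL.
check_score() {
  local category="$1" score="$2" min="$3"
  if [ "$score" -lt "$min" ]; then
    echo "FAIL: $category score $score is below the minimum $min"
    return 1
  fi
  echo "PASS: $category score $score >= minimum $min"
}

check_score perf 97 96
check_score pwa 98 100 || echo "PR would be marked as failing"
```

A nonzero exit status is what flips the PR's GitHub status check to failed; omitting a threshold skips the comparison for that category.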

Options

$ lighthouse-ci -h

Usage:
runlighthouse.js [--perf,pwa,seo,a11y,bp=<score>] [--no-comment] [--runner=chrome,wpt] <url>

Options:
  Minimum score values can be passed per category as a way to fail the PR if
  the thresholds are not met. If you don't provide thresholds, the PR will
  be mergeable no matter what the scores are.

  --pwa        Minimum PWA score for the PR to be considered "passing". [Number]
  --perf       Minimum performance score for the PR to be considered "passing". [Number]
  --seo        Minimum seo score for the PR to be considered "passing". [Number]
  --a11y       Minimum accessibility score for the PR to be considered "passing". [Number]
  --bp         Minimum best practices score for the PR to be considered "passing". [Number]

  --no-comment Doesn't post a comment to the PR issue summarizing the Lighthouse results. [Boolean]

  --runner     Selects Lighthouse running on Chrome or WebPageTest. [--runner=chrome,wpt]

  --help       Prints help.

Examples:

  Runs Lighthouse and posts a summary of the results.
    runlighthouse.js https://example.com

  Fails the PR if the performance score drops below 93. Posts the summary comment.
    runlighthouse.js --perf=93 https://example.com

  Fails the PR if perf score drops below 93 or the PWA score drops below 100. Posts the summary comment.
    runlighthouse.js --perf=93 --pwa=100 https://example.com

  Runs Lighthouse on WebPageTest. Fails the PR if the perf score drops below 93.
    runlighthouse.js --perf=93 --runner=wpt --no-comment https://example.com

Running on WebPageTest instead of Chrome

By default, lighthousebot runs your PRs through Lighthouse hosted in the cloud. As an alternative, you can test on real devices using the WebPageTest integration:

lighthousebot --perf=96 --runner=wpt https://staging.example.com

At the end of testing, your PR will be updated with a link to the WebPageTest results containing the Lighthouse report!

Running your own CI server

Want to setup your own Lighthouse instance in a Docker container?

The good news is Docker does most of the work for us! The bulk of getting started is in Development. That will take you through initial setup and show how to run the CI frontend.

For the backend, see builder/README.md for building and running the Docker container.

Other changes to the "Development" section:

  • Create a personal OAuth token in https://github.com/settings/tokens. Drop it in frontend/.oauth_token.
  • Add a LIGHTHOUSE_CI_HOST env variable to Travis settings that points to your own URL. The one where you deploy the Docker container.

Development

Initial setup:

  1. Ask an existing dev for the oauth2 token. If you need to regenerate one, see below.
  2. Create frontend/.oauth_token and copy in the token value.

Run the dev server:

cd frontend
npm run start

This will start a web server and use the token in .oauth_token. The token is used to update PR status in Github.
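
For example, a one-time setup step could look like the sketch below; the LH_OAUTH_TOKEN variable name is hypothetical, and you can just as well paste the token in by hand.

```shell
# Store the OAuth token where the dev server expects it, with tight permissions.
mkdir -p frontend
printf '%s' "${LH_OAUTH_TOKEN:-<paste token here>}" > frontend/.oauth_token
chmod 600 frontend/.oauth_token
```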

In your test repo:

  • Run npm i --save-dev https://github.com/GoogleChromeLabs/lighthousebot
  • Follow the steps in Auditing Github Pull Requests for setting up your repo.

Notes:

  • If you want to make changes to the builder, you'll need Docker and the GAE Node SDK.
  • To make changes to the CI server, you'll probably want to run ngrok so you can test against a local server instead of deploying for each change. In Travis settings, add a LIGHTHOUSE_CI_HOST env variable that points to your ngrok instance.

Generating a new OAuth2 token

If you need to generate a new OAuth token:

  1. Sign in to the lighthousebot Github account. (Admins: the credentials are in the usual password tool).
  2. Visit personal access tokens: https://github.com/settings/tokens.
  3. Regenerate the token. Important: this invalidates the existing token so other developers will need to be informed.
  4. Update token in frontend/.oauth_token.

Deploy

By default, these scripts deploy to Google App Engine Flexible containers (Node). If you're running your own CI server, use your own setup :)

Deploy the frontend:

npm run deploy YYYY-MM-DD frontend

Deploy the CI builder backend:

npm run deploy YYYY-MM-DD builder

Source & Components

This repo contains several different pieces for the Lighthouse Bot: a backend, frontend, and frontend UI.

UI Frontend

Quick way to try Lighthouse: https://lighthouse-ci.appspot.com/try

Relevant source:

Bot CI server (frontend)

Server that responds to requests from Travis.

REST endpoints:

  • https://lighthouse-ci.appspot.com/run_on_chrome
  • https://lighthouse-ci.appspot.com/run_on_wpt

Example

Note: lighthousebot does this for you.

POST https://lighthouse-ci.appspot.com/run_on_chrome
Content-Type: application/json
X-API-KEY: <YOUR_LIGHTHOUSE_API_KEY>

{
  "testUrl": "https://staging.example.com",
  "thresholds": {
    "pwa": 100,
    "perf": 96
  },
  "addComment": true,
  "repo": {
    "owner": "<REPO_OWNER>",
    "name": "<REPO_NAME>"
  },
  "pr": {
    "number": <PR_NUMBER>,
    "sha": "<PR_SHA>"
  }
}
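
Expressed as a curl sketch (the PR number 123 is a stand-in so the payload parses as JSON; the network call itself is left commented out):

```shell
# Build the run_on_chrome payload and sanity-check it before sending.
PAYLOAD='{
  "testUrl": "https://staging.example.com",
  "thresholds": {"pwa": 100, "perf": 96},
  "addComment": true,
  "repo": {"owner": "<REPO_OWNER>", "name": "<REPO_NAME>"},
  "pr": {"number": 123, "sha": "<PR_SHA>"}
}'
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"

# curl -X POST \
#   -H "Content-Type: application/json" \
#   -H "X-API-KEY: $LIGHTHOUSE_API_KEY" \
#   --data "$PAYLOAD" \
#   https://lighthouse-ci.appspot.com/run_on_chrome
```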

Relevant source:

  • frontend/server.js - server which accepts Github pull requests and updates the status of your PR.

CI backend (builder)

Server that runs Lighthouse against a URL, using Chrome.

REST endpoints:

  • https://lighthouse-ci.appspot.com/ci

Example

Note: lighthousebot does this for you.

curl -X POST \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: <YOUR_LIGHTHOUSE_API_KEY>" \
  --data '{"output": "json", "url": "https://staging.example.com"}' \
  https://builder-dot-lighthouse-ci.appspot.com/ci
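
The builder responds with a Lighthouse JSON report. A sketch of extracting a category score from such a report (the inline sample assumes the Lighthouse v3+ report shape, where categories.&lt;id&gt;.score is a 0-1 fraction):

```shell
# Tiny sample in the v3+ report shape; a real report has far more fields.
REPORT='{"categories": {"performance": {"score": 0.96}}}'

# Convert the 0-1 fraction to the familiar 0-100 score.
perf=$(echo "$REPORT" | python3 -c \
  'import json, sys; print(round(json.load(sys.stdin)["categories"]["performance"]["score"] * 100))')
echo "Performance: $perf"
```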

FAQ

Why not deployment events?

Github's Deployment API would be ideal, but it has some downsides:

  • Github Deployments happen after a pull request is merged. We want to support blocking PR merges based on a Lighthouse score.
  • We want to be able to audit changes as they're added to the PR. pull_request/push events are more appropriate for that.

Why not a Github Webhook?

The main downside of a Github webhook is that there's no way to include custom data in the payload Github sends to the webhook handler. For example, how would Lighthouse know what URL to test? With a webhook, the user also has to set it up and configure it properly.

Future work: Lighthouse Bot could define a file that the developer includes in their repo. The bot's endpoint could pull a .lighthouse_ci file that includes metadata like {minLighthouseScore: 96, testUrl: 'https://staging.example.com'}. However, this requires work from the developer.

lighthousebot's People

Contributors

0xflotus, addyosmani, azu, brendankenny, ebidel, eointraynor, fengzilong, paulirish, piperchester, prayagverma, shaneog, stuartsan, trollepierre, yuichielectric


lighthousebot's Issues

Run on non-PR's

Thanks for a great package! In my case my site is not deployed through Travis, instead my deploy service (Netlify) triggers a web hook to run e2e and lighthouse after deploy.

Would it be possible to allow this to be run on non-pull-request events in Travis? Then it could fetch the PR number, etc. from the GitHub API, like https://api.github.com/repos/ebidel/lighthouse-ci/pulls
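
For reference, extracting the PR number from that API response could look like this sketch (a trimmed sample payload is inlined; in CI you would curl api.github.com instead):

```shell
# A trimmed-down pulls API response; real responses carry many more fields.
SAMPLE='[{"number": 14, "head": {"sha": "abc123"}}]'

PR_NUMBER=$(echo "$SAMPLE" | python3 -c \
  'import json, sys; prs = json.load(sys.stdin); print(prs[0]["number"] if prs else "")')
echo "PR number: $PR_NUMBER"

# In a real build, replace the sample with something like:
# curl -s "https://api.github.com/repos/<owner>/<repo>/pulls"
```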

Slack integration

Would be interesting to get a summary in Slack. Maybe even allow some settings, like only notify if lighthouse thresholds were not met.

Public-facing URL: needed?

Does this work with a localhost:XX-type URL?

The example shows https://staging.example.com, which looks to me like a public URL.

It might be a good idea to clarify this in the doc. I can contribute with the doc change if that's the case.

Will lighthousebot support authenticated web pages?

Scenario: I need to audit a page that requires authentication. Say I have to provide my username and password to reach that particular URL; otherwise it redirects back to the login page.

For the above scenario, how can I use lighthousebot?
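
lighthousebot itself has no authentication support, but when running the Lighthouse CLI directly, one common workaround is to pass an auth header; recent Lighthouse CLI versions accept an --extra-headers flag. Treat the flag and the header below as assumptions to verify against your Lighthouse version:

```shell
# Sanity-check the headers JSON before handing it to the CLI.
EXTRA_HEADERS='{"Authorization": "Bearer <token>"}'
echo "$EXTRA_HEADERS" | python3 -m json.tool > /dev/null && echo "headers OK"

# lighthouse https://staging.example.com \
#   --extra-headers "$EXTRA_HEADERS" \
#   --output json --output-path ./report.json
```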

Run in multiple build stages?

How can this be run in both a test and deploy stage? I get the following error:

non-unique build for this hash

I tried working around it by adding this to my .travis.yml:

- export LHCI_BUILD_CONTEXT__CURRENT_HASH="${TRAVIS_BUILD_STAGE_NAME}_${TRAVIS_PULL_REQUEST_SHA}"

Error: Unable to determine commit message with git log --format=%s -n 1

Which allows it to pass the health check but then it fails with something else:

Runtime error encountered: Protocol error (Page.setLifecycleEventsEnabled): 'Page.setLifecycleEventsEnabled' wasn't found

Edit:

I added latest chrome with

    addons:
      chrome: stable

But now I'm getting the following error:

Error: Unable to determine commit message with `git log --format=%s -n 1`

Lighthouse Bot in Enterprise (Private) Repos?

Hello,

Kudos to you for building this. I'm trying to use this inside my company's Git Enterprise repo. It's protected with LDAP / AD based logins. Lighthouse bot does not have an account at my company. :) Is there anything I can do to use Lighthouse CI without lighthouse bot?

Thanks!

Nate

Submit lighthouse_ci to Docker hub?

Great work, this tool will be extremely useful for CI builds!

I see you already have Docker source code here:
https://github.com/ebidel/lighthouse-ci/tree/master/builder

Can you submit an official Docker built image to Docker hub? There are a couple of benefits to this approach:

  1. This will save everyone the step of downloading and building themselves, and provide an official release which is testable.

  2. When using it with CI tools, not only does it save build times, but it can be spun up as a service with one line of code, like this selenium example for GitLab and Bitbucket pipelines:

.gitlab-ci.yml

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - SELENIUM="http://selenium__standalone-chrome:4444/wd/hub" BASE_URL="https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com" DRIVER="headless_chrome" python -m unittest discover -s qa

bitbucket-pipelines.yml

pipelines:
  default:
    - step:
        services:
          - selenium
        script:
          - SELENIUM="http://127.0.0.1:4444/wd/hub" BASE_URL="http://$BITBUCKET_BRANCH.$AWS_PROJECT.us-east-1.elasticbeanstalk.com" DRIVER="headless_chrome" python -m unittest discover -s qa

definitions:
  services:
    selenium:
      image: selenium/standalone-chrome

Error: connect ECONNREFUSED 127.0.0.1:36715

I got this error when I was trying to run lighthouse programmatically in my docker container.

FROM phusion/baseimage:0.9.22

USER root 

# Install yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update && \
    apt-get install -y yarn wget

# Install Node
ADD nodesource/setup_8.x /tmp/setup_8.x
RUN bash /tmp/setup_8.x
RUN apt-get update && \
    apt-get autoremove && \
    apt-get install -y build-essential \
                       libfontconfig \
                       nodejs

# Install latest chrome dev package.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
    && apt-get update \
    && apt-get install -y google-chrome-unstable --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /src/*.deb

ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init

# Set volume & working directory
VOLUME ["/www/web"]
WORKDIR /www/

# Globally install gulp-cli, nodemon
# cache bust so we always get the latest version of LH when building the image.
ARG CACHEBUST=1
RUN yarn global add gulp-cli nodemon

# install packages
ADD package.json /tmp/package.json
ADD yarn.lock /tmp/yarn.lock
RUN cd /tmp && yarn 
RUN mkdir -p /www && cd /www && ln -s /tmp/node_modules
ADD package.json /www/package.json
ADD yarn.lock /www/yarn.lock

# Add the simple server.
COPY server.js /
RUN chmod +x /server.js

COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh

# Add a chrome user and setup home dir.
RUN groupadd -r chrome && useradd -r -m -g chrome -G audio,video chrome && \
    mkdir -p /www/reports && \
    chown -R chrome:chrome /www

USER chrome

# Disable Lighthouse error reporting to prevent prompt.
# ENV CI=true

EXPOSE 36377

# commands run on container starts
ADD start.sh /www/start.sh

ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]

I copied some code from this PR. I also included "lighthouse" in package.json, so it's installed locally.
I'm not sure which part is wrong; could you help?
Thanks

LHError: INVALID_URL

I was hoping to run lighthouse in my CI so I tried to build the image from the Dockerfile on master.
I had to apply the fix from PR #84 (so it would be nice if that PR were approved and merged). Once I had a working image, I ran it using docker-compose (not much different from docker on its own, but it's nice because it can call other containers via service references) with the following setup:

  speedtest:
    #image: justinribeiro/lighthouse
    build: docker/lighthouse_ci
    volumes:
      - reports:/home/chrome/reports
#    command: ['lighthouse', '--chrome-flags="--headless --disable-gpu "', 'http://web/component/FullVerticalLayout']
    command: ['lighthouse_ci', 'http://web/component/FullVerticalLayout']
    depends_on:
      - web
    cap_add:
      - SYS_ADMIN

and when running it I get error

$ docker-compose logs speedtest
Attaching to myreviews-styleguide_speedtest_1
speedtest_1  | Tue, 31 Mar 2020 17:17:48 GMT ChromeLauncher No debugging port found on port 9222, launching a new Chrome.
speedtest_1  | Tue, 31 Mar 2020 17:17:48 GMT ChromeLauncher Waiting for browser.
speedtest_1  | Tue, 31 Mar 2020 17:17:48 GMT ChromeLauncher Waiting for browser...
speedtest_1  | Tue, 31 Mar 2020 17:17:48 GMT ChromeLauncher Waiting for browser.....
speedtest_1  | Tue, 31 Mar 2020 17:17:49 GMT ChromeLauncher Waiting for browser.......
speedtest_1  | Tue, 31 Mar 2020 17:17:49 GMT ChromeLauncher Waiting for browser.......✓
speedtest_1  | Tue, 31 Mar 2020 17:17:49 GMT config:warn IFrameElements gatherer requested, however no audit requires it.
speedtest_1  | Tue, 31 Mar 2020 17:17:49 GMT config:warn MainDocumentContent gatherer requested, however no audit requires it.
speedtest_1  | Tue, 31 Mar 2020 17:17:49 GMT ChromeLauncher Killing Chrome instance 27
speedtest_1  | Runtime error encountered: The URL you have provided appears to be invalid.
speedtest_1  | LHError: INVALID_URL
speedtest_1  |     at Function.run (/usr/local/lib/node_modules/lighthouse/lighthouse-core/runner.js:78:17)
speedtest_1  |     at lighthouse (/usr/local/lib/node_modules/lighthouse/lighthouse-core/index.js:48:17)
speedtest_1  |     at runLighthouse (/usr/local/lib/node_modules/lighthouse/lighthouse-cli/run.js:193:32)
speedtest_1  |     at process._tickCallback (internal/process/next_tick.js:68:7)

http://web/component/FullVerticalLayout is a perfectly healthy and working URL: in docker-compose, the service named web resolves as a locally mapped domain.

For instance, if I exec into one of the containers, I can easily curl the web service/container:

I have no name!@022c7770bd41:/var/www/html$ curl -fv http://web/component/FullVerticalLayout?render=1 | head
* Expire in 0 ms for 6 (transfer 0x55bde8e01dc0)
* Expire in 1 ms for 1 (transfer 0x55bde8e01dc0)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* (repeated "Expire in N ms" lines trimmed)
*   Trying 172.26.0.6...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55bde8e01dc0)
* Connected to web (172.26.0.6) port 80 (#0)
> GET /component/FullVerticalLayout?render=1 HTTP/1.1
> Host: web
> User-Agent: curl/7.64.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: nginx/1.17.9
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/7.3.15
< Cache-Control: no-cache, private
< Date: Tue, 31 Mar 2020 17:30:09 GMT
< 
{ [2549 bytes data]
100  2542    0  2542    0     0  25676      0 --:--:-- --:--:-- --:--:-- 25676
* Connection #0 to host web left intact
<!DOCTYPE html>
<html>
    <head>
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
...

Is it possible to run lighthouse_ci in CI style, with a perfectly working URL like the one above?

TypeError: Cannot read property 'split' of undefined

Hey when I try to run it I get the following error:

/Users/lukasoppermann/Code/veare/node_modules/lighthouse-ci/runlighthouse.js:100
    owner: repoSlug.split('/')[0],

I just installed it locally using npm; I hope that is fine. Am I doing something wrong, or is this a legitimate bug? Can I somehow provide more info?
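
For context, the crashing line parses an owner/name repo slug, which lighthouse-ci appears to read from Travis's TRAVIS_REPO_SLUG environment variable (an assumption based on the stack trace); running outside Travis leaves it undefined. A shell sketch of the same parsing with a guard:

```shell
# TRAVIS_REPO_SLUG looks like "owner/name" and only exists inside Travis builds.
slug="${TRAVIS_REPO_SLUG:-}"
if [ -z "$slug" ]; then
  echo "TRAVIS_REPO_SLUG is not set; are you running outside Travis?"
else
  owner="${slug%%/*}"   # text before the first "/"
  name="${slug##*/}"    # text after the last "/"
  echo "owner=$owner name=$name"
fi
```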

Valid URL error not very descriptive--users confused on how to proceed

When using https://developers.google.com/web/tools/lighthouse/run with a URL without the leading http:// or https://, the report will not run and the user is given an error message that reads "URL is not valid".

It would be more useful if some simple guidance was given like:

"URL is not valid. Please ensure it is in the following format: http://www.domain.com or https://www.domain.com"

I would imagine omitting the leading http(s) is likely where the majority of these issues arise.

Azure Pipelines support

We are looking at moving to use Azure Pipelines instead of Travis CI. Ideally, we could use this tool from Azure Pipelines as well as Travis. I don't have cycles to implement this at the moment, but I thought I would open an issue so I could track progress if anyone else decides to work on this.

Error from CI backend. invalid json response body

I left the same comment here not sure if it would be seen on a closed issue. #41

Hi @ebidel
I have recently had the same or a similar issue running lh-bot, i was hoping you might be able to help me troubleshoot it.

This is the error:
(I have had the same error occur on 3 different PRs, and have not yet been able to get the bot to comment in the pr)
"Error from CI backend. invalid json response body at https://builder-dot-lighthouse-ci.appspot.com/ci reason: Unexpected token U in JSON at position 0"

This is the PR in staging: https://website-ssr-pr-14.herokuapp.com/

Travis yaml:
sudo: false
language: node_js
node_js:
  - '10'
install:
  - node -v
before_script:
  - yarn
cache:
  directories:
    - node_modules
  yarn: true
git:
  depth: 3
script:
  - yarn build
after_success:
  - sleep 4m
  - yarn run lhbot https://website-ssr-pr-$TRAVIS_PULL_REQUEST.herokuapp.com/

End of Travis Job Log:
$ sleep 4m
after_success.2
$ yarn run lhbot https://website-ssr-pr-$TRAVIS_PULL_REQUEST.herokuapp.com/
yarn run v1.15.2
$ lighthousebot -- --pwa=95 --perf=95 --seo=95 --a11y=95 --bp=95 https://website-ssr-pr-14.herokuapp.com/
Using runner: chrome
New Lighthouse scores:
"Error from CI backend. invalid json response body at https://builder-dot-lighthouse-ci.appspot.com/ci reason: Unexpected token U in JSON at position 0" Done. Your build exited with 0.

Please let me know if you need any other information from me; I'm really keen to get this working 😃

Invalid JSON error on builder-dot-lighthouse-ci

I'm currently implementing Lighthouse in our CI. However, it's a bit more complicated as we're deploying via Netlify and having a dynamically generated URL for each PR.
That's why I'm manually sending a POST request to https://lighthouse-ci.appspot.com/run_on_chrome

This works more or less fine. However, I just ran the first request from Travis and getting an error in the list of checks on the GitHub PR.

screen shot 2018-09-24 at 14 56 47

Error. invalid json response body at https://builder-dot-lighthouse-ci.appspot.com/ci reason: Unexpected token < in JSON at position 0

As I'm not making any requests to https://builder-dot-lighthouse-ci.appspot.com/ci, I guess this is triggered by lighthouse-ci.appspot.com.
That's why I think there's something not working at your end.

EDIT: The second time it ran successfully. So, it doesn't seem to be a general issue.

Can't run on mac

Lighthouse Bot works on our CI but not on our Mac:

> [email protected] lh /Users/martindonadieu/Projects/angular-well-config
> lighthouse-ci "/home/travis/build/WildCodeSchool/laloupe-0218-mastermin"

Using runner: chrome
/Users/martindonadieu/Projects/angular-well-config/node_modules/lighthouse-ci/runlighthouse.js:106
    owner: repoSlug.split('/')[0],
                    ^

TypeError: Cannot read property 'split' of undefined
    at getConfig (/Users/martindonadieu/Projects/angular-well-config/node_modules/lighthouse-ci/runlighthouse.js:106:21)
    at Object.<anonymous> (/Users/martindonadieu/Projects/angular-well-config/node_modules/lighthouse-ci/runlighthouse.js:150:16)
    at Module._compile (module.js:649:30)
    at Object.Module._extensions..js (module.js:660:10)
    at Module.load (module.js:561:32)
    at tryModuleLoad (module.js:501:12)
    at Function.Module._load (module.js:493:3)
    at Function.Module.runMain (module.js:690:10)
    at startup (bootstrap_node.js:194:16)
    at bootstrap_node.js:666:3
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] lh: `lighthouse-ci "/home/travis/build/WildCodeSchool/laloupe-0218-mastermin"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] lh script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/martindonadieu/.npm/_logs/2018-03-29T12_10_44_403Z-debug.log

lighthousebot not failing PRs when score < minScore

I debugged this by running the frontend server locally. The reason is that when I add lighthousebot as a collaborator, the invitation isn't accepted by the bot, so it doesn't have the rights and fails; it can only post a comment.

I even checked the GitHub API for adding a webhook and a handler in the server to auto-accept invitations, but at this time there isn't a way to automate this process.

lighthouse times out even though URL is accessible via normal curl

While debugging lighthouse-ci, I came across the following behavior. The https://www.google.com site is pingable and retrievable via a normal curl. However, lighthouse-ci can't seem to reach it.

The docker container was launched via:

docker run -it --entrypoint bash -v $(pwd)/reports:/home/chrome/reports:rw --rm --cap-add=SYS_ADMIN lighthouse_ci

where lighthouse_ci is an up2date image based on the master branch.

Question about testing pr

Hi

Thanks for making this.
Im making a realworld sample app with a hobby project framework.
This project uses the ghpages to display the page. (demo here )
Anyone have a good suggestion how I can test the PR and compare it to what I have running ?
Though I could ask in case some have a good/easy setup for this.

Atm when I do a PR I only get results for my main branch, so not very useful information 😂

Lighthouse CI score: undefined

Hey, so now I ran it on travis. I get no real errors, but I do get a message Lighthouse CI score: undefined. I also do not get a comment in the PR (which makes sense, since the score is undefined). I did not receive a key yet, but the readme sounds like it is not required yet, is that true?

https://travis-ci.org/lukasoppermann/veare/builds/275630036?utm_source=github_status&utm_medium=notification

I want to run the app on travis and test it.

I currently have the following in my travis.yml file.

before_script: npm install
script:
  - xvfb-run npm run travis
  - NODE_ENV=testing NODE_PORT=8888 node app.js &
  - node node_modules/lighthouse-ci/runlighthouse.js http://localhost:8888
  - pkill node

Can you tell me what I am doing wrong? Thank you in advance (btw. I want to run it locally first, if it works fine for a while I will invest some time into the setup with a remote staging server).

Score difference in Chrome and CI

When we run a Lighthouse audit in Chrome, the grade is better than in CI.
Our Pr
In chrome:
screen shot 2018-03-30 at 14 13 45
In Travis CI:
screen shot 2018-03-30 at 14 02 36

Also, for some projects the result in CI always looks the same.
We can understand that performance can change between a server and local, but not accessibility.

Inline Render blocking scripts

How do I get the time taken by inline render-blocking scripts?
I wrote tiny inline scripts in my HTML file. I want to know the scripting time, and whether the scripts block parsing of the HTML. I am trying to measure the scripting time of the inline scripts, but I can't: Chrome DevTools counts inline scripting as parsing.

Thresholds

Allow users to specify failure thresholds for each report category. Right now it's purely based off the PWA scores.

How long does it take for the bot to accept the collaborator request?

I added lighthousebot as a collaborator and I have been waiting for a response for close to 12 hours. How long does it take for the bot to accept the request? I would have thought it would be instant. Is this a manual process? Am I missing a step somewhere?

I went to settings > collaborators > entered the bot name and clicked on add collaborator.

Pass github error down to the client for debugging

Was having trouble getting this to work, when I realized that the github comment step was failing because of SSO. Unfortunately this wasn't obvious because it always surfaces a generic error message 'Error posting Lighthouse comment to PR.'

Proposal: Add link to the report in the bot comment

It would be great if the bot comment had a link to the report.

From:

Updated Lighthouse report for the changes in this PR:

| Category            | Score |
| ------------------- | ----- |
| Progressive Web App | 91    |
| Performance         | 93    |
| Accessibility       | 100   |
| Best Practices      | 92    |

Tested with Lighthouse version: 2.3.0

To:

| Category            | Score |
| ------------------- | ----- |
| Progressive Web App | 91    |
| Performance         | 93    |
| Accessibility       | 100   |
| Best Practices      | 92    |

Tested with Lighthouse version: 2.3.0

Error: no such file or directory, /home/chrome/reports/report.<hash>.json

Hi, thanks for putting this tool together. I tested it out using the public server, and ran into the output:

Error from CI backend. invalid json response body at https://builder-dot-lighthouse-ci.appspot.com/ci reason: Unexpected token < in JSON at position 0

Thinking it was a problem with the public server, I deployed a Docker container as described in the docs. The container had the same issue, though.

So I manually ran the curl command like so:

```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: $LIGHTHOUSE_API_KEY" \
  --data '{"output": "json", "url": "http://example.com"}' \
  https://<container url>/ci
```

And I got the output:

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Error: ENOENT: no such file or directory, stat &#39;/home/chrome/reports/report.1544322152932.json&#39;</pre>
</body>
</html>
```

Perhaps something changed in the way Lighthouse generated its reports?

Cannot run the lighthouse_ci container (nonheadless) locally

I am just trying to run Lighthouse locally against a sample website and see the generated report on my Ubuntu machine.

I followed the README for building the full-chrome image, and then executed:

```shell
docker run -it --rm --cap-add=SYS_ADMIN lighthouse_ci https://example.com
```

But I get the following error:

```
[ ok ] Starting system message bus: dbus.
sh: 0: Can't open /chromeuser-script_nonheadless.sh
  ChromeLauncher No debugging port found on port 9222, launching a new Chrome. +0ms
  ChromeLauncher Waiting for browser. +24ms
  ChromeLauncher Waiting for browser... +0ms
  ChromeLauncher Waiting for browser....................................................................................................... +501ms
  ChromeLauncher:error connect ECONNREFUSED 127.0.0.1:9222 +1ms
  ChromeLauncher:error Logging contents of /tmp/lighthouse.cZohn3u/chrome-err.log +0ms
  ChromeLauncher:error [61:61:0606/124640.016489:ERROR:zygote_host_impl_linux.cc(89)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.
  ChromeLauncher:error  +0ms
```

Not sure if it is a real issue, but any help would be appreciated.

GithubActions docs

Extend the docs on how the bot can be used with GitHub Actions.

GitHub Actions is a neat and fast new way to interact with GitHub, its workflows, etc.
It is much more powerful than the regular CI systems we are used to.

This would make it possible to add things like "trigger the lighthousebot via a keyword in a comment" or similar.
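A minimal sketch of what such docs might show. This assumes the existing runlighthouse.js entry point and the LIGHTHOUSE_API_KEY variable from the README; the workflow layout and staging URL are hypothetical, since the bot has no official Action:

```yaml
name: lighthouse
on: [pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm install
      # Same script Travis runs; the URL is a placeholder for your deploy.
      - run: node node_modules/lighthouse-ci/runlighthouse.js https://staging.example.com
        env:
          LIGHTHOUSE_API_KEY: ${{ secrets.LIGHTHOUSE_API_KEY }}
```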

Returning performance 0 (error) every time

When running lighthousebot on any of my PRs, Performance is always 0. Running it from Chrome DevTools or from https://www.webpagetest.org/lighthouse, there is no such problem:

With lighthousebot on the PR (flags --runner=wpt --pwa=100 --perf=77 --a11y=78 --bp=100 --seo=1):
(screenshots)

WebPageTest:
(screenshots)

Does CPU/memory power or device emulation have anything to do with it?
What should I do?

Serverless frontend, backend and bot?

Has anyone investigated porting these over to plain old serverless functions and an API?
I have some experience in this and was interested in whether anyone has tried yet or has any insight.

Provide option to set throttling?

I am currently using lighthousebot for my project with no problems. However, I am not sure whether my scores are based on no throttling or not. Is it possible to add throttling as an option, similar to what you can do in Chrome on desktop?
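For context, standalone Lighthouse already exposes this through its config file: `settings.throttlingMethod` accepts `'simulate'`, `'devtools'`, or `'provided'` (Lighthouse applies no throttling of its own). Whether lighthousebot forwards such a config is the open question here; the sketch below only shows the plain Lighthouse side:

```javascript
// custom-config.js — passed to the Lighthouse CLI with --config-path.
module.exports = {
  extends: 'lighthouse:default',
  settings: {
    // 'simulate' (default), 'devtools' (throttling applied via the
    // DevTools protocol), or 'provided' (no throttling applied).
    throttlingMethod: 'devtools',
  },
};
```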

Comparison with a production url

Hi! I found this amazing project and I started to use it a lot 😎 Thank you so much for creating it.

My question is whether it's possible (somehow) to have one of these in the PR comment:

  1. The results of the production website and the stage deploy, to compare
  2. The score difference between a production URL and the stage

Do you think it's possible to have this information?
To be clear, I'm not asking for a feature. I just want to know if someone else has faced this particular case, where I'd like a PR to be "rejected" (asked for improvements) when the stage's score is worse than production's score (I know that I can set a flag, but that's fixed; I want to compare against my own baseline).

Hope I was clear enough! 😊
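Such a comparison isn't built into the bot, but the core check is small. A hypothetical sketch (the function name and score shape are invented for illustration; the values mirror the 0-100 category scores the bot already reports):

```javascript
// Return the categories where the staging run scored below production.
function findRegressions(prodScores, stageScores) {
  const regressions = [];
  for (const [category, prodScore] of Object.entries(prodScores)) {
    const stageScore = stageScores[category];
    if (typeof stageScore === 'number' && stageScore < prodScore) {
      regressions.push({category, prodScore, stageScore});
    }
  }
  return regressions;
}

// Example: staging only regressed on Performance.
const prod  = {pwa: 91, performance: 93, accessibility: 100};
const stage = {pwa: 91, performance: 85, accessibility: 100};
console.log(findRegressions(prod, stage));
// → [ { category: 'performance', prodScore: 93, stageScore: 85 } ]
```

A CI step could run this over two report JSON files and exit non-zero when the returned array is non-empty, rejecting the PR.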

Status Checks Never Sent

I have followed the instructions but we never get status checks attached to the PR. I can see the results in Travis (along with subsequent failure since it didn't meet thresholds) but that's it. The bot is a collaborator on our private repo and new accounts don't force 2FA. Any ideas what I'm doing wrong?

Appreciate the work you have done on this project!

Other options in the JSON data of the curl -X POST request?

Hello,

Thanks for your tool!

In your README you mention the possibility of making a curl request (curl -X POST…). In this request we have two parameters, "url" and "format".

I would like to know if we can provide other parameters. For example, can we increase the screenshot resolution preview, or can we specify the user agent?

Thanks!
