
americanexpress / jest-image-snapshot

3.8K 41.0 198.0 6.19 MB

✨ Jest matcher for image comparisons. Most commonly used for visual regression testing.

License: Apache License 2.0

JavaScript 100.00%
jest testing snapshot visual-comparison-testing one-app image-snapshots jest-snapshots image-comparisons

jest-image-snapshot's People

Contributors

10xlacroixdrinker, aloisklink, andrewleedham, anescobar1991, ayc0, cancerberosgx, chancesmith, corsicanec82, dependabot[bot], dmitriy-baltak, dogpatch626, fringd, gaelan, k-yle, naoto-ida, narmeennaveedahmed, nellyk, oneamexbot, patrickhulce, phawxby, pixnbits, priyajeet, rajivshah3, sbekrin, simonkberg, skywhale, thomasbertet, tklever, toptalo, tteltrab



jest-image-snapshot's Issues

customSnapshotsDir places directory in absolute path

Hi, awesome extension!

I wanted to mention that if you set customSnapshotsDir to, say, __screenshots__,
it is resolved against the project root rather than the test file's directory, i.e.:

__screenshots__ <- originally '__image_snapshots__'
src
  __tests__
    whateveryouretesting.js

but what I think we expect is:

src
  __tests__
    __screenshots__
    whateveryouretesting.js

Thanks again!

pngjs error

An error occurred when I followed the guide.
After searching, I found that the error comes from pngjs:
akabekobeko/npm-icon-gen#82

I tried downgrading, but jest-image-snapshot uses the latest version of pngjs,
so I can't solve the problem. Please tell me how to solve this error;
I want to use jest-image-snapshot.

examples git:(master) ✗ npm test

> [email protected] test /Users/XXXXX/Documents/XXXXXX/jest-image-snapshot/examples
> jest


 RUNS  __tests__/puppeteer-example.spec.js
 RUNS  __tests__/local-image.spec.js

Test Suites: 0 of 2 total
Tests:       0 total
Snapshots:   0 total
Time:        1s/usr/local/bin/node[61338]: ../src/node_zlib.cc:188:static void node::(anonymous namespace)::ZCtx::Write(const FunctionCallbackInfo<v8::Value> &) [async = false]: Assertion `Buffer::HasInstance(args[4])' failed.
 1: node::Abort() [/usr/local/bin/node]
 2: node::InternalCallbackScope::~InternalCallbackScope() [/usr/local/bin/node]
 3: void node::(anonymous namespace)::ZCtx::Write<false>(v8::FunctionCallbackInfo<v8::Value> const&) [/usr/local/bin/node]
 4: v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) [/usr/local/bin/node]
 5: v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle< RUNS  __tests__/puppeteer-example.spec.js
 RUNS  __tests__/local-image.spec.js

Test Suites: 0 of 2 total
Tests:       0 total
Snapshots:   0 total
Time:        3s/usr/local/bin/node[61337]: ../src/node_zlib.cc:188:static void node::(anonymous namespace)::ZCtx::Write(const FunctionCallbackInfo<v8::Value> &) [async = false]: Assertion `Buffer::HasInstance(args[4])' failed.
 1: node::Abort() [/usr/local/bin/node]
 2: node::InternalCallbackScope::~InternalCallbackScope() [/usr/local/bin/node]
 3: void node::(anonymous namespace)::ZCtx::Write<false>(v8::FunctionCallbackInfo<v8::Value> const&) [/usr/local/bin/node]
 4: v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) [/usr/local/bin/node]
 5: v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle< RUNS  __tests__/puppeteer-example.spec.js
 RUNS  __tests__/local-image.spec.js

Test Suites: 0 of 2 total
Tests:       0 total
Snapshots:   0 total
Time:        3966s

Test fails to read a snapshot after the first run, with Selenium node takeScreenshot buffer

node: v9.8.0
jest-image-snapshot: v2.4.0

Hi, I have similar behaviour to #46:

  • first run good (generates, passes)
  • second run throws Invalid file signature in regenerator-runtime module

This is my code:

const webdriver = require('selenium-webdriver')
const driver = new webdriver.Builder().forBrowser('chrome').build()
driver.get('/somepage')
...
test('msg', async () => {
  const buffer = await driver.takeScreenshot()
  expect(buffer).toMatchImageSnapshot()
})

The Selenium docs explicitly say:

returns Promise<string> -- A promise that will be resolved to the screenshot as a base-64 encoded PNG.

When I manually generate a .png file (util.promisify(fs.writeFile)('./file.png', stream, 'base64')) and then read it (util.promisify(fs.readFile)('./file.png')), the same thing happens.
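Since takeScreenshot() resolves to a base-64 string rather than raw bytes, one workaround (an assumption about the cause, not a confirmed fix) is to decode the string into a Buffer before asserting. A runnable sketch, using the 8-byte PNG signature as stand-in data:

```javascript
// Stand-in for the string takeScreenshot() resolves to: a base-64 encoded PNG.
// Here the payload is just the standard 8-byte PNG signature, for illustration.
const base64Png = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]).toString('base64');

// Decode the base-64 string into raw PNG bytes before handing it to the matcher.
const imageBuffer = Buffer.from(base64Png, 'base64');

console.log(imageBuffer[0].toString(16)); // "89" -- the PNG signature's first byte
```

In the test above that would look like `expect(Buffer.from(await driver.takeScreenshot(), 'base64')).toMatchImageSnapshot()`.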

Test fails to read a snapshot after the first run, if it is given a JPEG image

I'm writing a test for a gulp task that adds text onto a template image, and the source image happens to be in JPEG format.

After the first run the target image is created, and a snapshot is created as well. The snapshot is the same JPEG image as the source, but its extension is changed to .png. After that, everything is broken.

I guess this is somewhat of a misuse of the module, but it still shouldn't save a JPEG image with a .png extension and then fail to read it.
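For anyone hitting this, a cheap guard is to check the magic bytes before passing a buffer to the matcher. A sketch using the standard PNG and JPEG signatures (the helper names are made up):

```javascript
// The standard 8-byte PNG file signature.
const PNG_SIGNATURE = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

// Hypothetical helpers to sanity-check input before snapshotting.
function looksLikePng(buf) {
  return buf.length >= 8 && buf.subarray(0, 8).equals(PNG_SIGNATURE);
}

function looksLikeJpeg(buf) {
  // JPEG files start with the SOI marker 0xFF 0xD8.
  return buf.length >= 2 && buf[0] === 0xff && buf[1] === 0xd8;
}
```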

Add option to clean up unnecessary diff output

We've got 150 or so visual regression tests. If a test passes, its diff isn't really necessary, so it should be removed, leaving only the problem diffs behind and saving the job of hunting for the file that matches the failed test.

I considered doing the cleanup within my tests, but the diff output path isn't returned in the result.

customSnapshotsDir as function

It would be good to be able to specify customSnapshotsDir as a function:

{ customSnapshotsDir: ({currentTestName, testPath}) => buildCustomSnapshotDirOnTheFly(currentTestName, testPath) }

so it is possible to generate the path on the fly depending on testName, testPath, and environment variables.

Use cases:

  1. Keep the test folder structure in the images folder. (I guess this could even be a separate option like "keepFolderStructure".)
  2. Make it possible to use environment variables; the same tests may produce different screenshots depending on the environment. In my case: running the same component tests with different themes, where the theme is set as an environment variable.

Remove diff output files from previous runs

Diff output files from previous runs should be removed before each assertion.

That way the user is not confused when a test is passing but there is still a diff output file in the __diff_output__ directory for said test (from a previous run that was failing).

Documentation on getting up and running

Hello,

I know you have documentation on how to get the plugin working but when I see this:

  it('renders correctly', async () => {
    const browser = await launchChromeHeadless();
    await browser.goTo('https://localhost:3000');
    const screenshot = await browser.takeScreenshot();

    expect(screenshot).toMatchImageSnapshot();
  });

I have no idea what's behind this 😢

Could you please add more docs for this part, showing the launchChromeHeadless function and how you implemented it?

Thank You

request for documentation

I understand why the webdriver example is just pseudocode, but it would be helpful to see an example with an array buffer or PNGImage instance too in order to better understand the API.

I have yet to get something working:

  var image = PNGImage.createImage(100, 400);
  expect(image.getBlob()).toMatchImageSnapshot();
  // expect(image.getImage()).toMatchImageSnapshot(); also no dice
  // expect(image).toMatchImageSnapshot(); also no dice
  // expect(fs.createReadStream('image.png')).toMatchImageSnapshot(); also no dice

Without further examples it's hard to know whether I'm misunderstanding the API (and how), as opposed to running into an edge case or a bug.

Invalid File Signature

Hey, I'm trying to pinpoint the matter here: whenever I create screenshots locally and upload them to Git to be used by Jenkins, it's reported that I have an invalid file signature.

Curious if this is because of my setup (the way I take screenshots) or my workflow.

[End-to-end Tests]   ● Given a dealer is logging in › should pass visual regression
[End-to-end Tests] 
[End-to-end Tests]     Invalid file signature
[End-to-end Tests] 
[End-to-end Tests]       39 |         const image = await page.screenshot();
[End-to-end Tests]       40 | 
[End-to-end Tests]     > 41 |         expect(image).toMatchImageSnapshot({
[End-to-end Tests]       42 |             failureThreshold: '0.01',
[End-to-end Tests]       43 |             failureThresholdType: 'percent'
[End-to-end Tests]       44 |         });

Error: expect() argument is invalid, got object

Any suggestion where to start debugging?

    const image = fs.readFileSync(screenshotPath)
    expect(image).toMatchImageSnapshot()

Image variable output:

image <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 02 ee 00 00 05 36 08 06 00 00 00 ca d7 33 06 00 00 00 01 73 52 47 42 00 ae ce 1c e9 00 00 40 00 ... >

Packages:

    "detox": "^7.1.0",
    "enzyme": "^3.3.0",
    "enzyme-adapter-react-16": "^1.1.1",
    "enzyme-to-json": "^3.3.3",
    "jasmine-fail-fast": "^2.0.0",
    "jest": "^22.4.2",
    "jest-image-snapshot": "^2.4.0",
    "jest-junit-reporter": "^1.1.0",
    "jest-serializer-enzyme": "^1.0.0",

npm test error, running against the examples

After following the instructions for the examples:

git clone https://github.com/americanexpress/jest-image-snapshot.git
cd jest-image-snapshot/examples
npm install
npm test

I get the following error with 'npm test', even though the file does appear to exist:

$ ls -alt node_modules/lodash/kebabCase.js
-rw-r--r-- 1 si wheel 659 Oct 26 1985 node_modules/lodash/kebabCase.js

$ npm test

> [email protected] test /tools/github/jest-image-snapshot/examples
> jest

 FAIL  __tests__/local-image.spec.js
  ● Test suite failed to run

    Cannot find module 'lodash/kebabCase' from 'index.js'

      13 |  */
      14 | /* eslint-disable no-underscore-dangle */
    > 15 | const kebabCase = require('lodash/kebabCase');
      16 | const merge = require('lodash/merge');
      17 | const path = require('path');
      18 | const Chalk = require('chalk').constructor;

      at Resolver.resolveModule (node_modules/jest-resolve/build/index.js:169:17)
      at Object.<anonymous> (../src/index.js:15:19)

 FAIL  __tests__/puppeteer-example.spec.js
  ● Test suite failed to run

    Cannot find module 'lodash/kebabCase' from 'index.js'

      13 |  */
      14 | /* eslint-disable no-underscore-dangle */
    > 15 | const kebabCase = require('lodash/kebabCase');
      16 | const merge = require('lodash/merge');
      17 | const path = require('path');
      18 | const Chalk = require('chalk').constructor;

      at Resolver.resolveModule (node_modules/jest-resolve/build/index.js:169:17)
      at Object.<anonymous> (../src/index.js:15:19)

Test Suites: 2 failed, 2 total
Tests:       0 total
Snapshots:   0 total
Time:        2.731s
Ran all test suites.
npm ERR! Test failed.  See above for more details.

configureToMatchImageSnapshot seems to not take effect

Currently setting configs like this works:

.toMatchImageSnapshot({
    customSnapshotsDir: './'
})

But like this does not:

const toMatchImageSnapshot = configureToMatchImageSnapshot({
  customSnapshotsDir: './'
});

The configs seem to be ignored completely.
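A likely explanation (an assumption, since the registration code isn't shown) is that the configured matcher was never passed to expect.extend, so the unconfigured one keeps running. The pattern, sketched with a stand-in configure function since Jest itself isn't loaded here:

```javascript
// Stand-in sketch of the configure pattern: configureToMatchImageSnapshot
// returns a *new* matcher closed over the defaults. If the plain matcher is
// the one registered with expect.extend, these defaults are never seen.
function configureMatcher(defaults) {
  return function matcher(options = {}) {
    return { ...defaults, ...options }; // effective options the matcher would use
  };
}

const toMatchImageSnapshot = configureMatcher({ customSnapshotsDir: './' });

// In a real test file the *configured* matcher must then be registered:
//   expect.extend({ toMatchImageSnapshot });
console.log(toMatchImageSnapshot().customSnapshotsDir); // ./
```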

Image comparison of snapshots crashes node v9.2.0

Node v9.2.0 crashes when attempting to use this jest matcher

/usr/bin/node[24137]: ../src/node_zlib.cc:189:static void node::{anonymous}::ZCtx::Write(const v8::FunctionCallbackInfo<v8::Value>&) [with bool async = false]: Assertion `Buffer::HasInstance(args[4])' failed.
 1: node::Abort() [node]                                                                                                               
 2: node::Assert(char const* const (*) [4]) [node]                                                                                     
 3: 0x55f886814175 [node]                                                                                                              
 4: v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) [node]                         
 5: 0x55f8861002df [node]                                                                                                              
 6: 0x55f88610082f [node]                                                                                                              
 7: 0x1755837042fd

After doing some research and investigation, the problem appears to be the pngjs package and how it interacts with the internal zlib changes in node. The open issue tracking this is here: pngjs/pngjs#95

Ability to choose alternative comparison library

Looking at GitHub commits, it appears blink-diff is pretty much dead: no activity in over a year, and this issue suggests the author has left Yahoo.

Being able to pick a more modern comparison library while retaining blink-diff as the default would probably make sense.

Resemble.js is an option, but again there hasn't been much activity in over a year.

So I'm thinking pixelmatch would make sense: although it has fewer features than the other libraries, it has an active community and is by far the most popular image comparison library on npm.

Thoughts?

Do not create images automatically

Add an option to fail if the baseline image does not exist.

Why? In some cases we really don't want a silent pass when no screenshot has been created for a story, for example when running on CI.

Failure threshold needed?

With the pixelmatch upgrade we've maintained the threshold property; however, it appears to have different meanings in pixelmatch and blink-diff: the former relates to per-pixel difference sensitivity, the latter to overall comparison sensitivity.

The difference is that if a single pixel fails in pixelmatch the test fails, whereas with blink-diff we could allow either x pixels or a percentage of difference before the test fails.

In this example a 0.03% difference triggered a failure

  ● Visual regression: mobile › 99-visual-regression-hero-hero-video-fixed-aspect

    Expected image to match or be a close match to snapshot. 0% different
    See diff for details: visual-regression\__image_snapshots__\__diff_output__\mobile--hero-hero-video-fixed-aspect-diff.png

Pixeldiff output

{ 
  pass: false,
  diffOutputPath: 'visual-regression\\__image_snapshots__\\__diff_output__\\mobile--hero-hero-video-fixed-aspect-diff.png',
  diffRatio: 0.00034362139917695474,
  pixelCountDiff: 501 
}

Do you think we need to implement a version of threshold and threshold-type from blink-diff?
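To make the proposal concrete, here is one way an overall failure threshold could be layered on top of pixelmatch's per-pixel count. The names and semantics are illustrative, not the library's actual API:

```javascript
// Sketch of an overall failure threshold on top of a per-pixel diff count
// (as pixelmatch reports). Here 'percent' takes a ratio in [0, 1],
// e.g. 0.01 allows 1% of pixels to differ.
function passes(pixelCountDiff, totalPixels, failureThreshold, failureThresholdType) {
  if (failureThresholdType === 'percent') {
    return pixelCountDiff / totalPixels <= failureThreshold;
  }
  return pixelCountDiff <= failureThreshold; // 'pixel': allow up to N differing pixels
}

// The 501 differing pixels from the output above are ~0.034% of a ~1.46MP image:
console.log(passes(501, 1458000, 0.01, 'percent')); // true
console.log(passes(501, 1458000, 0, 'percent'));    // false
```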

Only update image snapshots with failing tests

Some of my images have minor differences (sometimes just one RGB value off by 1, most of the time some anti-aliasing). For that reason I set the thresholds accordingly, so that this wouldn't fail my tests. This works fine.

But if I have an expected image difference and I run jest --update, all the images get updated, even those with minor differences below the threshold. This leads to screenshots of unchanged screens being in the commit, which is confusing, at least to the commit reviewer.

Am I doing something wrong, or is this a bug or intended behaviour? What can I do to automatically update only those screenshots where the test fails?

Invalid File Signature

Page.loadEventFired(async () => {
  const screenshot = await Page.captureScreenshot({format: 'png'});
  expect(screenshot.data).toMatchImageSnapshot();
});

This returns Invalid File Signature

I tried this with Chromeless and it works, but I am using chrome-remote-interface, and after looking at the way Chromeless implemented it, it looks pretty much the same. Why am I getting this error?
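Worth noting: Page.captureScreenshot resolves with a base-64 encoded string in its data field, so decoding it first may help (an assumption about the cause; the object shape below is mimicked for illustration):

```javascript
// Mimic the CDP response shape: { data: <base-64 string> }. The payload here
// is just the first 4 bytes of the PNG signature, for illustration.
const screenshot = { data: Buffer.from([0x89, 0x50, 0x4e, 0x47]).toString('base64') };

// Decode to raw bytes before asserting on it.
const buf = Buffer.from(screenshot.data, 'base64');
console.log(buf[0] === 0x89); // true
```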

Image diffing options

From discussions on #38, it seems that there are a couple levels of diffing that we need here. This is primarily due to performance.

  1. Current image diff output - overlay with changed pixels highlighted in red
  2. Side-by-side diff output
  3. No image diffing output. There should ideally be an early-exit mode for continuous integration. In basic CI setups, the diff images are thrown away when the CI finishes, so there is little reason to spend the time rendering the whole diff image.

The biggest gains right now would be from adding the side-by-side diff output as an option.

How does it work?

In the "How it works" section, I don't understand what image is supposed to be in the following:

expect(image).toMatchImageSnapshot();

Is it the name of the image or the path? Does this mean that I have to provide a base image or does the library generate both the base and the comparison image?

Implement obsolete snapshot functionality

When I remove a test that asserts using toMatchImageSnapshot(), I should get an obsolete snapshot warning the next time I run jest.

This should work exactly like jest's toMatchSnapshot()

How long should the image comparison typically take?

I have a really basic Jest test that navigates to a screen in our app using puppeteer, takes a screenshot, then uses toMatchImageSnapshot. Unfortunately, the test takes about thirty seconds after adding the toMatchImageSnapshot matcher. I have another script where I use puppeteer to take two screenshots to compare them with Resemble outside of a Jest environment and it only takes about four seconds. Any idea as to what might be causing this drastic increase in run time?

Generate HTML report?

Any plans for this?

I explored whether it would be possible to do this via a custom Jest reporter, but it seems it's not possible to pass custom metadata (the name of the diff file) from the toMatchImageSnapshot matcher to the reporters :/

Browser options

Hey,

I was wondering: does this library only work with headless Chrome, or is it possible to test with different browsers?

Test failing despite 0% Difference and 0 Differing Pixels

Hi,

I'm running the following test:

test("PD Test", async () => {
    await page.addStyleTag({
      content: `
        * { transition: none !important; }
        accordion__content { display: none !important; }
       #colour-thumbnails, #size-thumbnails { opacity: 0 !important; }
       #footer  { opacity: 1 !important; transition: none !important; }
      `
    });
    await page.waitForNavigation({ waitUntil: "domcontentloaded" });
  
    await pageHelpers.screenshotTest(page);
  }, 30000 );

where the pageHelper.screenshotTest() function is...

"screenshotTest": async (page, failureThreshold = "0.01") => {
    const screenshot = await page.screenshot({
      "fullPage": true,
      "omitBackground": true,
    });
    expect(screenshot)
      .toMatchImageSnapshot({
        "customDiffConfig": {
          "threshold": "0.5",
        },
        failureThreshold,
        "failureThresholdType": "percent",
      });
  },

And am seeing tests fail with this result:

Expected image to match or be a close match to snapshot but was 0% different from snapshot (0 differing pixels).

Full error:

Expected image to match or be a close match to snapshot but was 0% different from snapshot (0 differing pixels).
    See diff for details: __tests__/__image_snapshots-linux__/__diff_output__/baseline-04-pd-page-1-diff.png
      44 |     });
      45 |     expect(screenshot)
    > 46 |       .toMatchImageSnapshot({
      47 |         "customDiffConfig": {
      48 |           "threshold": "0.5",
      49 |         },
      
      at _callee4$ (__tests-config/page-helpers.js:46:8)
      at tryCatch (node_modules/regenerator-runtime/runtime.js:62:40)
      at Generator.invoke [as _invoke] (node_modules/regenerator-runtime/runtime.js:296:22)
      at Generator.prototype.(anonymous function) [as next] (node_modules/regenerator-runtime/runtime.js:114:21)
      at step (__tests-config/page-helpers.js:3:191)
      at __tests-config/page-helpers.js:3:361

Feature: document which image formats are supported

Hello,

Thanks for making such an excellent library! It would be great if the supported image formats were documented. I can see from the screencast that PNG files are supported. Is there anything else?

It would be great if SVG files were supported as well!

Thanks for considering!

Question: how to ignore certain areas/elements on the screenshot

Hi.
I'm considering switching from Gemini to puppeteer + jest-image-snapshot.
In most visual regression testing tools there is some kind of ignoreElements option, which allows you to ignore visual differences in certain page areas. I can't find any way to do that with the new stack. I've tried searching for a solution in puppeteer (do not include certain areas in the screenshot), in this repo, and in pixelmatch (do not compare certain areas), with no luck. The closest thing I've found is this unresolved issue: mapbox/pixelmatch#9.
Any tips here? I believe this is a crucial feature for visual regression testing.

I see that in the past jest-image-snapshot utilized blink-diff, which provided such a feature under the name black-out.
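Until something like black-out exists here, one DIY approach is to paint the ignored region a fixed colour in both images' raw RGBA data before comparing. A sketch assuming a flat RGBA buffer like the one pngjs exposes as png.data (the helper name is made up):

```javascript
// Paint a rectangle black in a flat RGBA pixel buffer, so an ignored region
// always compares equal between baseline and screenshot.
function blackOut(data, imgWidth, { x, y, width, height }) {
  for (let row = y; row < y + height; row++) {
    for (let col = x; col < x + width; col++) {
      const idx = (row * imgWidth + col) * 4;
      data[idx] = 0;       // R
      data[idx + 1] = 0;   // G
      data[idx + 2] = 0;   // B
      data[idx + 3] = 255; // A
    }
  }
  return data;
}

// 2x2 image filled with mid-grey; black out the top-left pixel:
const img = Buffer.alloc(2 * 2 * 4, 128);
blackOut(img, 2, { x: 0, y: 0, width: 1, height: 1 });
console.log(img[0], img[3]); // 0 255
```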

Test isn't erroring even though there is clearly a diff

Testing this out in our repo, the screenshots and the diff get generated correctly, which is great.

However, the test passes even though a diff has been generated.

Here is the test I wrote:

/* @flow */
import { toMatchImageSnapshot } from 'jest-image-snapshot';
import { Chromeless } from 'chromeless';
import fs from 'fs';

expect.extend({ toMatchImageSnapshot });

describe('classes landing page', () => {
  it('renders correctly', async () => {
    const chromeless = new Chromeless();
    const screenshotPath = await chromeless
      .goto('http://www.remind.dev/classes')
      .wait('.SignInForm-submit')
      .screenshot();
    await chromeless.end();
    const screenshot = fs.readFileSync(screenshotPath);
    const blinkDiffConfig = { threshold: 1 };
    // $FlowIssue
    expect(screenshot).toMatchImageSnapshot({
      customDiffConfig: blinkDiffConfig,
    });
  });
});

We're on Jest 20.0.0

Converting circular structure to JSON

Hello!

I'm receiving the error TypeError: Converting circular structure to JSON when attempting to diff images in the following versions:

  • 2.4.0
  • 2.3.0

I do not receive this error in the following versions:

  • 1.0.0
  • 1.0.1

Additional testing parameters:

  • all versions work with node:8.9
  • Versions 2.4 & 2.3 fail with node:9.11.1 and node:9.2.1

Full Error

 FAIL  src/tests/imageTests/ExampleForm2.test.js (16.907s)                                                                                                                                              [0/4193]
  Image Tests
    ✕ pass Image Snapshot tests for default form (11275ms)

  ● Image Tests › pass Image Snapshot tests for default form

    TypeError: Converting circular structure to JSON
        at JSON.stringify (<anonymous>)

      130 |   const screenshot = await getFile(screenshotPath);
      131 |   await chromeless.end();
    > 132 |   expect(screenshot).toMatchImageSnapshot({ customDiffConfig: { threshold: 0 } });
      133 | };
      134 |
      135 | export default runImageSnapshotTest;

      at diffImageToSnapshot (node_modules/jest-image-snapshot/src/diff-snapshot.js:161:134)
      at Object.toMatchImageSnapshot (node_modules/jest-image-snapshot/src/index.js:50:20)
      at Object.throwingMatcher [as toMatchImageSnapshot] (node_modules/expect/build/index.js:214:24)
      at _callee$ (src/tests/imageTests/runImageSnapshotTest.js:132:22)
      at tryCatch (node_modules/regenerator-runtime/runtime.js:62:40)
      at Generator.invoke [as _invoke] (node_modules/regenerator-runtime/runtime.js:296:22)
      at Generator.prototype.(anonymous function) [as next] (node_modules/regenerator-runtime/runtime.js:114:21)
      at step (node_modules/@babel/runtime/helpers/asyncToGenerator.js:12:30)
      at _next (node_modules/@babel/runtime/helpers/asyncToGenerator.js:27:9)

How can we obtain diff_output files from a cloud CI environment?

Hi,

We're very excited about the idea of setting up some new end to end tests with the combo of jest-image-snapshot / jest / puppeteer / Headless Chrome.

We're nearing (what I hope are) the final steps, but the image comparisons are failing in the CI environment (BitBucket Pipelines, but using the same Docker image that we're developing with locally).

I'm trying to figure out how I can get access to the diff_output files, since BitBucket Pipelines doesn't persist the file system. If one step fails (npm test) then the whole build fails, and there's no opportunity to upload/send the artifacts created in the previous step.

The only idea I've come up with so far is mounting an external file system to the docker image (something like https://github.com/s3fs-fuse/s3fs-fuse/), so that the files get sync'd/uploaded while they get written to the file system, rather than relying on a subsequent operation.

Does anyone have other approaches for obtaining the diff_output files, or persisting them in an ephemeral CI context?

Cheers,
John

Assertion Error

I am getting an error when running the test for the second time:

AssertionError [ERR_ASSERTION]: zlib binding closed uitk-react-example: uitk-react-example: at Object.<anonymous>.test (test/visual.spec.js:30:23) uitk-react-example: at <anonymous> uitk-react-example: at process._tickCallback (internal/process/next_tick.js:169:7)

This is the test:

test('visual regression', async () => {
  const selector = 'a[href*="?selectedKind=Buttons&selectedStory="]';
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:8080');

let iframe = await page.evaluate(() => {
  const iframe = document.getElementById('storybook-preview-iframe');
  return iframe.getBoundingClientRect();
});

// console.log(iframe);
let storyLinks = await page.evaluate(() => {
  const stories = Array.from(document.querySelectorAll('a[href*="?selectedKind=Example&selectedStory="]'));
  const links = stories.map(story => {
    let name = story.text.replace(/,*\s+/g, "-");
    return {href: story.href, name: name};
  });
  return links;
});

for(let i = 0; i < storyLinks.length; i++){

 await page.goto(storyLinks[i].href);
 const screen = await page.screenshot({clip: {x:iframe.x, y: iframe.y, width: iframe.width, height: iframe.height}});
 await expect(screen).toMatchImageSnapshot({customSnapshotIdentifier: storyLinks[i].name});
}

 await browser.close();
});

I believe it has something to do with this project. It comes from this line:
await expect(screen).toMatchImageSnapshot({customSnapshotIdentifier: storyLinks[i].name});

"Image size must match baseline" error

I'm trying to use this tool to compare a site in two different states using puppeteer's screenshots. But due to CSS and content differences between the two states, the heights of the full pages differ. This prevents the diff from running, because the full-height image files won't have the same dimensions.

There are a bunch of ad-hoc ways to fix this, like clipping the shots to the custom needs of any given difference. But it would be really useful if image-snapshot had a mode where it compared only up to the height of the shorter of the two snapshots, or in some other way allowed heights to differ (counting the extra height as 100% pixel diff).
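As a pre-processing workaround (not something the library does), the two images could be cropped to their common height before diffing. A sketch over flat RGBA buffers, assuming equal widths (the helper name is made up):

```javascript
// Crop two flat RGBA pixel buffers to their common height so images of
// differing page heights can still be diffed. Widths are assumed equal.
function cropToCommonHeight(a, b, width) {
  const rowBytes = width * 4; // 4 bytes (RGBA) per pixel
  const heightA = a.length / rowBytes;
  const heightB = b.length / rowBytes;
  const common = Math.min(heightA, heightB) * rowBytes;
  return [a.subarray(0, common), b.subarray(0, common)];
}

const tall = Buffer.alloc(2 * 3 * 4, 1);  // 2px wide, 3px tall
const short = Buffer.alloc(2 * 2 * 4, 1); // 2px wide, 2px tall
const [a2, b2] = cropToCommonHeight(tall, short, 2);
console.log(a2.length === b2.length); // true
```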

Diffs returning errors instead of fails

When there is a diff between the images, the tests throw an error instead of reporting a failed test. Is that normal behaviour, or how should I handle a failed test?
