
SnapshotTesting Stitch


An extension to SnapshotTesting which allows you to create images combining the output of multiple snapshot strategies, assuming they all output to UIImage.

In essence, this allows you to have a single image which represents a single view in your application, shown in multiple different configurations. This might be useful, for example, where you want to visualize the same UIViewController on multiple devices or in light and dark mode.

Images may also have titles, allowing you to easily identify each configuration within the image.

[Image: example output of the package, showing seven blue views with red borders, each with a title above it naming the view.]

Usage

Once installed, no additional configuration is required. Import the SnapshotTestingStitch module, write your test following SnapshotTesting's usage guide, and provide the stitch strategy as shown below.

import SnapshotTesting
import SnapshotTestingStitch
import XCTest

class MyViewControllerTests: XCTestCase {
  func testMyViewController() {
    let vc = MyViewController()

    assertSnapshot(matching: vc, as: .stitch(strategies: [
      .image(on: .iPhone8),
      .image(on: .iPhone8Plus),
    ]))
  }
}

Titles

By default, strategies provided as a plain array are untitled. If you instead provide a tuple containing a string and a strategy, the string is rendered as a title above that image in the snapshot.

assertSnapshot(matching: vc, as: .stitch(strategies: [
  ("iPhone 8", .image(on: .iPhone8)),
  ("iPhone 8 Plus", .image(on: .iPhone8Plus)),
]))

Customization

The stitch strategy also accepts an optional "style" parameter. This value allows you to customize parts of the rendered snapshot generated by the package.

This includes the spacing around the images, the colors used, and an optional border which can surround each image. The border can be useful for clearly identifying the bounds of each image - especially if the image background is the same as the snapshot background.

Opinionated defaults have already been provided for you.

assertSnapshot(matching: vc, as: .stitch(strategies: [
  ("iPhone 8", .image(on: .iPhone8)),
  ("iPhone 8 Plus", .image(on: .iPhone8Plus)),
], style: .init(
  fontSize: 20,
  titleColor: .white,
  borderColor: .red,
  borderWidth: 5,
  itemSpacing: 32,
  framePadding: 32,
  titleSpacing: 32,
  backgroundColor: .black
)))

Installation

Xcode 11

⚠️ Warning: By default, Xcode will try to add the SnapshotTestingStitch package to your project's main application/framework target. Please ensure that SnapshotTestingStitch is added to a test target instead, as documented in the last step, below.

  1. From the File menu, navigate through Swift Packages and select Add Package Dependency….
  2. Enter the package repository URL: https://github.com/Sherlouk/swift-snapshot-testing-stitch
  3. Confirm the version and let Xcode resolve the package.
  4. On the final dialog, update SnapshotTestingStitch's Add to Target column to a test target that will contain your snapshot tests. (If you have more than one test target, you can later add SnapshotTestingStitch to the others by manually linking the library in their build phases.)

Swift Package Manager

If you want to use SnapshotTestingStitch in any other project that uses Swift Package Manager, add the package as a dependency in Package.swift:

dependencies: [
  .package(name: "SnapshotTestingStitch", url: "https://github.com/Sherlouk/swift-snapshot-testing-stitch.git", from: "1.0.0"),
]

Next, add SnapshotTestingStitch as a dependency of your test target:

targets: [
  .target(
    name: "MyApp"
  ),
  
  .testTarget(
    name: "MyAppTests", 
    dependencies: [
      .target(name: "MyApp"),
      .product(name: "SnapshotTestingStitch", package: "SnapshotTestingStitch"),
    ]
  ),
]

Other

We do not currently support distribution through CocoaPods or Carthage.

License

This library is released under the MIT license. See LICENSE for details.

Contributors

alexey1312, gxs-antran, mojtabahs, sherlouk


Issues

Add CocoaPods Support

I'm fortunate that all of my personal and work projects support Swift Package Manager in some part which enables us to use this project.

I understand not everyone is in the same boat, so if anybody needs CocoaPods support then feel free to let me know here and I can add compatibility for you.

I won't do it unless someone asks, though; it's always preferable to keep things light and easy!

Add `configure` block to allow input customisation

The idea of the library is to allow users to stitch multiple variants of the same value into a single image. This is possible using the different traits (such as dynamic text size or user interface style) or image strategies (such as device size).

However, it doesn't allow you to apply customisations which are made at a custom class level. In other words, we should add the ability to change variables on the view itself for each test in the stitch.

assertSnapshot(matching: vc, as: .stitch(strategies: [
  ("iPhone 8", .image(on: .iPhone8), configure: { $0.theme = .light }),
  ("iPhone 8 Plus", .image(on: .iPhone8Plus), configure: { $0.theme = .dark }),
]))

Custom Snapshot Diffing

⚠️ I'm writing this down for my own sanity. It's just an idea, and not something I think somebody should actually pick up. Though I'm always interested if people have their own opinions.

Currently, as part of the initialisation of the Snapshotting type, we provide the same diffing engine as SnapshotTesting's own .image strategy.

This diffing solution compares pixel by pixel across our entire output image. This works as expected, but it could be better with our own solution.

Specifically, if we had the ability to diff each 'nested image' independently, then this would enable us to provide considerably better feedback to the user - aiding in their work to fix the error!

How would we do this?

Well, a diffing engine has two requirements.

First, the ability to losslessly convert the 'Value' (in our case an instance of UIImage) to Data and back to a UIImage, in order to save the file to disk. For our use case we can copy what SnapshotTesting does and simply convert to and from PNG.
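This first requirement can be sketched using SnapshotTesting's Diffing type. The closures below mirror the PNG round trip; the diff closure is only a placeholder for the nested-image comparison discussed in this issue, and the force-unwraps are for brevity rather than production use:

```swift
import UIKit
import XCTest
import SnapshotTesting

// Sketch of requirement one: losslessly round-trip a UIImage via PNG,
// mirroring what SnapshotTesting's own `.image` diffing does.
let stitchDiffing = Diffing<UIImage>(
    toData: { image in
        // PNG is lossless, so the snapshot survives the trip to disk.
        image.pngData()!
    },
    fromData: { data in
        UIImage(data: data)!
    },
    diff: { reference, candidate in
        // Placeholder only: a real implementation would split both
        // images into their nested tiles and compare each pair.
        reference.pngData() == candidate.pngData()
            ? nil
            : ("Images differ", [])
    }
)
```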

The second is more complex: we have to compare two versions of 'Value' to check whether they're the same. We could start with a crude precision check; this would be our "happy path", allowing the test to succeed quickly when the two images are identical.

If they were different, however, we would need to break each image down into its separate 'nested images' and then run the comparison on those: first checking that each set contains the same images, then checking each pair for likeness (with an optional precision). This would need access to the original metadata (title/strategy information), which we could pass through.
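A rough sketch of that first stage, assuming a hypothetical StitchTile metadata type (the type, field, and function names here are all invented for illustration):

```swift
import Foundation

// Hypothetical metadata describing one tile within the stitched image.
// The real package would derive this from the title/strategy info.
struct StitchTile: Hashable, Codable {
    let title: String?
    let x: Int
    let y: Int
    let width: Int
    let height: Int
}

// Stage one: check both snapshots contain the same set of tiles.
// Stage two (not shown) would crop each tile out of the two images and
// compare the pixel data, with an optional precision.
func tileSetDifference(reference: [StitchTile], candidate: [StitchTile]) -> Set<StitchTile> {
    Set(reference).symmetricDifference(Set(candidate))
}
```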

The key challenge will be trying to break a UIImage down into this object with metadata. Ideally we would store a JSON file alongside the image file in order to safely serve this data - I'm not sure this is feasible without substantial API changes in the SnapshotTesting library. Instead, I think we're going to need to encode this data directly into the image itself 😱

Essentially, the first pixel would denote how many rows of pixels are used to encode the data. The remaining pixels would then be parsed into a Data object, which would in turn be decoded into a Codable struct. The exact science here is still to be explored, but it is technically viable.
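The capacity arithmetic behind that header pixel can be sketched with plain Foundation; four usable bytes per RGBA pixel is an assumption about the bitmap format:

```swift
import Foundation

// Sketch of the capacity calculation for encoding metadata into pixel
// rows. Assumes an RGBA image, i.e. four usable bytes per pixel; the
// first pixel would be reserved to store the row count itself.
func rowsNeeded(forPayload byteCount: Int, imageWidth: Int) -> Int {
    let bytesPerRow = imageWidth * 4
    // Integer ceiling: a partial row still occupies a full row of pixels.
    return (byteCount + bytesPerRow - 1) / bytesPerRow
}
```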

One of the downsides of this approach (besides being awfully complex, with benefits that may not outweigh the cost; more on this later) is that tools which do external diffing might pick up on the subtle changes to our data pixels. I don't think this is likely to cause problems, though, as the data we would encode would simply record the coordinates used to store each image and the title attached to it. Both of these pieces of information would already cause a significant enough change to be caught by those external tools.

Is this worth it?

From a nerdy perspective, I'd love to explore it. Though it's very easy to win an argument where we say this is completely unnecessary. I think, if this does get added, we should definitely make it optional (at least initially) and it should be packaged separately requiring users to explicitly opt-in.

Increase Test Coverage

We should increase the test coverage to demonstrate more real-world examples, as opposed to flat blue blobs (which I'm pretty sure are against Apple's HIG and wouldn't be allowed in production).

I want to demonstrate (and then document with images in the README) various different use cases including:

  • Different Themes (A view controller which responds to light/dark mode)
  • Different Localisations (A view controller which changes based on the user's locale)
  • Different Accessibility Modes (A view controller which supports dynamic text sizes, smart invert, etc)
  • Different Devices (A view controller which changes depending on the device or screen size)
  • Different Sizes (A view/cell which changes based on the rendered size)

Other ideas are welcome.

I also want to add some extra tests including:

  • A performance test covering the entire stitch strategy
  • A test which uses a custom strategy which adds random timeouts to the snapshots (and as such will return out of order - testing our sorting solution)
  • A test which checks the title trimming approach
  • Tests to cover independent units of code throughout the package (especially the calculateImageSize function)
