Expecto

An advanced testing library for F#



Expecto aims to make it easy to test CLR based software; be it with unit tests, stress tests, regression tests or property based tests. Expecto tests are parallel and async by default, so that you can use all your cores for testing your software. This also opens up a new way of catching threading and memory issues for free using stress testing.

Parallel by default

With Expecto you write tests as values. Tests can be composed, reduced, filtered, repeated and passed as values, because they are values. This gives the programmer a lot of leverage when writing tests. Setup and teardown are just simple functions, no need for attributes.

Expecto comes batteries-included with an integrated test runner, yet it stays open for extension thanks to its compositional model.

Expecto comes with performance testing, making statistically sound performance comparison simple.

Expecto also provides a simple API for property based testing using FsCheck.

Quickstart

dotnet new install Expecto.Template::*
dotnet new expecto -n PROJECT_NAME -o FOLDER_NAME

Follow Smooth Testing on YouTube to learn the basics.

This README also serves as the documentation for the project.

Installing

In your paket.dependencies:

nuget Expecto
nuget Expecto.BenchmarkDotNet
nuget Expecto.FsCheck
nuget Expecto.Hopac

Tests should be first-class values so that you can move them around and execute them in any context that you want.

Let's have a look at what an extensive unit test suite looks like when running with Expecto:

Sample output from Logary

IDE integrations

There's a NuGet Expecto.VisualStudio.TestAdapter for Visual Studio integration.

.NET integration

You can use dotnet run or dotnet watch from the command line.

dotnet watch -p MyProject.Tests run -f net6.0

Prettify stacktraces/ship test logs

To get a complete logging solution and stacktrace highlighting, parsing and the ability to ship your build logs somewhere, also add these:

nuget Logary.Adapters.Facade prerelease

And in your tests:

open Hopac
open Logary
open Logary.Configuration
open Logary.Adapters.Facade
open Logary.Targets

[<EntryPoint>]
let main argv =
  let logary =
    Config.create "MyProject.Tests" "localhost"
    |> Config.targets [ LiterateConsole.create LiterateConsole.empty "console" ]
    |> Config.processing (Events.events |> Events.sink ["console";])
    |> Config.build
    |> run
  LogaryFacadeAdapter.initialise<Expecto.Logging.Logger> logary

  // Invoke Expecto:
  runTestsInAssemblyWithCLIArgs [] argv

Now, when you use Logary in your app, you can see your log messages together with the log output/summary/debug printing of Expecto, and the output won't be interleaved due to concurrency.

TestResults file

Use --nunit-summary TestResults.xml or --junit-summary TestResults.junit.xml (JUnit support is incomplete).
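
For example, assuming your test project is a console app run via dotnet run:

dotnet run -- --nunit-summary TestResults.xml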

.NET support

Expecto has its own .NET template! You can create a base .NET project with Expecto by running the following:

dotnet new install 'Expecto.Template::*'
dotnet new expecto -n PROJECT_NAME -o FOLDER_NAME

How to run it?

dotnet restore
dotnet run

How to create expecto template

Testing "Hello world"

The test runner is the test assembly itself. It's recommended to compile your test assembly as a console application. You can run a test directly like this:

open Expecto

let tests =
  test "A simple test" {
    let subject = "Hello World"
    Expect.equal subject "Hello World" "The strings should equal"
  }

[<EntryPoint>]
let main args =
  runTestsWithCLIArgs [] args tests

No magic is involved here. We just created a single test and hooked it into the assembly entry point.

The Expect module contains functions that you can use to assert with. A testing library without a good assertion library is like love without kisses.

Now compile and run!

xbuild Sample.fsproj && mono --debug bin/Debug/Sample.exe

Running tests

Here's a simple test:

open Expecto

let simpleTest =
  testCase "A simple test" <| fun () ->
    let expected = 4
    Expect.equal expected (2+2) "2+2 = 4"

Then run it like this, e.g. in F# interactive or through a console app.

runTestsWithCLIArgs [] [||] simpleTest

which returns 1 if any tests failed, otherwise 0. Useful for returning to the operating system as error code.

It's worth noting that <| is just a way to change the parsing precedence, avoiding parentheses. In other words, it's equivalent to:

testCase "A simple test" (fun () ->
  Expect.equal 4 (2+2) "2+2 should equal 4")

runTestsWithCLIArgs

Signature CLIArguments seq -> string[] -> Test -> int. Runs the passed tests and also overrides the passed CLIArguments with the command line parameters.

runTestsWithCLIArgsAndCancel

Signature CancellationToken -> CLIArguments seq -> string[] -> Test -> int. Like runTestsWithCLIArgs, but additionally takes a CancellationToken for cancelling the run.

runTestsInAssemblyWithCLIArgs

Signature CLIArguments seq -> string[] -> int. Runs the tests in the current assembly and also overrides the passed CLIArguments with the command line parameters. All tests need to be marked with the [<Tests>] attribute.

runTestsInAssemblyWithCLIArgsAndCancel

Signature CancellationToken -> CLIArguments seq -> string[] -> int. Runs the tests in the current assembly and also overrides the passed CLIArguments with the command line parameters. All tests need to be marked with the [<Tests>] attribute.
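
For example, a minimal sketch of reflection-based discovery with runTestsInAssemblyWithCLIArgs (the test value below is illustrative):

open Expecto

[<Tests>]
let discovered =
  testList "discovered" [
    testCase "addition" <| fun () -> Expect.equal (1+1) 2 "1+1"
  ]

[<EntryPoint>]
let main argv =
  // runs every value in this assembly marked with [<Tests>]
  runTestsInAssemblyWithCLIArgs [] argv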

Filtering with filter

You can single out tests by filtering them by name (e.g. in the interactive/REPL). For example:

open Expecto
open MyLib.Tests
integrationTests // from MyLib.Tests
|> Test.filter defaultConfig.joinWith.asString (fun z -> (defaultConfig.joinWith.format z).StartsWith "another test" ) // the filtering function
|> runTestsWithCLIArgs [] [||]

Shuffling with shuffle

You can shuffle the tests randomly to help ensure there are no run order dependencies. For example:

open Expecto
open MyLib.Tests
myTests // from MyLib.Tests
|> Test.shuffle defaultConfig.joinWith.asString
|> runTestsWithCLIArgs [] [||]

Stress testing

Tests can also be run randomly for a fixed length of time. The idea is that this will catch the following types of bugs:

  • Memory leaks.
  • Threading bugs when running the same test at the same time.
  • Rare threading bugs.
  • Rare property test fails.

The default config will run FsCheck tests with a higher end size than normal.
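
For example, hedging on the exact flag names (your binary's --help will confirm them), a stress run for 0.1 minutes with a 0.2 minute timeout could be started with:

dotnet run -- --stress 0.1 --stress-timeout 0.2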

Writing tests

Expecto supports the following test constructors:

  • normal test cases with testCase, testCaseAsync and testCaseTask
  • lists of tests with testList
  • test fixtures with testFixture, testFixtureAsync, testFixtureTask
  • pending tests (that aren't run) with ptestCase, ptestCaseAsync and ptestCaseTask
  • focused tests (that are the only ones run) with ftestCase, ftestCaseAsync and ftestCaseTask
  • sequenced tests with testSequenced and testSequencedGroup (tests inside a group are run in sequence w.r.t each other)
  • parameterised tests with testParam
  • test cases with the computation expression builders test, ptest and ftest, supporting deterministic disposal (use), loops and the like
  • property based tests with testProperty, testPropertyWithConfig and testPropertyWithConfigs, testPropertyWithConfigsStdGen, testPropertyWithConfigStdGen from Expecto.FsCheck
  • performance tests with Expecto.BenchmarkDotNet and benchmark<TBench> : string -> Test.
  • wrapping your test with a label with testLabel. If your root label is the same across your test project, you'll have an easier time filtering tests.

All of the above compile to a Test value that you can compose. For example, you can compose a test and a testCaseAsync in a testList which you wrap in testSequenced because all tests in the list use either Expect.fasterThan or they are using Expecto.BenchmarkDotNet for performance tests. You have to remember that the fully qualified names of tests need to be unique across your test project.

Normal tests

  • test : string -> TestCaseBuilder - Builds a test case in a computation expression.
  • testAsync : string -> TestAsyncBuilder - Builds an async test case in a computation expression.
  • testTask : string -> TestTaskBuilder - Builds a task test case in a computation expression.
  • testCase : string -> (unit -> unit) -> Test - Builds a test case from a test function.
  • testCaseAsync : string -> Async<unit> -> Test - Builds an async test case from an async expression.
  • testCaseTask : string -> (unit -> Task<unit>) -> Test - Builds an async test case from a function returning a task. Unlike async, tasks start right away and thus must be wrapped in a function so the task doesn't start until the test is run.

testList for grouping

Tests can be grouped (with arbitrary nesting):

let tests =
  testList "A test group" [
    test "one test" {
      Expect.equal (2+2) 4 "2+2"
    }

    test "another test that fails" {
      Expect.equal (3+3) 5 "3+3"
    }

    testAsync "this is an async test" {
      let! x = async { return 4 }
      Expect.equal x (2+2) "2+2"
    }

    testTask "this is a task test" {
      let! n = Task.FromResult 2
      Expect.equal n 2 "n=2"
    }
  ]
  |> testLabel "samples"

Also have a look at the samples.

Test fixtures

  • testFixture : ('a -> unit -> unit) -> (seq<string * 'a>) -> seq<Test>

The test fixture takes a factory and a sequence of partial tests. The 'a parameter is inferred from the partial tests; in the example below it is MemoryStream -> unit, making the factory type (MemoryStream -> unit) -> unit -> unit.

Example:

testList "Setup & teardown 3" [
  let withMemoryStream f () =
    use ms = new MemoryStream()
    f ms
  yield! testFixture withMemoryStream [
    "can read",
      fun ms -> ms.CanRead ==? true
    "can write",
      fun ms -> ms.CanWrite ==? true
  ]
]
  • testFixtureAsync : ('a -> unit -> Async<unit>) -> (seq<string * 'a>) -> seq<Test>

The async test fixture takes a factory and a sequence of partial tests. The 'a parameter is inferred from the partial tests; in the example below it is MemoryStream -> Async<unit>.

Example:

testList "Setup & teardown 4" [
  let withMemoryStream f = async {
    use ms = new MemoryStream()
    do! f ms
  }
  yield! testFixtureAsync withMemoryStream [
    "can read",
      fun ms -> async { return ms.CanRead ==? true }
    "can write",
      fun ms -> async { return ms.CanWrite ==? true }
  ]
]
  • testFixtureTask : ('a -> unit -> Task<unit>) -> (seq<string * 'a>) -> seq<Test>

The task test fixture takes a factory and a sequence of partial tests. The 'a parameter is inferred from the partial tests; in the example below it is MemoryStream -> Task<unit>.

Example:

testList "Setup & teardown 5" [
  let withMemoryStream f = task {
    use ms = new MemoryStream()
    do! f ms
  }
  yield! testFixtureTask withMemoryStream [
    "can read",
      fun ms -> task { return ms.CanRead ==? true }
    "can write",
      fun ms -> task { return ms.CanWrite ==? true }
  ]
]

Theory tests

  • testTheory : string -> seq<'a> -> ('a -> 'b) -> Test

The test theory takes a name and a sequence of cases to test against. The 'a parameter is inferred from the sequence's element type; for a seq<int> the signature specialises to string -> seq<int> -> (int -> 'b) -> Test.

Example:

testList "theory testing" [
  testTheory "odd numbers" [1; 3; 5] <| fun x ->
    Expect.isTrue (x % 2 = 1) "should be odd"
]

Theory tests can simulate multiple parameters via tuples, for example passing the input together with an expected result.

Example:

testList "theory testing with an expected result" [
  testTheory "sum numbers" [(1,1),2; (2,2),4] <| fun ((a,b), expected) ->
    Expect.equal (a+b) expected "should be equal"
]

Pending tests

  • ptestCase
  • ptest
  • ptestAsync
  • ptestTask
  • ptestCaseAsync
  • ptestCaseTask
  • ptestTheory
  • ptestTheoryAsync
  • ptestTheoryTask

You can mark an individual spec or container as Pending. This will prevent the spec (or the specs within the list) from running. You do this by adding a p before testCase or testList, or P before the Tests attribute (when reflection-based test discovery is used).

open Expecto

[<PTests>]
let skippedTestFromReflectionDiscovery = testCase "skipped" <| fun () ->
    Expect.equal (2+2) 4 "2+2"

[<Tests>]
let myTests =
  testList "normal" [
    testList "unfocused list" [
      ptestCase "skipped" <| fun () -> Expect.equal (2+2) 1 "2+2?"
      testCase "will run" <| fun () -> Expect.equal (2+2) 4 "2+2"
      ptest "skipped" { Expect.equal (2+2) 1 "2+2?" }
      ptestAsync "skipped async" { Expect.equal (2+2) 1 "2+2?" }
    ]
    testCase "will run" <| fun () -> Expect.equal (2+2) 4 "2+2"
    ptestCase "skipped" <| fun () -> Expect.equal (2+2) 1 "2+2?"
    ptestList "skipped list" [
      testCase "skipped" <| fun () -> Expect.equal (2+2) 1 "2+2?"
      ftest "skipped" { Expect.equal (2+2) 1 "2+2?" }
    ]
  ]

Optionally, in the TestCode (function body):

  • Tests.skiptest
  • Tests.skiptestf
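
A minimal sketch of runtime skipping using these functions (the environment-variable check is illustrative):

testCase "integration test" <| fun () ->
  if isNull (System.Environment.GetEnvironmentVariable "RUN_INTEGRATION") then
    Tests.skiptest "set RUN_INTEGRATION to run this test"
  // the rest of the test body runs only when the variable is set
  Expect.isTrue true "placeholder assertion"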

Focusing tests

Focusing can be done with

  • ftestCase
  • ftestList
  • ftestCaseAsync
  • ftestCaseTask
  • ftest
  • ftestAsync
  • ftestTask
  • ftestTheory
  • ftestTheoryAsync
  • ftestTheoryTask

It is often convenient, when developing, to be able to run a subset of specs. Expecto allows you to focus specific test cases or test lists by putting f before testCase or testList, or F before the Tests attribute (when reflection-based test discovery is used).

open Expecto
[<FTests>]
let someFocusedTest = test "will run" { Expect.equal (2+2) 4 "2+2" }
[<Tests>]
let someUnfocusedTest = test "skipped" { Expect.equal (2+2) 1 "2+2?" }

or

open Expecto

[<Tests>]
let focusedTests =
  testList "unfocused list" [
    ftestList "focused list" [
      testCase "will run" <| fun () -> Expect.equal (2+2) 4 "2+2"
      ftestCase "will run" <| fun () -> Expect.equal (2+2) 4 "2+2"
      test "will run" { Expect.equal (2+2) 4 "2+2" }
    ]
    testList "unfocused list" [
      testCase "skipped" <| fun () -> Expect.equal (2+2) 1 "2+2?"
      ftestCase "will run" <| fun () -> Expect.equal (2+2) 4 "2+2"
      test "skipped" { Expect.equal (2+2) 1 "2+2?" }
      ftest "will run" { Expect.equal (2+2) 4 "2+2" }
    ]
    testCase "skipped" <| fun () -> Expect.equal (2+2) 1 "2+2?"
  ]

Expecto accepts the command line argument --fail-on-focused-tests, which checks if focused tests exist. This parameter can be set in build scripts and allows CI servers to reject commits that accidentally included focused tests.
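
For example, as a CI build step (assuming a dotnet run-able test project):

dotnet run -- --fail-on-focused-tests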

Sequenced tests

You can mark an individual spec or container as Sequenced. This will make sure these tests are run sequentially. This can be useful for timeout and performance testing.

[<Tests>]
let timeout =
  testSequenced <| testList "Timeout" [
    test "fail" {
      let test = TestCase(Test.timeout 10 (TestCode.Sync (fun _ -> Thread.Sleep 100)), Normal)
      async {
        let! eval = Impl.evalTests defaultConfig test
        let result = Impl.TestRunSummary.fromResults eval
        result.failed.Length ==? 1
      } |> Async.RunSynchronously
    }
    test "pass" {
      let test = TestCase(Test.timeout 1000 (TestCode.Sync ignore), Normal)
      async {
        let! eval = Impl.evalTests defaultConfig test
        let result = Impl.TestRunSummary.fromResults eval
        result.passed.Length ==? 1
      } |> Async.RunSynchronously
    }
  ]

You can also mark a test list as a Sequenced Group. This will make sure the tests in this group are not run at the same time.

[<Tests>]
let timeout =
  let lockOne = obj()
  let lockTwo = obj()
  testSequencedGroup "stop deadlock" <| testList "possible deadlock" [
    testAsync "case A" {
      lock lockOne (fun () ->
        Thread.Sleep 10
        lock lockTwo (fun () ->
          ()
        )
      )
    }
    testAsync "case B" {
      lock lockTwo (fun () ->
        Thread.Sleep 10
        lock lockOne (fun () ->
          ()
        )
      )
    }
  ]

Parameterised tests with testParam

  • testParam

testList "numberology 101" (
  testParam 1333 [
    "First sample",
      fun value () ->
        Expect.equal value 1333 "Should be expected value"
    "Second sample",
      fun value () ->
        Expect.isLessThan value 1444 "Should be less than"
  ] |> List.ofSeq)

Setup and teardown

A simple way to perform setup and teardown is by using IDisposable resources:

let simpleTests =
    testList "simples" [
        test "test one" {
            use resource = new MyDatabase()
            // test code
        }
    ]

For more complex setup and teardown situations we can write one or more setup functions to manage resources:

let clientTests setup =
    [
        test "test1" {
            setup (fun client store ->
                // test code
                ())
        }
        test "test2" {
            setup (fun client store ->
                // test code
                ())
        }
        // other tests
    ]

let clientMemoryTests =
    clientTests (fun test ->
        let client = memoryClient()
        let store = memoryStore()
        test client store
    )
    |> testList "client memory tests"

let clientIntegrationTests =
    clientTests (fun test ->
        // setup code
        try
            let client = realTestClient()
            let store = realTestStore()
            test client store
        finally
            // teardown code
            ())
    |> testList "client integration tests"

Property based tests

Reference FsCheck and Expecto.FsCheck to test properties.

module MyApp.Tests

// the ExpectoFsCheck module is auto-opened by this
// the configuration record is in the Expecto namespace in the core library
open Expecto

let config = { FsCheckConfig.defaultConfig with maxTest = 10000 }

let properties =
  testList "FsCheck samples" [
    testProperty "Addition is commutative" <| fun a b ->
      a + b = b + a

    testProperty "Reverse of reverse of a list is the original list" <|
      fun (xs:list<int>) -> List.rev (List.rev xs) = xs

    // you can also override the FsCheck config
    testPropertyWithConfig config "Product is distributive over addition" <|
      fun a b c ->
        a * (b + c) = a * b + a * c
  ]

Tests.runTestsWithCLIArgs [] [||] properties

You can freely mix testProperty with testCase and testList. The config looks like the following.

type FsCheckConfig =
    /// The maximum number of tests that are run.
  { maxTest: int
    /// The size to use for the first test.
    startSize: int
    /// The size to use for the last test, when all the tests are passing. The size increases linearly between startSize and endSize.
    endSize: int
    /// If set, the seed to use to start testing. Allows reproduction of previous runs.
    replay: (int * int) option
    /// The Arbitrary instances on this class will be merged in back to front order, i.e. instances for the same generated type at the front
    /// of the list will override those at the back. The instances on Arb.Default are always known, and are at the back (so they can always be
    /// overridden)
    arbitrary: Type list
    /// Callback when the test case had input parameters generated.
    receivedArgs: FsCheckConfig
               -> (* test name *) string
               -> (* test number *) int
               -> (* generated arguments *) obj list
               -> Async<unit>
    /// Callback when the test case was successfully shrunk
    successfulShrink: FsCheckConfig
                   -> (* test name *) string
                   -> (* shrunk new arguments *) obj list
                   -> Async<unit>
    /// Callback when the test case has finished
    finishedTest: FsCheckConfig
               -> (* test name *) string
               -> Async<unit>
  }
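
As a sketch of wiring up one of these callbacks, here is a config that logs every set of generated arguments through Expecto.Logging (the logger name is illustrative; the field name comes from the record above):

open Expecto
open Expecto.Logging
open Expecto.Logging.Message

let logger = Log.create "MyProperties"

let verboseConfig =
  { FsCheckConfig.defaultConfig with
      receivedArgs = fun _ name testNum args ->
        // debugWithBP returns Async<unit>, matching the callback signature
        logger.debugWithBP (
          eventX "For {test} {number}, generated {args}"
          >> setField "test" name
          >> setField "number" testNum
          >> setField "args" args) }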

Here is another example of testing with custom generated data:

module MyApp.Tests

// the ExpectoFsCheck module is auto-opened by this
// the configuration record is in the Expecto namespace in the core library
open Expecto
open FsCheck

type User = {
    Id : int
    FirstName : string
    LastName : string
}

type UserGen() =
   static member User() : Arbitrary<User> =
       let genFirstName = Gen.elements ["Don"; "Henrik"; null]
       let genLastName = Gen.elements ["Syme"; "Feldt"; null]
       let createUser id firstName lastName =
           {Id = id; FirstName = firstName ; LastName = lastName}
       let getId = Gen.choose(0,1000)
       let genUser =
           createUser <!> getId <*> genFirstName <*> genLastName
       genUser |> Arb.fromGen

let config = { FsCheckConfig.defaultConfig with arbitrary = [typeof<UserGen>] }

let properties =
  testList "FsCheck samples" [

    // you can also override the FsCheck config
    testPropertyWithConfig config "User with generated User data" <|
      fun x ->
        Expect.isNotNull x.FirstName "First Name should not be null"
  ]

Tests.runTestsWithCLIArgs [] [||] properties

And a further example of creating constraints on generated values:

open System
open Expecto
open FsCheck

module Gen =
    type Float01 = Float01 of float
    let float01Arb =
        let maxValue = float UInt64.MaxValue
        Arb.convert
            (fun (DoNotSize a) -> float a / maxValue |> Float01)
            (fun (Float01 f) -> f * maxValue + 0.5 |> uint64 |> DoNotSize)
            Arb.from
    type 'a ListOf100 = ListOf100 of 'a list
    let listOf100Arb() =
        Gen.listOfLength 100 Arb.generate
        |> Arb.fromGen
        |> Arb.convert ListOf100 (fun (ListOf100 l) -> l)
    type 'a ListOfAtLeast2 = ListOfAtLeast2 of 'a list
    let listOfAtLeast2Arb() =
        Arb.convert
            (fun (h1,h2,t) -> ListOfAtLeast2 (h1::h2::t))
            (function
                | ListOfAtLeast2 (h1::h2::t) -> h1,h2,t
                | e -> failwithf "not possible in listOfAtLeast2Arb: %A" e)
            Arb.from
    let addToConfig config =
        {config with arbitrary = typeof<Float01>.DeclaringType::config.arbitrary}

[<AutoOpen>]
module Auto =
    let private config = Gen.addToConfig FsCheckConfig.defaultConfig
    let testProp name = testPropertyWithConfig config name
    let ptestProp name = ptestPropertyWithConfig config name
    let ftestProp name = ftestPropertyWithConfig config name
    let etestProp stdgen name = etestPropertyWithConfig stdgen config name

module Tests =
    let topicTests =
        testList "topic" [
            testProp "float between 0 and 1" (fun (Gen.Float01 f) ->
                () // test
            )
            testProp "list of 100 things" (fun (Gen.ListOf100 l) ->
                () // test
            )
            testProp "list of at least 2 things" (fun (Gen.ListOfAtLeast2 l) ->
                () // test
            )
            testProp "list of at least 2 things without gen" (fun h1 h2 t ->
                let l = h1::h2::t
                () // test
            )
        ]

It will be translated to the FsCheck-specific configuration at runtime. You can pass your own callbacks and use Expecto.Logging, as shown in the sample, to get the generated inputs for your tests printed.

If a property fails, the output could look like this:

[11:06:35 ERR] samples/addition is not commutative (should fail) failed in 00:00:00.0910000.
Failed after 1 test. Parameters:
  2 1
Shrunk 2 times to:
  1 0
Result:
  False
Focus on error:
  etestProperty (1865288075, 296281834) "addition is not commutative (should fail)"

The output Expecto gives you lets you recreate the exact test run (via the 1865288075, 296281834 seed numbers). It's also a good idea to lift the failing inputs and the test-case/parameter combination into its own test (one which isn't a property based test).

FsCheck's Arb.Register can't be used with Expecto because it is thread-local and Expecto runs multithreaded by default. This could be worked around, but Arb.Register is being deprecated by FsCheck anyway. The recommended way to register and use custom generators is to define testPropertyWithConfig functions (like testProp above) for each area with common generator use. This ensures the library is always used in a thread-safe way.

Link collection

These are a few resources that will get you on your way towards fully-specified systems with property-based testing.

Code from FsCheck

These code snippets show a bit of the API usage and how to create Arbitrary instances (which encapsulate generation with Gen instances and shrinkage), respectively.

Expectations with Expect

All expect-functions have the signature actual -> expected -> string -> unit, leaving out expected when obvious from the function.

Expect module

This module is your main entry-point when asserting.

  • throws
  • throwsC
  • throwsT
  • throwsAsync
  • throwsAsyncC
  • throwsAsyncT
  • isNone
  • isSome
  • isChoice1Of2
  • isChoice2Of2
  • isOk - Expect the value to be a Result.Ok value
  • isError - Expect the value to be a Result.Error value
  • isNull
  • isNotNull
  • isNotNaN
  • isNotPositiveInfinity
  • isNotNegativeInfinity
  • isNotInfinity
  • isLessThan
  • isLessThanOrEqual
  • isGreaterThan
  • isGreaterThanOrEqual
  • notEqual
  • isFalse
  • isTrue
  • exists - Expect that some element from actual sequence satisfies the given asserter
  • all - Expect that all elements from actual satisfy the given asserter
  • allEqual - Expect that all elements from actual are equal to equalTo
  • sequenceEqual
  • floatClose : Accuracy -> float -> float -> string -> unit - Expect the floats to be within the combined absolute and relative accuracy given by abs(a-b) <= absolute + relative * max (abs a) (abs b). The available default accuracies are: Accuracy.low = {absolute=1e-6; relative=1e-3}, Accuracy.medium = {absolute=1e-8; relative=1e-5}, Accuracy.high = {absolute=1e-10; relative=1e-7}, Accuracy.veryHigh = {absolute=1e-12; relative=1e-9}.
  • floatLessThanOrClose : Accuracy -> float -> float -> string -> unit - Expect actual to be less than expected or close.
  • floatGreaterThanOrClose : Accuracy -> float -> float -> string -> unit - Expect actual to be greater than expected or close.
  • sequenceStarts - Expect the sequence subject to start with prefix. If it does not then fail with format as an error message together with a description of subject and prefix.
  • sequenceContainsOrder - Expect the sequence actual to contain the elements from the sequence expected in the right order.
  • isAscending - Expect the sequence subject to be ascending. If it does not then fail with format as an error message.
  • isDescending - Expect the sequence subject to be descending. If it does not then fail with format as an error message.
  • stringContains – Expect the string subject to contain substring as part of itself. If it does not, then fail with format and subject and substring as part of the error message.
  • isMatch - Expect the string actual to match pattern
  • isRegexMatch - Expect the string actual to match regex
  • isMatchGroups - Expects the string actual that matched groups (from a pattern match) match with matchesOperator
  • isMatchRegexGroups - Expects the string actual that matched groups (from a regex match) match with matchesOperator
  • isNotMatch - Expect the string actual to not match pattern
  • isNotRegexMatch - Expect the string actual to not match regex
  • stringStarts – Expect the string subject to start with prefix and if it does not then fail with format as an error message together with a description of subject and prefix.
  • stringEnds - Expect the string subject to end with suffix. If it does not then fail with format as an error message together with a description of subject and suffix.
  • stringHasLength - Expect the string subject to have length equals length. If it does not then fail with format as an error message together with a description of subject and length.
  • isNotEmpty - Expect the string actual to be not null nor empty
  • isNotWhitespace - Expect the string actual to be not null nor empty nor whitespace
  • isEmpty - Expect the sequence actual to be empty
  • isNonEmpty - Expect the sequence actual to be not empty
  • hasCountOf - Expect that the counts of the found value occurrences by selector in actual equals the expected.
  • contains : 'a seq -> 'a -> string -> unit – Expect the sequence to contain the item.
  • containsAll: 'a seq -> 'a seq -> string -> unit - Expect the first sequence to contain all elements from the second sequence (regardless of element order).
  • distribution: 'a seq -> Map<'a, uint32> -> string -> unit - Expect the sequence to contain all elements from the map (the first element of each entry is an item expected in the sequence; the second is the positive number of its occurrences). Element order is not taken into account.
  • streamsEqual – Expect the streams to be byte-wise identical.
  • isFasterThan : (unit -> 'a) -> (unit -> 'a) -> string -> unit – Expect the first function to be faster than the second function with the passed string message, printed on failure. See the next section on Performance for example usage.
  • isFasterThanSub – Like the above but with passed function signature of Performance.Measurer<unit,'a> -> 'a, allowing you to do setup and teardown of your subject under test (the function) before calling the measurer. See the next section on Performance for example usage.
  • wantOk - Expect the result to be Ok and returns its value, otherwise fails.
  • wantError - Expect the result to be Error and returns its value, otherwise fails.
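
As a small example of the want-style functions above, which return the unwrapped value instead of unit:

test "wantOk unwraps" {
  let result : Result<int, string> = Ok 42
  let value = Expect.wantOk result "should be Ok"
  Expect.equal value 42 "unwrapped value"
}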

Also note that there's a "fluent" API in Expecto.Flip, with which you can pipe the test-subject value into the expectation:

open Expecto
open Expecto.Flip

let compute (multiplier: int) = 42 * multiplier

test "yup yup" {
  compute 1
    |> Expect.equal "x1 = 42" 42

  compute 2
    |> Expect.equal "x2 = 84" 84
}
|> runTestsWithCLIArgs [] [||]

Performance module

Expecto supports testing that an implementation is faster than another. Use it by calling Expect.isFasterThan, wrapping your Test in testSequenced.

Sample output

This function makes use of a statistical test called Welch's t-test. It starts with the null hypothesis that the functions' mean execution times are the same. The functions are run alternately, with increasing sample size, to test this hypothesis.

Once the probability of getting this result under the null hypothesis goes below 0.01%, it rejects the null hypothesis and reports the results. If the performance is very close, the test declares the functions equal when there is 99.99% confidence that they differ by less than 0.5%. The 0.01%/99.99% thresholds are chosen so that, even if a test list has 100 performance tests, a false test failure would be reported far less often than once in 100 runs.

This results in a performance test that is very quick to run (the greater the difference the quicker it will run). Also, because it is a relative test it can normally be run across all configurations as part of unit testing.

The functions must return the same result for the same input. Note that since Expecto also has FsCheck integration, your outer (sequenced) test could be the property test, generating random data, and your TestCode (the function body, i.e. the actual test) could assert that, for the same random instance of test data, one function is faster than the other.
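
A sketch of that combination, where slowSum is a hypothetical slower implementation under test (very cheap functions may be rejected as MetricTooShort, so real code should give each call enough work to measure):

testSequenced <| testList "relative performance" [
  testProperty "List.sum beats a naive reimplementation" <| fun (xs: int list) ->
    // hypothetical slower implementation; returns the same result for the same input
    let slowSum l = List.rev l |> List.rev |> List.fold (+) 0
    Expect.isFasterThan
      (fun () -> List.sum xs)
      (fun () -> slowSum xs)
      "List.sum should be faster"
]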

From Expect.isFasterThanSub, these results are possible (all of which generate a test failure, except the MetricLessThan case):

  type 'a CompareResult =
    | ResultNotTheSame of result1:'a * result2:'a
    | MetricTooShort of sMax:SampleStatistics * machineResolution:SampleStatistics
    | MetricLessThan of s1:SampleStatistics * s2:SampleStatistics
    | MetricMoreThan of s1:SampleStatistics * s2:SampleStatistics
    | MetricEqual of s1:SampleStatistics * s2:SampleStatistics

You can explore these cases yourself with Expecto.Performance.timeCompare, should you wish to.

Example

All of the below tests pass.

[<Tests>]
let performance =
  testSequenced <| testList "performance" [

    testCase "1 <> 2" <| fun () ->
      let test () =
        Expect.isFasterThan (fun () -> 1) (fun () -> 2) "1 equals 2 should fail"
      assertTestFailsWithMsgContaining "same" (test, Normal)

    testCase "half is faster" <| fun () ->
      Expect.isFasterThan (fun () -> repeat10000 log 76.0)
                          (fun () -> repeat10000 log 76.0 |> ignore; repeat10000 log 76.0)
                          "half is faster"

    testCase "double is faster should fail" <| fun () ->
      let test () =
        Expect.isFasterThan (fun () -> repeat10000 log 76.0 |> ignore; repeat10000 log 76.0)
                            (fun () -> repeat10000 log 76.0)
                            "double is faster should fail"
      assertTestFailsWithMsgContaining "slower" (test, Normal)

    ptestCase "same function is faster should fail" <| fun () ->
      let test () =
        Expect.isFasterThan (fun () -> repeat100000 log 76.0)
                            (fun () -> repeat100000 log 76.0)
                            "same function is faster should fail"
      assertTestFailsWithMsgContaining "equal" (test, Normal)

    testCase "matrix" <| fun () ->
      let n = 100
      let rand = Random 123
      let a = Array2D.init n n (fun _ _ -> rand.NextDouble())
      let b = Array2D.init n n (fun _ _ -> rand.NextDouble())
      let c = Array2D.zeroCreate n n

      let reset() =
        for i = 0 to n-1 do
            for j = 0 to n-1 do
              c.[i,j] <- 0.0

      let mulIJK() =
        for i = 0 to n-1 do
          for j = 0 to n-1 do
            for k = 0 to n-1 do
              c.[i,k] <- c.[i,k] + a.[i,j] * b.[j,k]

      let mulIKJ() =
        for i = 0 to n-1 do
          for k = 0 to n-1 do
            let mutable t = 0.0
            for j = 0 to n-1 do
              t <- t + a.[i,j] * b.[j,k]
            c.[i,k] <- t
      Expect.isFasterThanSub (fun measurer -> reset(); measurer mulIKJ ())
                             (fun measurer -> reset(); measurer mulIJK ())
                             "ikj faster than ijk"

    testCase "popcount" <| fun () ->
      let test () =
        Expect.isFasterThan (fun () -> repeat10000 (popCount16 >> int) 987us)
                            (fun () -> repeat10000 (popCount >> int) 987us)
                            "popcount 16 faster than 32 fails"
      assertTestFailsWithMsgContaining "slower" (test, Normal)
  ]

A failure would look like this:

[13:23:19 ERR] performance/double is faster failed in 00:00:00.0981990.
double is faster. Expected f1 (0.3067 ± 0.0123 ms) to be faster than f2 (0.1513 ± 0.0019 ms) but is ~103% slower.

Performance.findFastest

Expecto can use isFasterThan to find the fastest version of a function for a given int input. This can be useful for optimising algorithm constants such as buffer size.

[<Tests>]
let findFastest =
  testSequenced <| testList "findFastest" [

    testCase "different values gives an error" (fun _ ->
      Performance.findFastest id 10 20 |> ignore
    ) |> assertTestFailsWithMsgStarting "Expected results to be the same."

    testCase "find fastest sleep" (fun _ ->
      let f i = Threading.Thread.Sleep(abs(i-65)*10)
      let result = Performance.findFastest f 0 100
      Expect.equal result 65 "find min"
    )

    testCase "find fastest hi" (fun _ ->
      let f i = Threading.Thread.Sleep(abs(i-110)*10)
      let result = Performance.findFastest f 0 100
      Expect.equal result 100 "find min"
    )

    testCase "find fastest lo" (fun _ ->
      let f i = Threading.Thread.Sleep(abs(i+10)*10)
      let result = Performance.findFastest f 0 100
      Expect.equal result 0 "find min"
    )
  ]

main args and command line – how to run console app examples

From code you can run:

Tests.runTestsInAssemblyWithCLIArgs [Stress 0.1;Stress_Timeout 0.2] [||]

From the command line you can run:

dotnet run -p Expecto.Tests -f net6.0 -c release -- --help
dotnet watch -p Expecto.Tests run -f net6.0 -c release -- --colours 256

Contributing and building

Please review the guidelines for contributing to Expecto; this document also includes instructions on how to build.

We'd specifically like to call out the following people for their great contributions to Expecto in the past:

  • @mausch — for building Fuchu which became the foundation of Expecto
  • @AnthonyLloyd — for maintaining Expecto for some years and drastically improving it

BenchmarkDotNet usage

Here's how the integration with BenchmarkDotNet looks:

open Expecto
open BenchmarkDotNet.Attributes // for [<Benchmark>]

type ISerialiser =
  abstract member Serialise<'a> : 'a -> unit

type MySlowSerialiser() =
  interface ISerialiser with
    member __.Serialise _ =
      System.Threading.Thread.Sleep(30)

type FastSerialiser() =
  interface ISerialiser with
    member __.Serialise _ =
      System.Threading.Thread.Sleep(10)

type FastSerialiserAlt() =
  interface ISerialiser with
    member __.Serialise _ =
     System.Threading.Thread.Sleep(20)

type Serialisers() =
  let fast, fastAlt, slow =
    FastSerialiser() :> ISerialiser,
    FastSerialiserAlt() :> ISerialiser,
    MySlowSerialiser() :> ISerialiser

  [<Benchmark>]
  member __.FastSerialiserAlt() = fastAlt.Serialise "Hello world"

  [<Benchmark>]
  member __.SlowSerialiser() = slow.Serialise "Hello world"

  [<Benchmark(Baseline = true)>]
  member __.FastSerialiser() = fast.Serialise "Hello world"

[<Tests>]
let tests =
  testList "performance tests" [
    test "three serialisers" {
      benchmark<Serialisers> benchmarkConfig (fun _ -> null) |> ignore
    }
  ]

In the current code-base the output is just printed to the console. By default all tests are run in parallel, so you'll need to pass --sequenced to your exe, or set parallel=false in the config, to get valid results.
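
For example, when running the benchmarks from the command line:

dotnet run -- --sequenced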

To read more about how to benchmark with BenchmarkDotNet, see its Getting started guide.

Happy benchmarking!

You're not alone!

Others have discovered the beauty of tests-as-values in easy-to-read F#.

Expecto VS Test Plugin

Testing hardware

People have been testing hardware with Expecto.

Expecto Hardware Testing

Sending e-mail on failure – custom printers

The printing mechanism in Expecto is based on the Logary Facade, which grants some privileges, like being able to use any Logary target to print. Just follow the above link to learn how to initialise Logary. Then, if you want to get notified over e-mail whenever one of your tests fails, configure Logary with Logary.Targets.Mailgun:

open Logary
open Logary.Configuration
open Logary.Adapters.Facade
open Logary.Targets
open Hopac
open Mailgun
open System.Net.Mail

let main argv =
  let mgc =
    MailgunLogaryConf.Create(
      MailAddress("[email protected]"),
      [ MailAddress("[email protected]") ],
      { apiKey = "deadbeef-2345678" },
      "example.com", // sending domain of yours
      Error) // cut-off level

  use logary =
    withLogaryManager "MyTests" (
      withTargets [
        LiterateConsole.create LiterateConsole.empty "stdout"
        Mailgun.create mgc "mail"
      ]
      >> withRules [
        Rule.createForTarget "stdout"
        Rule.createForTarget "mail"
      ])
    |> run

  // initialise Logary Facade with Logary proper:
  LogaryFacadeAdapter.initialise<Expecto.Logging.Logger> logary

  // run all tests
  Tests.runTestsInAssemblyWithCLIArgs [] argv

About test parallelism

Since the default is to run all of your tests in parallel, it's important that you don't use global variables, global singletons or mutating code. If you do, you'll have to slow down all of your tests by sequencing them (or use locks in your testing code).

Furthermore, printfn and its sibling functions aren't thread-safe: a given string may be logged in many passes, and concurrent calls to printfn and the Console.X functions have their outputs interleaved. If you want to log from tests, you can use code like:

open Expecto.Logging
open Expecto.Logging.Message

let logger = Log.create "MyTests"

// stuff here

testCase "reading prop" <| fun () ->
  let subject = MyComponent()
  // this will output to the right test context:
  logger.info(
    eventX "Has prop {property}"
    >> setField "property" subject.property)
  Expect.equal subject.property "Goodbye" "Should have goodbye as its property"

What does 'expected to have type TestCode' mean?

If you get an error message like this:

This expression was expected to have type    'TestCode'    but here has type    'unit'

It means that you have code like testCase "abc" <| Expect.equal .... Instead you should create a function like so: testCase "abc" <| fun () -> Expect.equal ....
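
In other words (illustrative):

// wrong: the assertion runs immediately and has type unit, not TestCode
// testCase "abc" <| Expect.equal (2+2) 4 "2+2"

// right: wrap the body in a function of type unit -> unit
testCase "abc" <| fun () -> Expect.equal (2+2) 4 "2+2"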

My tests are hanging and I can't see why

This might be due to how terminals, and the locking thereof, work: try running your tests with --no-spinner and see if that helps.
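
For example:

dotnet run -- --no-spinner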

Expecto expecto


expecto's Issues

Don't print 'started' for ignored tests

It would be better if tests that don't run (are ignored) didn't print that they're starting (when they're not). This affects running with --debug, which prints all tests, and cases where the logging infrastructure is overridden.

Relative statistical performance testing

I'm up for putting together a PR to add a function isFasterThan f1 f2 along the lines of my perf blog post.

One issue I can see is that it would be best if these test cases ran sequentially, but they should not stop the others from running in parallel. Maybe they could be run at the end of the run.

I also don't know whether the function should therefore be on Expect or should be a new type of Test.

Let me know if this would be a good fit for the library and any help on the API.

.NET Core

referencing mausch/Fuchu#51

There is still some work ahead, mainly with Expecto.Tests. NUnit has .NET Core as a platform target now. FsCheck seems to be pending support (fscheck/FsCheck#293).

PerfUtil and MbUnit seem to be the main issues right now as none of them seem to be actively maintained anymore. @mausch mentioned possibly dropping support for MbUnit. The way I see it, the alternatives are either porting those libs to .NET Core and hoping that a PR can be accepted, forking them or dropping support for PerfUtil and MbUnit.

Expecto Gitter?

I know at least @adamchester and @b0urb4k1 have expressed interest in helping develop expecto further; a gitter channel would help coalesce the disparate conversations we've been having across slack, DMs and other gitter channels.

Test the FTestAttribute behavior

We should test the FTestAttribute behavior:

  • discovery(by reflection) of tests marked with this attribute
  • propagation of the FocusedState to the child tests
  • Expecto skipping other tests that are Normal

Note: tests with this attribute can influence the behavior of other tests when reflection-based discovery is used.

String diff support

Make output like this easier to parse for humans, by showing at what position the strings differ:

Should have both the fields as values, and the default value.. Actual value was "Processor.%\ Utilisation,unit=% Core\ 1=0.03,Core\ 2=0.06,Core\ 3=0.139,value=1i 1435362189575692182" but had expected it to be "Processor.%\ Utilisation,unit=% Core\ 1=0.03,Core\ 2=0.06,Core\ 3=0.139,value=1i 1435362189575692182".

runTests returns 0 when a test fails

Just tried my first expecto hello world: -

module Tests

open Expecto

[<Tests>]
let tests =
    testCase "hello world" <| fun _ ->
        Expect.equal 1 2 "Should be equal"


[<EntryPoint>]
let main args =
    runTests defaultConfig tests

However, whilst the output shows a test failure, the program returns 0 when (AFAIK) it should return 1 because of the test failure.

Change distribution error message

I don't think the distribution error should display parts of the message which are not showing the actual error for the case (for example, don't display "Missing elements from actual:" plus an empty line if there are no missing elements).

CC: @haf and @MNie

Flaky mode

After writing a number of UI tests, you'll quickly notice that flakiness is a serious problem. One way we're able to overcome this issue is by running failed tests multiple times.

The flaky gem has a number of run modes:

  • all tests – run everything
  • from file – run a set of tests as specified in a text file
  • one test – run only one test
  • two pass – run tests once, then run only the failures x amount of times

from https://appium.io/slate/en/tutorial/android.html#introduction36

I'm in the process of writing https://github.com/fsprojects/Canopy.Mobile and I wonder if such an automatic retry mode would be part of expecto or if it should be done in Canopy.Mobile

/cc @lefthandedgoat

Print current test

I have a test suite that runs for a while. It would be nice if Expecto printed the status of every test after it is run.

Public runTests function should be initialized with argu params

In https://github.com/haf/expecto/blob/master/Expecto/Expecto.fs#L879 we initialize the runTestsInAssembly config with the command line parameters that were given to Expecto. We should do the same with the public runTests function. I don't see a reason why these should behave differently (apart from the fact that runTestsInAssembly calls runTests).

@haf would you accept a PR with a fix? This would help me with my other focused test detection on CI.

Support async tests

Schedule all Job/Alt/Async/Task thingies and await their completions from e.g. Expecto.Hopac. This would probably require a 2-stage process with the first discovering the test and the second doing an execute-and-fan-in/join operation.

  1. Support non-monadic parallelism
  2. Support async/task/job etc
  3. Support limiting concurrency via configuration flag.

Naming of nuget packages

Hi,

just got confused about the results of searching for Expecto on NuGet:
https://www.nuget.org/packages?q=expecto gave the following results.

I am getting multiple packages named Expecto Expecto

 Expecto icon
Expecto Expecto By: henrik feldt
Last Published: 2017-02-18 | Latest Version: 4.0.1
Expecto is a smooth test framework for F#, cloned from Fuchu with added functionality for making it easier to use.
6,804 total downloads Tags testing fsharp assert expect

 Expecto.FsCheck icon
Expecto Expecto By: henrik feldt
Last Published: 2017-02-18 | Latest Version: 4.0.1
Expecto is a smooth test framework for F#, cloned from Fuchu with added functionality for making it easier to use.
5,105 total downloads Tags testing fsharp assert expect

 Expecto.BenchmarkDotNet icon
Expecto Expecto By: henrik feldt
Last Published: 2017-02-18 | Latest Version: 4.0.1
Expecto is a smooth test framework for F#, cloned from Fuchu with added functionality for making it easier to use.
2,075 total downloads Tags testing fsharp assert expect

 Expecto.PerfUtil icon
Expecto Expecto By: henrik feldt
Last Published: 2016-10-28 | Latest Version: 1.0.12
Expecto is a smooth test framework for F#, cloned from Fuchu with added functionality for making it easier to use.
1,124 total downloads Tags testing fsharp assert expect

When I look at https://www.nuget.org/packages/Expecto.BenchmarkDotNet/
it looks to me that this is a separate module for benchmarking.

Expecto Expecto 4.0.1
Install-Package Expecto.BenchmarkDotNet

It seems the display name in NuGet is Expecto Expecto for the package Expecto.BenchmarkDotNet.

I can live with it, but maybe you'd like to look into this and clean it up.
Cheers 😸

[Feature Request] Surface Area/Semantic Versioning Testing

It'd be nice to have a way to make sure that updates to a project aren't breaking semantic versioning rules. This could be validated by storing the metadata for a project's public API and checking against it. Maybe it could run in an analytic mode just to tell you what the version upgrade should be based on the current state of the assembly vs the stored metadata?

The offending changes, or just the analytic output, could be presented as a signature diff across the public API, kind of like this one for the netstandard changes.

Would this be an appropriate feature for expecto?

Newbie questions

Hi @haf,

Many thanks for creating Expecto! I'm just starting my first spare-time project with F# and decided to give it a try. I have to admit I'm still very much an F# noob, halfway through @isaacabraham's book.

Here are a couple of things I noticed:

  1. Compiling the specs to an executable is awesome
  2. As a noob I wonder why you advise installing Expecto.BenchmarkDotNet and Expecto.FsCheck. Especially the latter -- why is it needed?
  3. I'd like some more guidance around structuring tests. I started looking at https://github.com/haf/expecto/blob/master/Expecto.Sample/Expecto.Sample.fs and saw the whole shebang of what's possible. Some comments around the "blocks" of tests (e.g. testList) would be helpful. Especially, what's the diff between test and testCase here
  4. Isaac's book shows how F# supports piping and how wonderfully clean the code gets. Coming from mspec and rspec I am used to define assertions fluently: 42.ShouldEqual(42), 42.should eq(42), expect(42).to eq(42). With Expecto I tried
    42 |> Expect.equal 42 "These should equal"
    but the last arg is a string that I usually do not care about. No dice with piping the input. Expect.equal x y reminds me of my NUnit days with lots of Yoda-esque assertions.
  5. I saw Expecto supports focused test. Excellent, I have been using that a lot with rspec. But as soon as I tried them Expecto being called from FAKE failed telling me there should not be any focused tests (I don't pass --fail-on-focused-tests)!

Thanks for listening :)

Parametised test suites

  • Support globally parametising, e.g. a TcpStreamFactory
  • Support globally having a lock server, that given a category, returns an object that can be locked on, to ensure the critical region only is executed from one thread in the test suite
  • Support having a port generator, to fix issues like SuaveIO/suave#341 but for testing, to avoid having to run Suave's test suite in a sequenced manner

This issue is to support passing data to every TestCode that supports input parameters. There's an implementation for this in Suave that should be ported/imported into Expecto.

What does the --debug parameter do?

Hi again 😄 ,

I don't really know what the --debug parameter is supposed to do,
but with defaultConfig it does nothing.

> mono src/eexpecto/bin/Release/eexpecto.exe --help
--debug               extra verbose printing. Useful to combine with --sequenced.

> mono src/eexpecto/bin/Release/eexpecto.exe
[07:47:31 INF] EXPECTO? Running tests...
[07:47:31 INF] EXPECTO! 3 tests run in 00:00:00.0839285 – 3 passed, 0 ignored, 0 failed, 0 errored. ᕙ໒( ˵ ಠ ╭͜ʖ╮ ಠೃ ˵ )७ᕗ

> mono src/eexpecto/bin/Release/eexpecto.exe --debug
[07:47:22 INF] EXPECTO? Running tests...
[07:47:22 INF] EXPECTO! 3 tests run in 00:00:00.0846192 – 3 passed, 0 ignored, 0 failed, 0 errored. ᕙ໒( ˵ ಠ ╭͜ʖ╮ ಠೃ ˵ )७ᕗ

> mono src/eexpecto/bin/Release/eexpecto.exe --debug --sequenced
[07:51:50 INF] EXPECTO? Running tests...
[07:51:50 INF] EXPECTO! 3 tests run in 00:00:00.0359414 – 3 passed, 0 ignored, 0 failed, 0 errored. ᕙ໒( ˵ ಠ ╭͜ʖ╮ ಠೃ ˵ )७ᕗ

This is on version 4.0.1, even if it says

> mono src/eexpecto/bin/Release/eexpecto.exe --version
EXPECTO version

Fuzz testing

I'd like expecto to support parametisation for three things:

  • system properties
  • configuration – like specifying a variable that all tests taking an instance of that type are parametised over (combinatorially combined)
  • system fuzzing for safety – no crashes and no hangs – this issue

Preferably this issue can start the implementation of the F# version for the American Fuzzy Lop fuzzer. There's an existing project called Fizil aiming to do the same, by @CraigStuntz – and generators (and shrinkers) are well implemented by @kurtschelfthout and @mausch – perhaps fed by FuzzDB. Can logic from these be folded into Expecto or be orchestrated by a separate nuget?

In the end, I want people to write articles like CloudFlare's for DNS servers written in F#. Or with this infrastructure, maybe I could combine freya-machines with Suave.Testing to allow people to auto-test their REST interfaces and really have the testing framework guide their implementation. Or we could finally implement TLS 1.2 support on managed Mono/.Net Core without being afraid to release it. (And now TLS 1.3 of course!) We could pass expecto a flag to fuzz rather than antagonising.

Other existing work in the F# space is F* by @catalin-hritcu, @msprotz, @s-zanella amongst others. Can the session types and their modelling of protocols or Z3, the SMT solver, be used to inform the fuzzing as a fuzzing strategy or by modelling control flow?

The aim of Expecto is to bring strong testing methodologies to everyone. It's brought about from my perceived high friction of building bug-free software on .Net.

Use cases:

  • Fuzzing parsers
  • Fuzzing protocols (in conjunction with FsCheck model based tests) e.g. for [http2] [ASN1], etc... In my case I want to fuzz Suave and Logary, modelling the protocol.
  • After discovering a bug/DoS vuln/crash bug, the library maintainer would fix it; Expecto's benchmark integration with BenchmarkDotNet could ensure the fix doesn't regress performance.
  • Making the protocol specification for testing and specification as the implementation isomorphic to each other, allowing F# programs to be correct by construction.

Can we all cooperate to make Expecto the go-to place for strong testing methodologies on .Net and .Net Core, removing friction and making it simple, even fun, to write secure, stable and well-tested software?

Setup and teardown

SF: Also I wonder if test lists could contain setup and teardown code?

@haf: It's possible. You could do some sketches of it and we can discuss. Another way is to use hole-in-the-middle factory functions like we do in the suave tests.

SF: Holes sounds awesome. Go for it ;-)

I just thought about a different way:

    let tests =
        testList "android tests" [
            testList "session tests" [
                testCase "can get device UUID" <| fun () ->
                    ...

                testCase "can get dictionary data" <| fun () ->
                    ...
            ]

            testList "session tests 2" [
                testCase "can get device UUID" <| fun () ->
                    ...

                testCase "can get dictionary data" <| fun () ->
                    ...
            ]
        ]

How about providing "combinators" that inject setup and teardown.

    tests
    |> withTestSetup (fun () -> ....)        // before every test
    |> withTestTearDown (fun () -> ....) // after every test
    |> withTestListSetup (fun () -> ....)  // only at beginning of outer list

Fix async printing

Remove the code that is both imperative and parallel and concurrent at the same time – what can go wrong!

Add development features

Add some development features:

  • An easy way to enable/disable tests or test cases by marking them as pending:

/// Allows a test to be marked as Pending (it will be skipped/ignored if no other TestAttribute is present).
/// A fast way to exclude some tests from running.
/// Should only be used during debugging sessions, or when a feature is not yet ready for testing (and would add noise to the test results).
[<AttributeUsage(AttributeTargets.Method ||| AttributeTargets.Property ||| AttributeTargets.Field)>]
type PTestsAttribute() = inherit Attribute()

  • An easy way to focus (run just some tests or test cases and ignore the rest of the test suite) on the development task at hand:

/// Ignores all tests and test cases that are not marked as Focused.
/// A fast way to include/exclude some tests from running.
/// Should only be used during debugging sessions.
[<AttributeUsage(AttributeTargets.Method ||| AttributeTargets.Property ||| AttributeTargets.Field)>]
type FTestsAttribute() = inherit Attribute()

The statistics should keep track of disabled tests (marked Pending, or excluded because other tests are Focused).
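
For comparison, Expecto's value-based API already covers both of these with `ptestCase`/`ptestList` (pending) and `ftestCase`/`ftestList` (focused):

open Expecto

let tests =
  testList "sample" [
    ptestCase "pending: will be skipped" <| fun () -> ()
    ftestCase "focused: only focused tests run" <| fun () ->
      Expect.equal (1 + 1) 2 "one plus one is two"
  ]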

Incorrect summary details for test run

I'm running a test group:

let mathsTests =
    testList "Maths Tests" [
        testCase "2 * 2" <| fun _ -> Expect.equal (2 * 2) 4 "Should be equal"
        testCase "1 = 1" <| fun _ -> Expect.equal 1 2 "Should be equal"
    ]
[20:14:40 INF] EXPECTO? Running tests...
[20:14:40 ERR] Maths Tests/1 = 1 failed in 00:00:00.0079133. 
Should be equal. Actual value was 1 but had expected it to be 2.
  c:\Users\Isaac\Source\Repos\expecto-exp\src\Tests\Tests.fs(8,1): [email protected](String msg)

[20:14:40 INF] EXPECTO! 1 tests run in 00:00:00.0486770 - 1 passed, 0 ignored, 0 failed, 0 errored.
  • The output shows that 1 test passed (correct), but incorrectly shows that 0 tests failed.
  • The output shows that only 1 test was run, when in fact 2 tests were run.

`--summary` not working

The --summary flag no longer works for me on the latest version; the bug reproduces in the latest builds on Travis. The last build that displays the summary is https://travis-ci.org/haf/expecto/jobs/188807194 (for 6fd15ac); the next one no longer displays it (https://travis-ci.org/haf/expecto/jobs/188808798). This makes no sense to me, since dd99ef5 (the commit whose build stops displaying the summary) was just a version bump, with no changes to the code.

@haf, any ideas?

First LogLevel is remembered in FSI session

Hi, I often run my tests in F# Interactive (FSI) like this:

#r"bin/release/wba.net.dll"
#load "../../tests/wba.test/Scripts/load-references-release.fsx"
#I "../../tests/wba.test"
open Expecto
open Swensen.Unquote

#load "convert.fs" "convert_test.fs" 
Wba.Test.Convert.test
|> runTests {defaultConfig with verbosity = Logging.LogLevel.Debug}

When I change the LogLevel to Info on subsequent runs, this is not reflected in the output: [dbg] messages are still logged, i.e. the LogLevel of the first run is remembered.

I wondered about this behaviour, which seems very non-functional.

Is this intended? If so, why?

PS: Please bear with me, I'm a newbie, but eager to understand.

Allow specifying the Assembly to run tests in

I've added a new function to my fork of Expecto: runTestsInDifferentAssembly.
My particular use case is that my tests live in an F# class library, but I need to run them from within a C# exe.

It meets my need; I'm happy to contribute a PR if you believe it'd be of general use.

  /// Runs tests in a passed in assembly with the supplied command-line options.
  /// Returns 0 if all tests passed, otherwise 1
  let runTestsInDifferentAssembly config args (assembly:Assembly) =
    let config = { config with locate = getLocation (assembly) }
    testFromDifferentAssembly assembly
    |> Option.orDefault (TestList ([], Normal))
    |> runTestsWithArgs config args
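
A hypothetical usage sketch (in F# for brevity; `MarkerType` stands for any public type defined in the test assembly, used only to obtain its `Assembly`):

open Expecto

[<EntryPoint>]
let main argv =
  // locate the test assembly via any type it defines
  let testAssembly = typeof<MarkerType>.Assembly
  runTestsInDifferentAssembly defaultConfig argv testAssembly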

Make `Expect.sequenceEqual` failure output easier to inspect

Expect.sequenceEqual should make it easier to see the data from both sequences.

Currently I see this:

[08:33:48 ERR] generic/literate default tokeniser can yield exception tokens from the 'errors' and 'exn' fields, even with an empty template failed in 00:00:00.1280369.
literate tokenised parts must be correct.
        Expected value was:
        [("[", DarkGray); ("08:33:48", Gray); (" ", Gray); ("INF", White);
 ("] ", DarkGray); ("
", White); ("System.Exception: exn field", White);
 ("
", White); ("System.Exception: errors field 1", White); ("
", White);
 ("System.Exception: errors field 2", White)]
        Actual value was:
        [("[", DarkGray); ("08:33:48", Gray); (" ", Gray); ("INF", White);
 ("] ", DarkGray); ("
", White); ("System.Exception: exn field", White);
 ("
", White); ("System.Exception: errors field 1", White); ("
", White);
 ("System.Exception: errors field 2", White); ("
", White)]
    Sequence actual longer than expected, at pos 11 found item ("
", White).
  C:\My\repo\logary\src\tests\Logary.Facade.Tests\Facade.fs(68,1): Logary.Facade.Tests.Expect.literateMessagePartsEqual(String template, FSharpMap`2 fields, FSharpList`1 expectedMessageParts, FSharpOption`1 options)

I propose something like this instead, where each item in the sequence is numbered and starts on a new line:

[08:33:48 ERR] generic/literate default tokeniser can yield exception tokens from the 'errors' and 'exn' fields, even with an empty template failed in 00:00:00.1280369.
literate tokenised parts must be correct.
        Expected value was:
        [0] ("[", DarkGray)
        [1] ("08:33:48", Gray)
        [2] (" ", Gray)
        [3] ("INF", White)
        [4] ("] ", DarkGray)
        [5] ("
", White)
        [6] ("System.Exception: exn field", White)
        [7] ("
", White)
        [8] ("System.Exception: errors field 1", White)
        [9] ("
", White)
        [10] ("System.Exception: errors field 2", White)

        Actual value was:
        [0] ("[", DarkGray)
        [1] ("08:33:48", Gray)
        [2] (" ", Gray)
        [3] ("INF", White)
        [4] ("] ", DarkGray)
        [5] ("
", White)
        [6] ("System.Exception: exn field", White)
        [7] ("
", White)
        [8] ("System.Exception: errors field 1", White)
        [9] ("
", White)
        [10] ("System.Exception: errors field 2", White)
        [11] ("
", White)

    Sequence actual longer than expected, at pos 11 found item ("
", White).
  C:\My\repo\logary\src\tests\Logary.Facade.Tests\Facade.fs(68,1): Logary.Facade.Tests.Expect.literateMessagePartsEqual(String template, FSharpMap`2 fields, FSharpList`1 expectedMessageParts, FSharpOption`1 options)
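
The proposed rendering is cheap to implement; a hedged sketch of such a formatter (a hypothetical helper, not Expecto's internal code):

open System

/// Renders each sequence item as "[i] item", one per line.
let formatNumbered (xs: seq<'a>) : string =
  xs
  |> Seq.mapi (fun i x -> sprintf "        [%d] %A" i x)
  |> String.concat Environment.NewLine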

Stress testing

I'd like to propose a feature I'm interested in building and would welcome any feedback or suggestions.

It would be to add a command-line switch like `--stress 02:00:00`.

This would run the parallel tests for 2 hours at full worker load, adding a randomly picked test whenever a test finishes. FsCheck will need a bit of consideration, as I think its maxTest and size parameters will need to be randomly picked too.

The idea is that this will catch the following types of bugs:

  • Memory leaks
  • Threading bugs running same test at same time
  • Rare threading bugs
  • Rare property test fails

When the run finishes, it will output the number of tests run, any failures (count and an example per failing test), and stats on memory use.

I'd like to get started soon and will probably start after doing #68 in the next few days.

Possible follow up features:

  • Save execution time, memory and GC statistics from the stress run, timestamped
  • Test for any major shift in these values relative to the last run
  • Load the latest stress-test report to make a faster normal test run (ordering parallel tests longest-first; I think this can give up to ~25% speed-up)

Float assertions

PR this:

module Expect =
  open System
  open Expecto

  /// Expect the passed float to be a number (i.e. not NaN).
  let isNotNaN f format =
    if Double.IsNaN f then Tests.failtestf "%s. Float was the NaN (not a number) value." format

  /// Expect the passed float not to be positive infinity.
  let isNotPositiveInfinity actual format =
    if Double.IsPositiveInfinity actual then Tests.failtestf "%s. Float was positive infinity." format

  /// Expect the passed float not to be negative infinity.
  let isNotNegativeInfinity actual format =
    if Double.IsNegativeInfinity actual then Tests.failtestf "%s. Float was negative infinity." format

  /// Expect the passed float not to be infinity (of either sign).
  let isNotInfinity actual format =
    isNotNegativeInfinity actual format
    isNotPositiveInfinity actual format
    // neither negative nor positive infinity, hence not infinity (excluded middle)

  /// Expect the passed string not to be empty.
  let isNotEmpty (actual : string) format =
    Expect.isNotNull actual format
    if actual.Length = 0 then Tests.failtestf "%s. Should not be empty." format
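
A usage sketch, assuming the module above is compiled into the test project alongside Expecto:

open Expecto

let floatTests =
  testCase "average is a finite number" <| fun () ->
    let mean = [ 1.0; 2.0; 3.0 ] |> List.average
    Expect.isNotNaN mean "mean should be a number"
    Expect.isNotInfinity mean "mean should be finite"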

FakeHelper

Hi, great and simple testing library.
Currently I have a make-it-work solution like the one below. But maybe there's a FAKE helper I've overlooked?

Target "RunExpectoTests" (fun _ ->
    let processTimeout = System.TimeSpan.MaxValue // Don't set a process timeout.  The timeout is per test.
    let testsAssemblies = "build/*Tests.exe"
    let args = [ "--debug"
                 //"--sequenced"
                 "--parallel"
               ] |> String.concat " "
    let res = 
        !! testsAssemblies
        |> Seq.map (fun testAssembly -> 
            testAssembly, ExecProcess(fun info ->
                info.FileName <- testAssembly
                info.WorkingDirectory <- buildDir
                info.Arguments <- args
            ) processTimeout)
        |> Seq.filter( snd >> (<>) 0)
        |> Seq.toList
    match res with
    | [] -> ()
    | failed -> 
        failed
        |> List.map (fun (asm,exitCode) -> sprintf "\t- Expecto test %s failed. Process finished with exit code %d." asm exitCode)
        |> String.concat System.Environment.NewLine 
        |> FailedTestsException |> raise
)
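
For reference, FAKE 5 later shipped a dedicated helper in the `Fake.DotNet.Testing.Expecto` module. A hedged sketch (parameter names may differ between FAKE versions; check the FAKE docs):

open Fake.Core
open Fake.IO.Globbing.Operators
open Fake.DotNet.Testing

Target.create "RunExpectoTests" (fun _ ->
  !! "build/*Tests.exe"
  |> Expecto.run id)  // `id` keeps the default parameters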

Run Sequenced test lists in parallel

I have two test lists, both of which are sequenced, because the tests within them affect data that is recreated before, and torn down after, each test.

However, the two test lists work on different data sets and could happily run in parallel. At the moment, when I place both test lists into a top-level test list, they run one after the other.

Is there a way to run two sequenced test lists in parallel (but still run the tests within each list in sequence, if you get my meaning)?
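
One answer, assuming a recent Expecto: `testSequencedGroup` serialises tests within the same named group, while letting different groups run in parallel. A sketch:

open Expecto

let all =
  testList "top" [
    testList "data set A tests" [ (* ... *) ] |> testSequencedGroup "data set A"
    testList "data set B tests" [ (* ... *) ] |> testSequencedGroup "data set B"
  ]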

Tests hanging if config.``parallel`` = false

After updating Expecto from 1.1.2 to 3.1.0, running the tests just hangs and no breakpoint in any test gets hit. Apparently the problem is that we're disabling the parallel flag in the configuration (note that this worked in the old version):

runTests { defaultConfig with ``parallel`` = false } all

The problem goes away when using the default config and testSequenced for the tests we want to run in sequence. Note that something like the following does not work to run all tests in sequence:

all |> testSequenced |> runTests defaultConfig
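
The working pattern, sketched: keep the default (parallel) config and wrap only the lists that must run in sequence with `testSequenced`:

open Expecto

let all =
  testList "all" [
    testList "can run in parallel" [ (* ... *) ]
    testSequenced <| testList "must run in sequence" [ (* ... *) ]
  ]

let exitCode = runTests defaultConfig all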

Could not find .pdb file

@Krzysztof-Cieslak I think we have another bug from your changes. Could you have a look at it, please?

Unhandled Exception: System.IO.FileNotFoundException: Could not find file 'C:\projects\logary\src\tests\Logary.Adapters.Facade.Tests\bin\Release\Logary.Adapters.Facade.Tests.pdb'.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
   at Mono.Cecil.Pdb.PdbReaderProvider.GetSymbolReader(ModuleDefinition module, String fileName)
   at Mono.Cecil.ModuleReader.ReadSymbols(ModuleDefinition module, ReaderParameters parameters)
   at Mono.Cecil.ModuleReader.CreateModuleFrom(Image image, ReaderParameters parameters)
   at Mono.Cecil.ModuleDefinition.ReadModule(String fileName, ReaderParameters parameters)
   at Expecto.Impl.getSourceLocation(Assembly asm, String className, String methodName)
   at [email protected](Exception _arg2)
   at [email protected](AsyncParams`1 args)
--- End of stack trace from previous location where exception was thrown ---
   at Microsoft.FSharp.Control.AsyncBuilderImpl.commit[a](Result`1 res)
   at Microsoft.FSharp.Control.CancellationTokenOps.RunSynchronously[a](CancellationToken token, FSharpAsync`1 computation, FSharpOption`1 timeout)
   at Microsoft.FSharp.Control.FSharpAsync.RunSynchronously[T](FSharpAsync`1 computation, FSharpOption`1 timeout, FSharpOption`1 cancellationToken)
   at Microsoft.FSharp.Primitives.Basics.List.map[T,TResult](FSharpFunc`2 mapping, FSharpList`1 x)
   at [email protected](FSharpFunc`2 fn, FSharpList`1 ts)
   at Expecto.Impl.runEval(ExpectoConfig config, Test tests)
rake aborted!
Albacore::CommandFailedError: Command failed with status (82):

Via https://ci.appveyor.com/project/haf/logary/build/4.0.545

Commandline parameters for filtering not working (for me:)

Hi, and please bear with me if I'm missing something, but...

I can't get filtering to work.

$ mono src/eexpecto/bin/Release/eexpecto.exe --help
...
--filter-test-list <substring>    filters the list of test lists by a substring.
--filter-test-case <substring>    filters the list of test cases by a substring.
--run [<tests>...]                runs only provided tests.

$ mono src/eexpecto/bin/Release/eexpecto.exe --filter-test-list aa --summary                                                                     
...
Passed:  3
	tl1/tc1
	tl1/tc2
	tc3

$ mono src/eexpecto/bin/Release/eexpecto.exe --filter-test-case  aa --summary                                                                    
...
Passed:  3
	tl1/tc1
	tl1/tc2
	tc3

$ mono src/eexpecto/bin/Release/eexpecto.exe --run  aa --summary                                                                                 
...
Passed:  3
	tl1/tc1
	tl1/tc2
	tc3

This is NuGet package version 4.0.0.
