badgerati / edison

Edison is an open source unit/integration test framework for .NET

License: MIT License

C# 98.86% PowerShell 1.14%
edison test-assembly parallel assert tests test-framework testdriven dotnet chocolatey nuget unit-testing continuous-testing regression-testing integration-testing system-testing

edison's Introduction

Edison


Edison is designed to be a more performant unit/integration testing framework for .NET projects. Many features, such as Attributes, are similar to those of other test frameworks, making the transition smoother.

Features

  • Framework with Attributes and Assert class for writing unit/integration tests.
  • Can run tests in parallel.
  • Output can be in XML, JSON, CSV, HTML, Markdown or just plain text.
  • Ability to send test results, in real-time, to a URL for later analysis.
  • Console application from which to run your tests.
  • GUI for a more visual look on running tests.
  • Ability to re-run failed tests, for those that may pass if run again seconds later.
  • Run test cases and repeats in parallel.
  • Supply a solution file for extracting test assemblies.
  • Ability to store versioned console parameters in an Edisonfile.
  • Basic support for browser testing.
  • Support for TestDriven.NET.
  • Send test results to specific Slack channels, or to users directly.

Installing Edison

You can install Edison with Chocolatey using the following:

choco install edison

The framework and console are also available on NuGet:

Install-Package Edison.Framework
Install-Package Edison.TestDriven # < this is the core framework, but has support for TestDriven.NET
Install-Package Edison.Console

To use the Edison framework and TestDriven.NET, you'll need to install the second package above. Then you'll need to manually add a reference to the Edison.Framework.dll within the Edison.TestDriven\tools directory of the project's NuGet packages directory.

Usage

Framework

Using Edison is very similar to other test frameworks. You have a [Test] Attribute, along with various other Attributes, to create your tests. An example would be:

[TestFixture]
public class TestClass
{
    [Setup]
    public void Setup()
    {
        //stuff
    }

    [Teardown]
    public void Teardown(TestResult result)
    {
        //stuff
    }

    [Test]
    [Category("Name")]
    [TestCase(1)]
    [TestCase(2)]
    public void Test(int value)
    {
        AssertFactory.Instance.AreEqual(2, value, "Argh no it's an error!!!1");
    }
}

Here you can see that this is similar to other test frameworks. Edison has been designed this way to make it easier for people to transition over.

In the example above we have:

  • A TestFixture which contains multiple Tests to be run
  • A Setup method which is run before each Test
  • A Teardown method which is run after each Test. This can optionally take a TestResult object.
  • And one Test method, which has a Category of "Name", and two possible TestCases that run the Test with the values 1 and 2.

Furthermore, there's the Assert class. In Edison the main Assert class implements the IAssert interface. To use it you can either create an instance of it for each Test, or you can use the AssertFactory class. The AssertFactory class contains a lazy Instance property which returns the IAssert implementation being used for the test assembly.

This means you can create your own CustomAssert class that implements IAssert, set AssertFactory.Instance = new CustomAssert(), and any call to AssertFactory.Instance will then return your CustomAssert. This makes it far simpler to plug your own assert logic into your test framework. If you don't set AssertFactory.Instance, it defaults to the inbuilt Assert logic.
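
A rough sketch of wiring in a custom assert is below (this assumes the inbuilt Assert class can be subclassed; the CustomAssert type and its IsValidOrderId member are illustrative, not part of Edison):

// Hypothetical custom assert: inherits the inbuilt Assert (which implements IAssert)
// and adds project-specific assertion logic.
public class CustomAssert : Assert
{
    public void IsValidOrderId(string orderId)
    {
        AreEqual(8, orderId.Length, "Order IDs must be 8 characters long");
    }
}

[TestFixture]
public class OrderTests
{
    [Setup]
    public void Setup()
    {
        // From here on, AssertFactory.Instance returns the custom assert.
        AssertFactory.Instance = new CustomAssert();
    }

    [Test]
    public void OrderIdIsValid()
    {
        ((CustomAssert)AssertFactory.Instance).IsValidOrderId("ORD-1234");
    }
}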

Console and Engine

Edison has inbuilt functionality to run tests in parallel threads. By default tests are run in a single thread; however, by supplying the --t <value> parameter from the command-line, the tests will be run across that many threads. If you supply a number of threads that exceeds the number of TestFixtures, the number of threads is capped at the number of TestFixtures.

Edison has the following flow when running tests per assembly:

SetupFixture -> Setup
 |
TestFixture -> TestFixtureSetup
 |
TestFixture -> Setup
 |
TestFixture -> Tests and TestCases
 |
TestFixture -> Teardown
 |
TestFixture -> TestFixtureTeardown
 |
SetupFixture -> Teardown

Example of running a test assembly from the command-line:

.\Edison.Console.exe --a path/to/test/assembly.dll --ft 2 --ot json

This will run the test fixtures across two threads (--ft) from the assembly.dll file (--a). The results of the tests will be output to the working directory in json format (--ot).

Do you have your own in-house test history storage? You can post the test results from Edison.Console to a given URL. You can also specify a Test Run ID to help uniquely identify which run the results came from:

.\Edison.Console.exe --a path/to/test/assembly.dll --ft 2 --dfo --dco --ot json --url http://someurl.com --tid 702

Again, this will run the test fixtures across two threads; however, this time we won't be creating an output file (--dfo) or outputting the results to the console (--dco). Instead, the results will be posted to the given URL (--url) using the specified test run ID (--tid).

To see more parameters use:

.\Edison.Console.exe --help

Edisonfile

The Edisonfile allows you to save, and version control, the arguments that you can supply to the console application. The file is of YAML format and should be saved at the root of your repository. The following is an example of the format:

assemblies:
  - "./path/to/test.dll"
  - "./path/to/other/test.dll"

disable_test_output: true
console_output_type: dot
fixture_threads: 2

To run the application, just run Edison.Console.exe at the root, with no arguments supplied. The application will locate the Edisonfile and populate the parameters accordingly.

For example, the above will be just like running:

Edison.Console.exe --a ./path/to/test.dll, ./path/to/other/test.dll --dto --cot dot --ft 2

Bugs and Feature Requests

For any bugs you may find or features you wish to request, please create an issue in GitHub.

License

Edison is completely open source and free under the MIT License.

edison's People

Contributors

badgerati


edison's Issues

Framework to support Browser testing

Using the InternetExplorer.Application COM object in SHDocVw.dll, allow Edison.Framework to support browser testing.

Basically, this will be incorporating Monocle into Edison's main framework.
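
For reference, a minimal sketch of driving the browser through that COM object (this assumes a project reference to SHDocVw.dll / "Microsoft Internet Controls"; the assertion at the end is illustrative and not Edison's browser API):

// Start Internet Explorer via the SHDocVw COM interop and navigate to a page.
var ie = new SHDocVw.InternetExplorer();
ie.Visible = true;
ie.Navigate("http://example.com");

// Wait for the page to finish loading before asserting against it.
while (ie.Busy)
{
    System.Threading.Thread.Sleep(100);
}

AssertFactory.Instance.AreEqual("http://example.com/", ie.LocationURL, "Unexpected URL after navigation");
ie.Quit();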

Ability to send Test result(s) to Slack

Add a new attribute to Edison that allows a user to specify a Slack channel to send a specific test's results to. For example:

[Slack("Team1", SlackTestResult.Any)]
public void Test()
{
    AssertFactory.Instance.AreEqual(1, 1);
}

The above will send a Slack message to the Team1 channel, and will send the message no matter the test result. Other options for SlackTestResult should be:

  • Any
  • Success
  • Error
  • Failure

Inconclusive and Ignored are skipped, and only reported on Any.

The Slack token to allow Edison to send messages will be a CLI/EdisonFile argument. The GUI will be ignored for now.

If feasible, try and get the message to be colour coded using Slack attachments.

Test assemblies should be run in separate AppDomain with configs loaded

Currently all of the test assemblies are loaded into the current AppDomain. This is fine until a test wants to reference values within the AppSettings of that assembly's app.config: the app.config for that AppDomain will actually be the console's or the GUI's.

Some references on how to do this can be found here: http://stackoverflow.com/questions/11993546/app-config-for-dynamically-loaded-assemblies and here: http://stackoverflow.com/questions/658498/how-to-load-an-assembly-to-appdomain-with-all-references-recursively
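
A rough sketch of the approach (the TestRunnerProxy type, its RunTests method and the DomainRunner class are hypothetical placeholders, not existing Edison types):

// Proxy created inside the new AppDomain; must be MarshalByRefObject to cross the boundary.
public class TestRunnerProxy : MarshalByRefObject
{
    public void RunTests(string assemblyPath)
    {
        // Load and execute the test assembly here; it now sees its own app.config.
    }
}

public static class DomainRunner
{
    public static void RunInOwnDomain(string assemblyPath)
    {
        // Point the new AppDomain at the test assembly's own directory and app.config.
        var setup = new AppDomainSetup
        {
            ApplicationBase = System.IO.Path.GetDirectoryName(assemblyPath),
            ConfigurationFile = assemblyPath + ".config"
        };

        var domain = AppDomain.CreateDomain("EdisonTestDomain", null, setup);

        var runner = (TestRunnerProxy)domain.CreateInstanceFromAndUnwrap(
            typeof(TestRunnerProxy).Assembly.Location,
            typeof(TestRunnerProxy).FullName);

        runner.RunTests(assemblyPath);
        AppDomain.Unload(domain);
    }
}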

Repeat value of 1 needlessly appends repeat value to TestResult FullName

When you don't set a repeat value for a test/fixture, a value of -1 is used. If the repeat index for the test is -1, then no index value is appended to the FullName within the TestResult. However, if you set the repeat value to 1, the value is appended even though it isn't needed, because the test only ran once.

The idea is to not append the index if the value is 1, and only append it if the value is greater. This means we can change GetRepeatValue in the ReflectionRepository to just return the RepeatAttribute, or return a constant one with a repeat of 1.

Need a new TestRunEnvironment identifier for test results

This one is short and simple: a new TestRunEnvironment identifier to help pinpoint which environment a test run occurred in.

Were the tests run locally? Or were they run on one of your two or three regression or performance environments?

The value could be something like "DEV1/REG1/PERF3" etc. Or even just the name of the server/machine they were run on.

The value will only be sent on the initial "test run has started" callout.

This value can be set on CLI or Edisonfile. If the value is not set, and the run is configured to send results to a URL, just default to the machine's name.

Allow for repeated tests to run in parallel

Currently you can run tests repeatedly in serial using the [Repeat] attribute. With this issue I suggest adding the concept of being able to run repeated tests in parallel, maybe with a [ParallelRepeat] attribute, or by adding a parallel flag to the [Repeat] attribute:

[Repeat(3, Parallel = true)]

// or

[ParallelRepeat(3)]

Personally I prefer the first one.

Ability to listen for TestResult events

When the tests are started via EdisonContext -> Run, you have to wait until the tests have all finished running, or have a separate thread polling the EdisonContext's ResultQueue for changes.

It would be nicer if you could attach a listener to the EdisonContext that would be called on TestResult events after each test, for external updating.

Any other kind of events would be useful also.
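
One possible shape for such a listener (the OnTestResult event and the result's State property are illustrative assumptions; only EdisonContext, Run and TestResult exist today):

var context = new EdisonContext();

// Hypothetical event raised after each test completes, for external updating.
context.OnTestResult += result =>
{
    Console.WriteLine(result.FullName + ": " + result.State);
};

context.Run();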

Update the wiki and docs

Right now the wiki and docs contain some information, but not much. This needs to be updated to include proper references and examples for the console application and for writing tests.

When a test run starts, Edison should send a callout to tell the website a run has started

The idea here is that when a test run starts, Edison should send a callout to the website.

This callout should inform an endpoint that a run has started, so it can create any initial data - like the test run name/ID/project etc. - and return a "SessionId" to be used when sending test results later on, as well as on the "test run has ended" callout.

This new "SessionId" is because a test run could have the same ID and Name, so determining which to assign results to is difficult.

Introduce the notion of having a Suite of tests

Currently the only way to aggregate tests together is via the Category attribute or by namespace. It would be ideal to have a Suite attribute for classes to define "dev" test suites, "performance" test suites, etc.

This should be possible from the command line, and the GUI.

Only one suite can be selected at a time, so no need for include/exclude suite functionality.
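
A possible shape for the attribute (illustrative only; the Suite attribute does not exist yet):

// Hypothetical Suite attribute applied alongside TestFixture.
[TestFixture]
[Suite("performance")]
public class PerformanceTests
{
    [Test]
    public void Test()
    {
        //stuff
    }
}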

Additional string assertions needed

There are some string assertions which are missing:

  • Contains
  • NotContains
  • IsMatch
  • IsNotMatch
  • StartsWith
  • NotStartsWith
  • EndsWith
  • NotEndsWith
  • AreEqualIgnoreCase
  • AreNotEqualIgnoreCase

Add Recently Opened sub-menu to GUI

On the GUI, you have to select the Open option and search for your file every time. It would be nice to have a "Recently Opened" sub-menu with a list of recently opened files.

Maybe have an option to clear the list, as well.

Callouts sent to a TestResultUrl/Slack/etc should be run in a separate thread

Currently when a test finishes and we have the result, the result is synchronously sent out to the TestResultUrl or Slack if they're configured.

These callouts should be done asynchronously, so as to not impact the main test run's performance.

The callouts for the "test run started" and "test run ended" will still be synchronous, as data is required back from them.

Furthermore, Edison should be able to wait for any results still being sent, should the tests finish running while a couple of results are still being posted.

Also, if a callout to an endpoint fails 5 times, no more callouts should be sent to that URL - this is to stop pointless callouts and to prevent 60sec timeouts.
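
A rough sketch of how the asynchronous callouts could work (the EdisonCallouts class, SendResultAsync and WaitForCallouts are illustrative names, not existing Edison code):

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class EdisonCallouts
{
    private static int _failedCallouts = 0;
    private static readonly ConcurrentBag<Task> _pendingCallouts = new ConcurrentBag<Task>();

    public static void SendResultAsync(TestResult result, string url)
    {
        // Stop calling out once the endpoint has failed 5 times.
        if (_failedCallouts >= 5)
        {
            return;
        }

        _pendingCallouts.Add(Task.Run(() =>
        {
            try
            {
                // Post the result to the configured URL/Slack channel here.
            }
            catch
            {
                Interlocked.Increment(ref _failedCallouts);
            }
        }));
    }

    // Called once the test run has finished, so no results are lost.
    public static void WaitForCallouts()
    {
        Task.WaitAll(_pendingCallouts.ToArray());
    }
}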

TestCases should work on TestFixtures

Currently the TestCase attribute specifies that it can be applied to both methods and classes; however, assigning a TestCase to a TestFixture does nothing - it only works on Tests.

Ideally when a TestCase is assigned to a TestFixture, the parameters should be passed into the TestFixture's constructor.
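
A sketch of how that could look once implemented (the constructor injection is the proposed behaviour, not current functionality):

[TestFixture]
[TestCase("Chrome")]
[TestCase("Firefox")]
public class BrowserFixture
{
    private readonly string _browser;

    // Proposed: the TestCase value is passed into the fixture's constructor.
    public BrowserFixture(string browser)
    {
        _browser = browser;
    }

    [Test]
    public void BrowserIsSet()
    {
        AssertFactory.Instance.AreEqual(false, string.IsNullOrEmpty(_browser), "Browser should be set");
    }
}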

Implement basic Load Testing functionality

Implement basic features of load testing into Edison. This could be done using different fixture/test attributes such as [LoadTestFixture] and [LoadTest], where [LoadTest] accepts a seconds-to-run and a max-rps.

These tests will be run completely separately from normal tests, for stability and lightweight running. They will also run sequentially.

For example:

[LoadTest(30, 100)]
public void Test()
{
    //call some URL
}

This will call some URL, rapidly, for 30 seconds with a simulated 100 requests per second maximum. There could even be a configurable delay between each call.

After each load test has finished, report:

  • Minimum response time
  • Maximum response time
  • Average response time
  • Total number of requests
  • Total successful requests
  • Total failed requests

The timings will basically be how long each "test" took to complete, and likewise the number of requests will be the number of tests run.

Framework .NET Dependency, and others need changing

Currently the Edison.Framework only works with projects on .NET 4.5 or higher; this should be lowered to .NET 2.0 if possible, or at most .NET 4.0.

The engine, GUI and others have a reliance on .NET 4.5.2 even though none of the new features are used. If possible, drop this to .NET 4.0 also.

Ability to pass in Tests/Fixtures to run as files

Currently the only way to pass in specific tests/fixtures to run is via the command line. If you have one or two to pass this is OK, but if you need to pass a lot, or the namespaces are long, this can exceed the maximum length for console input.

The idea here is that you can supply a path to a file instead, containing the tests/fixtures to run, one per line.

The same could also be done for assemblies, as well.

Need a new TestRunProject identifier for test results

Having a TestRunId of "v1.2" and a TestRunName of "v1.2.a1b2bc3d" is great, but what if you have two different projects with the same ID/Name?

Such as, having tests run for the core project and the website project in the same version?

This is to add a new TestRunProject identifier, which could be set to anything like "core/site/console/gui/etc.".

This value will only be passed on the initial "test run has started" callout.

This value can be set on CLI or Edisonfile.

[EPIC] Upgrades to engine for a new Test Results website for Edison

One of the features of Edison is the ability to post test results to a URL.
It would be ideal if Edison was shipped with a website that could be hosted, and supported an endpoint we could post to by default.

I've had a few thoughts about this, and the website itself will be written in node.js - mostly because I hate ASP.NET, and so that the site can be containerised/run on any OS more easily.

For now, whilst I think a little more, this is an epic to cover all of the new features, enhancements and bugs to fix to prepare for the site:

  • #61: New UrlOutputType needed, rather than using file's OutputType
  • #62: When a test run starts, Edison should send a callout to tell the website a run has started
  • #63: When a test run ends, Edison should send a callout to tell the website the run has ended - and why
  • #64: Need a new TestRunName, which can be more informative than TestRunId
  • #65: Need a new TestRunProject identifier for test results
  • #66: Need a new TestRunEnvironment identifier for test results
  • #67: Callouts sent to a TestResultUrl/Slack/etc should be run in a separate thread

Add logic for conditional assertions

Say you want to assert that either AssertFactory.Instance.AreEqual(...) or AssertFactory.Instance.IsLessThan(...) passes: even if one of them fails, it's considered a pass. Whereas if both fail, it's a proper assertion failure.

For now just an Or will do, as an And is technically just consecutive Assert calls.

This new AssertFactory.Instance.Or(...) should take a params of assertion actions.
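
A sketch of how the proposed Or could be used (the Or member does not exist yet, and the argument order shown for IsLessThan is an assumption):

int value = 4;

// Passes if either inner assertion passes; fails only if both fail.
AssertFactory.Instance.Or(
    () => AssertFactory.Instance.AreEqual(5, value, "value should be 5"),
    () => AssertFactory.Instance.IsLessThan(3, value, "value should be less than 3"));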

Allow Edison to be run using TestDriven.NET

The only way to run Edison tests at the moment is by either using the console app, or the GUI app.

This enhancement will allow users to run tests via TestDriven.NET. The TestDriven.Framework.dll can be found in the Program Files directory that it was installed to.

Allow for Test test cases to run in parallel

Test cases for Tests are currently run sequentially, even if they're within a parallel repeat. This issue is for the implementation of a [ParallelTestCase] attribute, which will allow all test cases for a Test to be run in parallel.

As with parallel repeats, this will not apply to TestFixtures, as it would force tests intended to run sequentially to run in parallel.

Move parameter validation from Console into the Engine Context

Currently the properties for the EdisonContext have their validation split across the Console and GUI. This validation needs to be pulled together and placed within the EdisonContext's Run method.

This allows the console/GUI, or a third party program using the engine, to just set the properties to what's passed, allowing the engine to validate everything upfront before running.

Validation should also be possible before calling Run, if validation is required without actually running the tests.

Possibility of having Threading at the Test Level

At the moment, threading is done at the TestFixture level. This is OK if you have numerous TestFixtures with few tests; however, if you have one TestFixture with numerous tests then threading doesn't help.

It'd be ideal if you could specify the number of threads to run fixtures across, and then a secondary number of threads to run inner tests on. The concurrency attribute would need to be updated to apply to tests, too.

Need a new TestRunName, which can be more informative than TestRunId

This is more of an informative feature addition.

The idea of the TestRunId is to be able to group certain runs together: i.e., "all runs for a TestRunId of v1.2".

However, every run having the same "name", such as "v1.2", isn't very informative. This is where the TestRunName comes in handy. You can pass a TestRunId of "v1.2", and the name could be the same but with the short commit hash appended: "v1.2.a1b2c3d". This now points to the specific commit the tests were run over, and is still grouped together with "v1.2".

This value can be set on CLI or Edisonfile.

Edisonfile for the console application

The Edisonfile will be of YAML format and will contain all the values that would normally be supplied on the command line. This file is only respected if Edison.Console.exe is run with no arguments, or if the --ef argument is supplied with the location of the Edisonfile (or if it has a different name).

An example could be:

assemblies:
  - "./some/path/test.dll"
  - "./some/other/test.dll"

output_type: "xml"

This will allow versioned arguments for Edison in a repository, for when more assemblies are added/removed or categories need to be excluded etc.

Singular thread after main threads which re-runs failed tests

This feature should be toggleable.

When enabled, after the main parallel and singular threads have run, this thread will re-run all tests that failed, updating their results before completion.

The idea behind this is to limit the number of tests that fail because of "environmental" issues, yet pass when run mere seconds later.

It might also be wise to include a threshold property, so that if the percentage of failed tests exceeds a certain value, the re-run doesn't happen.

Allow for a test to build up multiple failed assertions

Currently when a test comes across an assertion that fails, an AssertException is thrown and the test fails/stops.

With this idea, a batch of assertions could be run in a block of some kind, and any failures/errors would be collected and only reported at test end, or when a non-blocked assertion fails.

TestResult will need to be altered to support multiple error/failure messages. We also need to consider how to do the assertion blocks - maybe delegates?
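
One possible delegate-based shape for such a block (the Batch member, and the actualName/actualCount variables, are purely illustrative):

// Proposed: failures inside the block are collected and reported together at the end.
AssertFactory.Instance.Batch(assert =>
{
    assert.AreEqual("expected-name", actualName, "Name mismatch");
    assert.AreEqual(3, actualCount, "Count mismatch");
});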

Use proper libraries to serialize output

Currently the JSON/CSV and XML output is all mashed-up and rigged together with string concatenation. This functionality works; however, it would be better to use proper libraries such as Json.NET and CsvHelper.
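
For example, the JSON output could be produced with Json.NET instead of string concatenation (a sketch; the results collection and outputPath variable are illustrative):

using Newtonsoft.Json;

// Serialize the collected TestResult objects to indented JSON and write them out.
var json = JsonConvert.SerializeObject(results, Formatting.Indented);
System.IO.File.WriteAllText(outputPath, json);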
