harvey's People

Contributors

aarmour, delynn, ginman86, jkrukoff, m4tty, mac-, nisaacson, samplacette, tschwecke, wasbazi

harvey's Issues

Ability to specify test IDs to wait on

There is a potential for multiple tests to need to operate on the same data, which would cause them to conflict if run in parallel. In that case there's a need to be able to run some tests in a serial fashion.
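
One hypothetical shape for this (the "waitFor" field is an assumption, not an existing harvey feature): each test lists the ids of tests that must finish before it starts, and the runner schedules accordingly. A minimal ordering sketch:

```javascript
// Hypothetical sketch: a "waitFor" field (not an existing harvey feature)
// listing test ids that must finish first. orderTests() returns the tests in
// an order that honors those dependencies; anything without unmet
// dependencies could still run in parallel within a pass.
function orderTests(tests) {
  const ordered = [];
  const done = {};
  let remaining = tests.slice();
  while (remaining.length > 0) {
    const ready = remaining.filter(function (t) {
      return (t.waitFor || []).every(function (id) { return done[id]; });
    });
    if (ready.length === 0) throw new Error('circular waitFor dependency');
    ready.forEach(function (t) { ordered.push(t); done[t.id] = true; });
    remaining = remaining.filter(function (t) { return !done[t.id]; });
  }
  return ordered;
}

const tests = [
  { id: 'deleteItem', waitFor: ['createItem'] },
  { id: 'createItem' },
  { id: 'updateItem', waitFor: ['createItem'] }
];
console.log(orderTests(tests).map(function (t) { return t.id; }));
// [ 'createItem', 'deleteItem', 'updateItem' ]
```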

Better multipart/form-data Upload Support

It would be great if harvey could handle multipart/form-data uploads by reading the specified files via fs and sending the resulting Buffer(s) in the request that is sent to the service under test.

Need to See StatusCodes and Messages in Console

We need a -v/--verbose mode that displays exactly what each request and response is for every test executed. As it is now, I can't tell why a request is failing - only that the validation specified in the test is failing.

Variable Value Persists within a Suite of Tests

I have several tests in one JSON document. Each test uses the variable Collection, and each test assigns a different value to it. It appears that the first value assigned persists throughout all the tests.

You should be able to reassign the value of the variable in each test; otherwise the use of the template is moot.

Fail test if extra properties

I think it would be useful to be able to have a test fail if there are extra properties returned in the actual response that are not in the expected response. Currently this does not seem to be the case.

If there are valid use cases for allowing extra properties, perhaps this could be configurable.
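
The check itself is straightforward; a sketch of the top-level case (a real implementation would recurse into nested objects):

```javascript
// Illustrative check, not part of harvey today: report properties present in
// the actual response but absent from the expected response, so validation
// can fail on them. Only one level deep; a real version would recurse.
function extraProperties(expected, actual) {
  return Object.keys(actual).filter(function (key) {
    return !(key in expected);
  });
}

const expected = { id: 'GC22', license: 'license information' };
const actual = { id: 'GC22', license: 'license information', internalFlag: true };
console.log(extraProperties(expected, actual)); // [ 'internalFlag' ]
```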

Loop on the results of a service call

It would be useful if you could read the results of a service call, and loop through the first X items and do the other calls for each individual item (GET/PUT/etc).
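
A sketch of the fan-out logic (the field names and helper are assumptions for illustration): slice the first X items from the parsed response body and generate one follow-up request per item.

```javascript
// Hypothetical sketch of the requested behavior: take the first `limit`
// items from a previous response body and produce one follow-up request
// definition per item. The "items" field name is an assumption.
function fanOutRequests(responseBody, limit, makeRequest) {
  return responseBody.items.slice(0, limit).map(makeRequest);
}

const listResponse = { items: [{ id: 'a1' }, { id: 'b2' }, { id: 'c3' }, { id: 'd4' }] };
const requests = fanOutRequests(listResponse, 2, function (item) {
  return { method: 'GET', resource: '/items/' + item.id };
});
console.log(requests);
// [ { method: 'GET', resource: '/items/a1' },
//   { method: 'GET', resource: '/items/b2' } ]
```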

Use of Request Template in a Setup Template

I have a setup template called ExchangeCollectionPut.Template. As part of this template I create a collection for Exchange and assign that collection to a variable. However, I could not reference Exchange.CollectionPutRequest; instead I had to spell out the method, protocol, host, and resource in the request section. The setup template looked like this:

        {
            "id": "ExchangeCollectionPut.Template",
            "variables": {
                "collectionID": "GC22",
                "license": "license information",
                "trialPeriod": "88"
            },
            "request": {
                "method": "PUT",
                "protocol": "https",
                "host": "${Exchange.CatalogHostName}",
                "resource": "/api/collections/GC22",
                "body": {
                    "license": "license information",
                    "trialPeriod": "88",
                    "id": "GC22"
                }
            },
            "expectedResponse": {
                "statusCode": 200
            },
            "actions": [{
                "$set": {
                    "id2": {
                        "$extract": "body.id"
                    }
                }
            }]
        }

Repeating/looping tests

When running tests, it is not uncommon to want to validate a single aspect, for instance authentication or authorization, against a whole slew of endpoints. In that respect, it would be very helpful to define a construct that would support this within a test. An example of a test definition that might work in this way:

        {
            "id": "MissingAuthentication",
            "request": {
                "method": "${authenticatedRoutes[*].verb}",
                "protocol": "${protocol}",
                "host": "${host}:${port}",
                "resource": "${authenticatedRoutes[*].path}"
            },
            "expectedResponse": {
                "statusCode": { "$in": [401, 500] }
            }
        }

And a config to support the above might look like:

        {
            "authenticatedRoutes":[
                {
                    "verb":"POST",
                    "path":"/items"
                },
                {
                    "verb":"PUT",
                    "path":"/items/12345"
                },
                {
                    "verb":"DELETE",
                    "path":"/items/12345"
                }
            ]
        }

Another way to do the same, but with an array of arrays rather than array of objects:

        {
            "id": "MissingAuthentication",
            "request": {
                "method": "${authenticatedRoutes[*][0]}",
                "protocol": "${protocol}",
                "host": "${host}:${port}",
                "resource": "${authenticatedRoutes[*][1]}"
            },
            "expectedResponse": {
                "statusCode": { "$in": [401, 500] }
            }
        }

With config of:

        {
            "authenticatedRoutes":[
                ["POST","/items"],
                ["PUT","/items/12345"],
                ["DELETE","/items/12345"]
            ]
        }
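
The expansion a runner would need for the proposed "[*]" syntax can be sketched directly (the expansion logic and generated ids are assumptions; harvey does not implement this today):

```javascript
// Sketch of expanding the proposed "[*]" looping syntax: produce one
// concrete test per entry in the authenticatedRoutes config array.
// The generated id scheme is an assumption for illustration.
function expandTest(testTemplate, routes) {
  return routes.map(function (route) {
    return {
      id: testTemplate.id + '_' + route[0] + '_' + route[1],
      request: { method: route[0], resource: route[1] },
      expectedResponse: testTemplate.expectedResponse
    };
  });
}

const template = {
  id: 'MissingAuthentication',
  expectedResponse: { statusCode: { $in: [401, 500] } }
};
const routes = [['POST', '/items'], ['PUT', '/items/12345'], ['DELETE', '/items/12345']];
const expanded = expandTest(template, routes);
console.log(expanded.length); // 3
console.log(expanded[0].request); // { method: 'POST', resource: '/items' }
```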

Getting the exact expected values in the results

It would be nice if the exact (after variable substitution) expected value were available in the results, so that we could diff the actual vs. expected values (for the HTML reporter) using a tool like: https://github.com/kpdecker/jsdiff

Just looking at the code, it'd be hard to extract that info without changing a substantial portion, so it's probably worth talking about.

Suite setup/teardown still run when providing an invalid test id to the --tags option

When running a test by providing the --tags option and the test id provided is not an existing test, Harvey still runs the suiteSetup and suiteTeardown.

When using the console reporter, this is the output:

Time elapsed: 791 ms
0 tests complete, 0 failures.

Other than "0 tests complete" in the output, there's no indication that the provided test id does not exist.

Support polling

Be able to set a request to poll for a set period of time until an acceptable response is returned, or until some configurable timeout is reached.
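
The core retry loop can be sketched as follows (a minimal sketch: a real implementation would wait between attempts and honor a wall-clock timeout rather than an attempt budget; sendRequest here is a stub):

```javascript
// Minimal sketch of the polling idea: retry a request until the response is
// acceptable or the attempt budget runs out. A real implementation would
// sleep between attempts and enforce a configurable wall-clock timeout.
function pollUntilAcceptable(sendRequest, isAcceptable, maxAttempts) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = sendRequest();
    if (isAcceptable(response)) {
      return { response: response, attempts: attempt };
    }
  }
  return null; // timeout reached without an acceptable response
}

// Stub service: returns 202 twice, then 200.
let calls = 0;
const stub = function () { calls++; return { statusCode: calls < 3 ? 202 : 200 }; };
const result = pollUntilAcceptable(stub, function (r) { return r.statusCode === 200; }, 10);
console.log(result); // { response: { statusCode: 200 }, attempts: 3 }
```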

Console reporter doesn't tell me what the actual result was when a test failed

When my tests against the response code failed, I had no way of discerning what the discrepancy was while using the console reporter: it only told me which test failed and what it expected, not the actual result that conflicted with that expectation. The json reporter, however, did give me the actual result.

Support for specifying a proxy for making http calls

It is sometimes helpful to insert something like Fiddler or Charles or some other proxy when testing one's endpoints, or understanding why a particular test failed. If we could specify a proxy to use during the run, that would be helpful. It might even be nice to specify a proxy per request in a given test, but I think it would most likely be sufficient, at least for the debugging aspect, to simply support it as a global command-line setting when calling harvey.

Support 'harvey init'

Add a 'harvey init' command that will generate starter tests from a HAR file or from a Swagger endpoint.

Mongo-style queries don't seem to be working

I tried to run a test using mongo-style queries to verify a range of acceptable responses for an http result code, but received a parsing error from the test runner.

Example 1:

        {
            "id": "test1",
            "request": {
                "method": "GET",
                "protocol": "${protocol}",
                "host": "${host}:${port}",
                "resource": "/parents/${fake_guid}/items"
            },
            "expectedResponse": {
                "statusCode": { "$gte": 400 }
            }
        }

Harvey Output:
Error: tests['MissingAccessToken_GET_courses-id-items'].expectedResponse: the value of statusCode must be a number

Example 2:

        {
            "id": "test1",
            "request": {
                "method": "GET",
                "protocol": "${protocol}",
                "host": "${host}:${port}",
                "resource": "/parents/${fake_guid}/items"
            },
            "expectedResponse": {
                "statusCode": { "$in": [401, 500] }
            }
        }

Harvey Output:
Error: tests['MissingAccessToken_GET_courses-id-items'].expectedResponse: the value of statusCode must be a number

Add "zombie" mode

This would allow you to run harvey for a specified amount of time, and it would run random tests over random intervals during that time span. Something like this:

$ harvey --zombie 300 myTests.json

would run for 5 minutes.

"zombie" is just my name for it... call it whatever makes sense.

Multiple tests run when a substring match is found in the tag name

When using the --tags option, if another test id has a substring match of the id provided in the --tags option, all tests that match will run.

For example, if tests are written with the following ids:

{
    "tests": [ 
        {
            "id": "first.test",
            ...
        },
        {
            "id": "first.test.copy",
            ...
        }
    ]
}

and first.test.copy is provided to the --tags option:

$ harvey ... --tags "first.test.copy"

both tests will run.

✓ first.test
    Set up phase: 0 validations passed, 0 validations failed
    Test phase: 3 validations passed, 0 validations failed
    Tear down phase: 1 validations passed, 0 validations failed
✖ first.test.copy
    Set up phase: 0 validations passed, 0 validations failed
    Test phase: 2 validations passed, 1 validations failed (body)
    Tear down phase: 1 validations passed, 0 validations failed
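
This behavior is consistent with the tag filter doing a substring comparison rather than an exact match. The following is a guess at the cause, purely illustrative (not harvey's actual code):

```javascript
// Guess at the cause: a substring comparison matches every test whose id is
// contained in the provided tag, so "first.test.copy" matches both ids.
function filterBySubstring(tests, tag) {
  return tests.filter(function (t) { return tag.indexOf(t.id) !== -1; });
}

// An exact-match filter would run only the test whose id equals the tag.
function filterByExactMatch(tests, tag) {
  return tests.filter(function (t) { return t.id === tag; });
}

const allTests = [{ id: 'first.test' }, { id: 'first.test.copy' }];
console.log(filterBySubstring(allTests, 'first.test.copy').length); // 2 - both run
console.log(filterByExactMatch(allTests, 'first.test.copy').length); // 1 - only first.test.copy
```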

Support for arrays as config values

It would be helpful to be able to define lists of items in config, to be referenced by index. For instance:

        {
            "authenticatedRoutes": [
                "/things",
                "/things/12345",
                "/things/12345/items"
            ]
        }

Even better if you can do arrays within arrays, as in:

        {
            "authenticatedRoutes": [
                ["POST", "/things"],
                ["PUT", "/things/12345"],
                ["GET", "/things/12345/items"]
            ]
        }
This is especially useful if we can also then use these array-driven values for some automated test execution. More on that to come in a different "issue" submission.

Uncaught error when referencing a template from an added test file

When referencing a template from an added test file and the template exists in a section that is not defined in the main test file, an uncaught exception is thrown.

For example, if I create a test file that references a template that exists in the "setupAndTeardowns" section of an added test file and the main test file does not define the "setupAndTeardowns" section, an error is thrown.

Schema validation happens before variables are interpreted

For example, if I define a request template like so:

{
  "id": "base",
  "protocol": "${protocol}",
  "host": "${host}"
}

and config like so:

{
  "protocol": "http",
  "host": "myhost.com:8080"
}

I get a schema validation error:

Error: requestTemplates['base']: the value of protocol must be one of undefined, https, http

Now why undefined is allowed, I have no idea... maybe a bug in Joi?

Issue with Validating Response Headers?

I am seeing a possible issue with validating headers. The following test specifies response headers as the criteria for validation, and it passes when it should not. Looking at the report JSON, it appears that header validation never runs: the portion of the report that would show header validation results is empty.

Test:

"tests": [{
        "id": "OPTIONS /me",
        "setup": [],
        "request": {
            "templates": ["request"],
            "method" : "OPTIONS",
            "resource": "/me"
        },
        "expectedResponse": {
            "headers": {
                "Content-Type": "application/json"
            }
        }
    }],

Report:

 "testResults": {
        "passed": true,
        "suiteStepResults": [
            [{
                "id": "OPTIONS /me",
                "passed": true,
                "testStepResults": [{
                    "id": "OPTIONS /me",
                    "testPhase": "test",
                    "passed": true,
                    "timeSent": "2013-09-16T13:20:11.676Z",
                    "responseTime": 10.937978,
                    "rawRequest": "OPTIONS http://localhost:8000/me HTTP 1.1\nContent-Type: application/json\n",
                    "rawResponse": "HTTP/1.1 200\naccess-control-allow-origin: *\naccess-control-max-age: 1\ncontent-length: 0\ncache-control: no-cache\ndate: Mon, 16 Sep 2013 13:20:11 GMT\nconnection: keep-alive\n",
                    "validationResults": [],
                    "error": null
                }]
            }]
        ]
    }

Support for recursive de-referencing of config values

I'd like to be able to reference a config value within another config value and have both values be parsed. For example, consider the following config:

        {
            "dummyUuid": "6dfd65fa-4cb2-11e3-8e77-ce3f5508acd9",
            "someRoute": "/items/${dummyUuid}/name"
        }

If I then reference ${someRoute}, I'd like it to come back as "/items/6dfd65fa-4cb2-11e3-8e77-ce3f5508acd9/name"
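
The substitution loop this would require can be sketched as follows (assuming the ${name} syntax used in harvey configs; the cycle handling is simplified, and an unresolved reference would need a guard in a real implementation):

```javascript
// Sketch of recursive de-referencing: keep substituting ${name} references
// from the config until none remain. Assumes the ${...} syntax from harvey
// configs; a real implementation would guard against circular references.
function resolve(value, config) {
  const pattern = /\$\{([^}]+)\}/;
  let result = value;
  while (pattern.test(result)) {
    result = result.replace(pattern, function (match, name) {
      return config[name];
    });
  }
  return result;
}

const config = {
  dummyUuid: '6dfd65fa-4cb2-11e3-8e77-ce3f5508acd9',
  someRoute: '/items/${dummyUuid}/name'
};
console.log(resolve('${someRoute}', config));
// /items/6dfd65fa-4cb2-11e3-8e77-ce3f5508acd9/name
```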

Be able to associate a setup with its corresponding teardown

Setups and teardowns are often called in conjunction with one another. To help prevent the accidental exclusion of a teardown, let a setup specify its corresponding teardown. That teardown would then be called automatically during the teardown phase.
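
A sketch of the association (the "teardown" field on a setup is hypothetical): when a setup declares its teardown, the runner queues that teardown for the teardown phase automatically.

```javascript
// Hypothetical sketch: setups carry a "teardown" field naming their
// counterpart, and the runner collects those teardowns automatically.
function collectTeardowns(setups, teardownsById) {
  return setups
    .filter(function (s) { return s.teardown; })
    .map(function (s) { return teardownsById[s.teardown]; });
}

const setups = [{ id: 'createCollection', teardown: 'deleteCollection' }];
const teardowns = { deleteCollection: { id: 'deleteCollection' } };
console.log(collectTeardowns(setups, teardowns)); // [ { id: 'deleteCollection' } ]
```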

Support for separate test files in a single test execution

It would be very beneficial, when constructing and organizing tests, to allow multiple disparate test files to run as a single cohesive execution during an automated regression. This could work either via a top-level test runner file that references other test files, or by accepting multiple test files in a single invocation so that all results are grouped in one output.
