tschwecke / harvey
Easy and fast integration testing of RESTful web services
...
Multiple tests may need to operate on the same data, which would cause them to conflict if run in parallel. In that case there needs to be a way to run some tests serially.
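One possible shape for this, using a hypothetical `parallelizable` flag on the test definition (the flag name is an assumption, not an existing harvey option):

```json
{
    "id": "updatesSharedRecord",
    "parallelizable": false,
    "request": {
        "method": "PUT",
        "protocol": "http",
        "host": "${host}",
        "resource": "/items/1"
    },
    "expectedResponse": {
        "statusCode": 200
    }
}
```

Tests marked this way could be queued to run one at a time after the parallel batch completes.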
It would be great if harvey could handle multipart/form-data uploads by reading the specified files via fs and sending the resulting Buffer(s) in the request that is sent to the service under test.
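A sketch of what such a test might declare, assuming a hypothetical `parts` section whose `file` entries harvey would read from disk and send as multipart/form-data (all key names here are illustrative):

```json
{
    "id": "uploadAvatar",
    "request": {
        "method": "POST",
        "protocol": "http",
        "host": "${host}",
        "resource": "/users/123/avatar",
        "headers": { "Content-Type": "multipart/form-data" },
        "parts": [
            { "name": "avatar", "file": "./fixtures/avatar.png" },
            { "name": "description", "value": "profile picture" }
        ]
    },
    "expectedResponse": {
        "statusCode": 201
    }
}
```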
Add a 'verifications' section to separate the calls that are made to verify the side effects of the call under test from the teardowns.
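For example (the `verifications` key is the proposed addition; the rest follows the existing test shape, with the teardown reference illustrative):

```json
{
    "id": "deleteItem",
    "request": {
        "method": "DELETE",
        "protocol": "http",
        "host": "${host}",
        "resource": "/items/1"
    },
    "expectedResponse": {
        "statusCode": 204
    },
    "verifications": [{
        "request": {
            "method": "GET",
            "protocol": "http",
            "host": "${host}",
            "resource": "/items/1"
        },
        "expectedResponse": {
            "statusCode": 404
        }
    }]
}
```

This would make the report distinguish "the side effect didn't happen" from "cleanup failed".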
We need a -v/--verbose mode that displays exactly what each request and response is for each test executed. As it is now, I can't tell why a request is failing, only that the validation specified in the test is failing.
I have several tests in one json document. Each test uses the variable Collection, and each test assigns a different value to it. It appears that the first value assigned persists throughout all tests.
You should be able to reassign the value of the variable in each test; otherwise the use of the template is moot.
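For instance, the expected behavior would be that each test's `variables` block wins for that test (template name and values here are illustrative):

```json
{
    "tests": [
        {
            "id": "putFirst",
            "variables": { "Collection": "alpha" },
            "request": { "templates": ["CollectionPut"] }
        },
        {
            "id": "putSecond",
            "variables": { "Collection": "beta" },
            "request": { "templates": ["CollectionPut"] }
        }
    ]
}
```

Here putSecond's request should be built with Collection resolved to "beta", not the "alpha" value from the first test.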
Add a console-debug reporter that displays the full details of test parts that fail
Got an SSL_GET_SERVER error and an ECONNRESET when using harvey against secure endpoints.
I think it would be useful to be able to have a test fail if there are extra properties returned in the actual response that are not in the expected response. Currently this does not seem to be the case.
If there are valid use cases for allowing extra properties, perhaps this could be configurable.
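If configurability is the route taken, something like a hypothetical `strict` flag could opt in per expectation (the flag name is an assumption):

```json
"expectedResponse": {
    "statusCode": 200,
    "strict": true,
    "body": {
        "id": "GC22",
        "trialPeriod": "88"
    }
}
```

With `strict` set, a response body containing any property beyond `id` and `trialPeriod` would fail the test.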
It would be useful if you could read the results of a service call, and loop through the first X items and do the other calls for each individual item (GET/PUT/etc).
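A possible shape for this, assuming a hypothetical `$forEach` action built on the existing `$extract` mechanism (all names invented for illustration):

```json
{
    "id": "checkFirstItems",
    "request": {
        "method": "GET",
        "protocol": "http",
        "host": "${host}",
        "resource": "/items"
    },
    "expectedResponse": {
        "statusCode": 200
    },
    "actions": [{
        "$forEach": {
            "items": { "$extract": "body.items" },
            "limit": 5,
            "request": {
                "method": "GET",
                "protocol": "http",
                "host": "${host}",
                "resource": "/items/${item.id}"
            },
            "expectedResponse": {
                "statusCode": 200
            }
        }
    }]
}
```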
I have a setup template called ExchangeCollectionPut.Template. As part of this template I create a collection for exchange and assign that collection to a variable. However, I could not call Exchange.CollectionPutRequest; instead I had to spell out the method, protocol, host, and resource in the request section. The test for the setup template looked like this:
{
    "id": "ExchangeCollectionPut.Template",
    "variables": {
        "collectionID": "GC22",
        "license": "license information",
        "trialPeriod": "88"
    },
    "request": {
        "method": "PUT",
        "protocol": "https",
        "host": "${Exchange.CatalogHostName}",
        "resource": "/api/collections/GC22",
        "body": {
            "license": "license information",
            "trialPeriod": "88",
            "id": "GC22"
        }
    },
    "expectedResponse": {
        "statusCode": 200
    },
    "actions": [{
        "$set": {
            "id2": {
                "$extract": "body.id"
            }
        }
    }]
}
When running tests, it is not uncommon to want to validate a single aspect, for instance authentication or authorization, against a whole slew of endpoints. In that respect, it would be very helpful to define a construct that would support this within a test. An example of a test definition that might work in this way:
{
    "id": "MissingAuthentication",
    "request": {
        "method": "${authenticatedRoutes[*].verb}",
        "protocol": "${protocol}",
        "host": "${host}:${port}",
        "resource": "${authenticatedRoutes[*].path}"
    },
    "expectedResponse": {
        "statusCode": { "$in": [401, 500] }
    }
}
And a config to support the above might look like:
{
    "authenticatedRoutes": [
        {
            "verb": "POST",
            "path": "/items"
        },
        {
            "verb": "PUT",
            "path": "/items/12345"
        },
        {
            "verb": "DELETE",
            "path": "/items/12345"
        }
    ]
}
Another way to do the same, but with an array of arrays rather than array of objects:
{
    "id": "MissingAuthentication",
    "request": {
        "method": "${authenticatedRoutes[*][0]}",
        "protocol": "${protocol}",
        "host": "${host}:${port}",
        "resource": "${authenticatedRoutes[*][1]}"
    },
    "expectedResponse": {
        "statusCode": { "$in": [401, 500] }
    }
}
With config of:
{
    "authenticatedRoutes": [
        ["POST", "/items"],
        ["PUT", "/items/12345"],
        ["DELETE", "/items/12345"]
    ]
}
It would be nice if the exact expected value (after variable substitution) were available on the results so that we could perform a diff of the actual vs. expected values (for the html reporter), using a tool like: https://github.com/kpdecker/jsdiff
Just looking at the code, it'd be hard to extract that info without changing a substantial portion, so it's probably worth talking about.
When running a test via the --tags option with a test id that is not an existing test, Harvey still runs the suiteSetup and suiteTeardown.
When using the console reporter, this is the output:
Time elapsed: 791 ms
0 tests complete, 0 failures.
Other than "0 tests complete" in the output, there's no indication that the provided test id does not exist.
Be able to set a request to poll for a set period of time until an acceptable response is returned, or until some configurable timeout is reached.
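One hedged sketch of what this might look like, with hypothetical `poll` settings (key names are assumptions; interval and timeout in milliseconds):

```json
{
    "id": "waitForJobCompletion",
    "request": {
        "method": "GET",
        "protocol": "http",
        "host": "${host}",
        "resource": "/jobs/${jobId}"
    },
    "poll": {
        "intervalMs": 1000,
        "timeoutMs": 30000
    },
    "expectedResponse": {
        "statusCode": 200,
        "body": { "status": "complete" }
    }
}
```

The request would be retried every interval until the expectedResponse matches or the timeout elapses, at which point the test fails.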
It would be nice to be able to specify querystring parameters like this:
"querystring": [{
    "name": "skip",
    "value": "20"
}, {
    "name": "limit",
    "value": "10"
}]
Provides the ability to store data in arrays.
When my tests against the response code failed, I had no way of discerning what the discrepancy was using the console reporter: it only told me the test that failed and its expectation, not the actual result that conflicted with that expectation. The json reporter did, however, include the actual result.
It is sometimes helpful to insert something like Fiddler or Charles or some other proxy when testing one's endpoints, or understanding why a particular test failed. If we could specify a proxy to use during the run, that would be helpful. It might even be nice to specify a proxy per request in a given test, but I think it would most likely be sufficient, at least for the debugging aspect, to simply support it as a global command-line setting when calling harvey.
Add a 'harvey init' command that will create tests to start from based on a HAR file or from a swagger endpoint
Currently, when an error is thrown during a test, the whole suite is stopped. Instead, that test should be marked as failed and suite execution should continue.
I tried to run a test using mongo-style queries to verify a range of acceptable responses for an http result code, but received a parsing error from the test runner.
Example 1:
{
    "id": "test1",
    "request": {
        "method": "GET",
        "protocol": "${protocol}",
        "host": "${host}:${port}",
        "resource": "/parents/${fake_guid}/items"
    },
    "expectedResponse": {
        "statusCode": { "$gte": 400 }
    }
}
Harvey Output:
Error: tests['MissingAccessToken_GET_courses-id-items'].expectedResponse: the value of statusCode must be a number
Example 2:
{
    "id": "test1",
    "request": {
        "method": "GET",
        "protocol": "${protocol}",
        "host": "${host}:${port}",
        "resource": "/parents/${fake_guid}/items"
    },
    "expectedResponse": {
        "statusCode": { "$in": [401, 500] }
    }
}
Harvey Output:
Error: tests['MissingAccessToken_GET_courses-id-items'].expectedResponse: the value of statusCode must be a number
This would allow you to run harvey for a specified amount of time, during which it would run random tests at random intervals. Something like this:
$ harvey --zombie 300 myTests.json
would run for 5 minutes.
"zombie" is just my name for it... call it whatever makes sense.
When using the --tags option, if another test id contains the provided id as a substring, all tests that match will run.
For example, if tests are written with the following ids:
{
    "tests": [
        {
            "id": "first.test",
            ...
        },
        {
            "id": "first.test.copy",
            ...
        }
    ]
}
and first.test.copy is provided to the --tags option:
$ harvey ... --tags "first.test.copy"
both tests will run.
✓ first.test
Set up phase: 0 validations passed, 0 validations failed
Test phase: 3 validations passed, 0 validations failed
Tear down phase: 1 validations passed, 0 validations failed
✖ first.test.copy
Set up phase: 0 validations passed, 0 validations failed
Test phase: 2 validations passed, 1 validations failed (body)
Tear down phase: 1 validations passed, 0 validations failed
It would be helpful to be able to define lists of items in config, to be referenced by index. For instance:
{
    "authenticatedRoutes": [
        "/things",
        "/things/12345",
        "/things/12345/items"
    ]
}
Even better if you can do arrays within arrays, as in:
{
    "authenticatedRoutes": [
        ["POST", "/things"],
        ["PUT", "/things/12345"],
        ["GET", "/things/12345/items"]
    ]
}
This is especially useful if we can also then use these array-driven values for some automated test execution. More on that to come in a different "issue" submission.
When referencing a template from an added test file and the template exists in a section that is not defined in the main test file, an uncaught exception is thrown.
For example, if I create a test file that references a template that exists in the "setupAndTeardowns" section of an added test file and the main test file does not define the "setupAndTeardowns" section, an error is thrown.
For example, if I define a request template like so:
{
    "id": "base",
    "protocol": "${protocol}",
    "host": "${host}"
}
and config like so:
{
    "protocol": "http",
    "host": "myhost.com:8080"
}
I get a schema validation error:
Error: requestTemplates['base']: the value of protocol must be one of undefined, https, http
Now why undefined is allowed, I have no idea... maybe a bug in Joi?
I am seeing a possible issue with validating headers. The following test specifies a response header as the validation criteria, yet it passes when it should not. Looking at the validation portion of the JSON report, something appears to be wrong with header validation: the portion of the report that should show the header validation results is empty.
Test:
"tests": [{
    "id": "OPTIONS /me",
    "setup": [],
    "request": {
        "templates": ["request"],
        "method": "OPTIONS",
        "resource": "/me"
    },
    "expectedResponse": {
        "headers": {
            "Content-Type": "application/json"
        }
    }
}],
Report:
"testResults": {
    "passed": true,
    "suiteStepResults": [
        [{
            "id": "OPTIONS /me",
            "passed": true,
            "testStepResults": [{
                "id": "OPTIONS /me",
                "testPhase": "test",
                "passed": true,
                "timeSent": "2013-09-16T13:20:11.676Z",
                "responseTime": 10.937978,
                "rawRequest": "OPTIONS http://localhost:8000/me HTTP 1.1\nContent-Type: application/json\n",
                "rawResponse": "HTTP/1.1 200\naccess-control-allow-origin: *\naccess-control-max-age: 1\ncontent-length: 0\ncache-control: no-cache\ndate: Mon, 16 Sep 2013 13:20:11 GMT\nconnection: keep-alive\n",
                "validationResults": [],
                "error": null
            }]
        }]
    ]
}
I'd like to be able to reference a config value within another config value and have both values be parsed. For example, consider the following config:
{
    "dummyUuid": "6dfd65fa-4cb2-11e3-8e77-ce3f5508acd9",
    "someRoute": "/items/${dummyUuid}/name"
}
If I then reference ${someRoute}, I'd like it to come back as "/items/6dfd65fa-4cb2-11e3-8e77-ce3f5508acd9/name"
Setups and teardowns are often called in conjunction with one another. To help prevent the accidental exclusion of a teardown, let a setup specify its corresponding teardown. That teardown would then get called automatically during the teardown phase.
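For example, a setup could declare its counterpart (the `teardown` key on the setup is the proposed addition; ids and request details are illustrative):

```json
"setupAndTeardowns": [{
    "id": "createTestItem",
    "teardown": "deleteTestItem",
    "request": {
        "method": "POST",
        "protocol": "http",
        "host": "${host}",
        "resource": "/items"
    },
    "expectedResponse": {
        "statusCode": 201
    }
}]
```

Any test listing createTestItem in its setup would then automatically get deleteTestItem run during its teardown phase.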
Include a 'length' operator for arrays that can work in conjunction with $gte and $lte.
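For instance (the `$length` operator is the proposed addition, composed with the existing mongo-style comparison operators):

```json
"expectedResponse": {
    "statusCode": 200,
    "body": {
        "items": { "$length": { "$gte": 1, "$lte": 20 } }
    }
}
```

This would pass only when the `items` array in the response body contains between 1 and 20 elements.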
It would be very beneficial, when constructing and organizing tests, to allow multiple disparate test files to be run as a single cohesive execution during an automated regression. This could work either by supporting a top-level runner file that references other test files, or by accepting multiple test files in a single invocation, with all results grouped in one output.