eykrehbein / strest
⚡️ CI-ready tests for REST APIs configured in YAML
License: MIT License
my_request:
  url: foo.com
  method: GET
  log_request: true
This would log the request object and the resolved URI. That would be helpful for keeping a copy of the JSON and for seeing the URI with environment variables and values populated.
It might be good to create a log object as:
my_request:
  url: foo.com
  method: GET
  log:
    output: true
    file: true
    request: true
Create a UI
strest-ui /tests
loads all requests into visualization (use similar layout to insomnia UI)
Allow stepping through requests and displaying results
Requests can be modified and saved back to disk
Values of variables are displayed alongside the 'code', e.g. Env(FOO) shown next to its resolved value bar
Repeat a request until a criterion is met.
requests:
  userRequest:
    url: http://localhost:3001/user
    method: GET
    data:
      params:
        name: testUser
    max_retries: 3
    delay: 3000
    validate:
      json:
        somethingCompleted: "true"
The above request executes a maximum of 3 times before the failure is propagated
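A minimal sketch of the proposed retry loop, in plain Python (the names `run_with_retries`, `send`, and `validate` are hypothetical, not strest's actual internals):

```python
import time

def run_with_retries(send, validate, max_retries=3, delay_ms=3000):
    """Re-send the request until validation passes; fail after max_retries."""
    for attempt in range(max_retries):
        response = send()
        if validate(response):
            return response
        if attempt < max_retries - 1:
            time.sleep(delay_ms / 1000.0)
    raise AssertionError("validation failed after %d attempts" % max_retries)

# Demo: a fake endpoint that only "completes" on the third call.
calls = {"count": 0}

def fake_send():
    calls["count"] += 1
    return {"somethingCompleted": "true" if calls["count"] >= 3 else "false"}

result = run_with_retries(fake_send,
                          lambda r: r["somethingCompleted"] == "true",
                          max_retries=3, delay_ms=0)
```

The `delay` field maps to the sleep between attempts, and the `validate` block to the predicate.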
When running strest against a directory, it would be beneficial to use response data from file 1 as a Value() in a request in file 2.
If the response's format is XML, convert it into JSON before validation to be able to validate it with the standard JSON syntax.
validate:
  json:
    ...
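As a sketch of the conversion step, a naive stdlib-only XML-to-dict pass (an assumption for illustration; it ignores attributes and repeated tags, which a real implementation would need to handle):

```python
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Naively convert an XML element tree into a dict for JSON validation."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

# Demo: an XML response body becomes a dict the JSON validator can walk.
body = "<user><name>testUser</name><active>true</active></user>"
converted = xml_to_dict(ET.fromstring(body))
```

After this step, the existing `validate: json:` syntax could be applied to `converted` unchanged.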
Running Express is a bit heavy for testing this library. Postman Echo has a great set of APIs that can be used.
https://docs.postman-echo.com
In my case, my API takes files and performs some operations on them, after which a new file is returned.
It would be good if strest supported form-data, which would work the same as with other data.
One would provide the name of the parameter and then the filepath (either absolute or relative to the directory where strest is executed).
Add feature to validate the response status code
version: 1
requests:
  test:
    url: https://echo.getpostman.com/status/400
    method: GET
    validate:
      code: 2xx # status code needs to be in range of 200 - 299
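The 2xx wildcard matching could be sketched like this (`code_matches` is a hypothetical helper, not strest's actual code):

```python
def code_matches(pattern, status):
    """True if a status code matches a pattern like '2xx' or '404';
    'x' is a wildcard for a single digit."""
    digits = str(status)
    return len(pattern) == len(digits) and all(
        p.lower() == "x" or p == d for p, d in zip(pattern, digits)
    )

# Demo: 204 is in the 2xx range, 400 is not, exact codes still match.
ok_204 = code_matches("2xx", 204)
ok_400 = code_matches("2xx", 400)
ok_exact = code_matches("404", 404)
```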
The execution order should be:
test_dir
  A_subdir
    3.strest.yml
    4.strest.yml
  Z_subdir
    5.strest.yml
    6.strest.yml
  1.strest.yml
  2.strest.yml
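A sketch of a collector that would produce this subdirectories-first order (`collect_tests` is a hypothetical helper; the demo recreates the tree above in a temp directory):

```python
import os
import tempfile

def collect_tests(root):
    """Depth-first: recurse into subdirectories (alphabetically) before
    collecting the directory's own *.strest.yml files."""
    entries = sorted(os.listdir(root))
    found = []
    for entry in entries:
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            found += collect_tests(path)
    for entry in entries:
        if entry.endswith(".strest.yml"):
            found.append(os.path.join(root, entry))
    return found

# Demo: build the tree from the issue, then collect in order.
root = tempfile.mkdtemp()
for name in ("A_subdir/3", "A_subdir/4", "Z_subdir/5", "Z_subdir/6", "1", "2"):
    path = os.path.join(root, name + ".strest.yml")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()
order = [os.path.basename(p) for p in collect_tests(root)]
```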
Executing strest tests/success/
results in the following execution order:
✔ Testing todoOne succeeded (0.188s)
✔ Testing todoTwo succeeded (0.107s)
✔ Testing todoOne succeeded (0.221s)
✔ Testing todoTwo succeeded (0.102s)
✔ Testing postman succeeded (0.293s)
✔ Testing my_variable_request succeeded (0.102s)
✔ Testing environment succeeded (0.101s)
✔ Testing fake succeeded (0.195s)
✔ Testing arr succeeded (0.571s)
✔ Testing arr1 succeeded (0.106s)
✔ Testing value1 succeeded (0.215s)
✔ Testing value2 succeeded (0.184s)
✔ Testing basic_auth succeeded (0.19s)
✔ Testing logging succeeded (0.197s)
✔ Testing if_Set succeeded (0.372s)
✔ Testing skipped skipped (0.001s)
✔ Testing executed succeeded (0.105s)
✔ Testing codeValidate succeeded (0.102s)
✔ Testing code404 succeeded (0.671s)
✔ Testing headersValidate succeeded (0.124s)
✔ Testing jsonValidate succeeded (0.1s)
✔ Testing jsonpath succeeded (0.264s)
✖ Testing maxRetries failed to validate. Retrying... (1.192s)
✔ Testing maxRetries succeeded (0.193s)
✔ Testing raw succeeded (0.191s)
✔ Testing login succeeded (0.188s)
✔ Testing verify_login succeeded (0.41s)
✔ Testing verify_login_chained succeeded (0.282s)
✔ Testing chaining_var1 succeeded (0.096s)
✔ Testing chaining_var2 succeeded (0.108s)
Based on current tests, this should be the order:
✔ Testing todoOne succeeded (0.188s)
✔ Testing todoTwo succeeded (0.107s)
✔ Testing postman succeeded (0.293s)
✔ Testing todoOne succeeded (0.221s)
✔ Testing todoTwo succeeded (0.102s)
✔ Testing my_variable_request succeeded (0.102s)
✔ Testing basic_auth succeeded (0.19s)
✔ Testing login succeeded (0.188s)
✔ Testing verify_login succeeded (0.41s)
✔ Testing verify_login_chained succeeded (0.282s)
✔ Testing chaining_var1 succeeded (0.096s)
✔ Testing chaining_var2 succeeded (0.108s)
Same as the connected API, just with Fake({{text}}) instead of Value(...).
I thought of maybe using faker and its faker.fake() API, i.e.:
data:
  json:
    title: Fake({{name.firstName}})
So far I see no unit tests for strest and no Travis CI / AppVeyor setup.
We should have these to catch bugs.
Say the returned response data has the following format,
[
  {
    "id": "5b9763b3ef7c4000018387cd",
    "from": "Bank",
    "to": "Bank",
    "amount": 0,
    "gameID": "SomePreDefiniedValue"
  }
]
How do I verify that "gameID" == "SomePreDefiniedValue"?
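Outside of strest, such a check reduces to indexing into the array and comparing the key; a plain-Python sketch (`all_match` is a hypothetical helper, not strest's validation code):

```python
def all_match(rows, key, expected):
    """True if every element of a JSON array has `key` equal to `expected`."""
    return all(row.get(key) == expected for row in rows)

# Demo: the response shape from the issue.
response = [
    {"id": "5b9763b3ef7c4000018387cd", "from": "Bank", "to": "Bank",
     "amount": 0, "gameID": "SomePreDefiniedValue"},
]
single_ok = response[0]["gameID"] == "SomePreDefiniedValue"
every_ok = all_match(response, "gameID", "SomePreDefiniedValue")
```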
Generate an HTML report with collapsible sections that show:
Summary
Request
Curl equivalent
Request details
Response details
This can be used as a design guide:
https://github.com/postmanlabs/newman-reporter-html
Option to add a parameter in each test file which defines in which order the files should be executed.
# File 1
version: 1
order: 1 # will be executed first
requests:
  ...

# File 2
version: 1
order: 2 # will be executed after file 1
requests:
  ...
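Honoring the order key could come down to a simple stable sort over the parsed files (`sort_by_order` is a hypothetical name; files without the key run last, in their original sequence):

```python
def sort_by_order(files):
    """Sort parsed test files by an optional top-level 'order' key."""
    return sorted(files, key=lambda parsed: parsed.get("order", float("inf")))

# Demo: file with order 1 runs first; a file with no order runs last.
sorted_files = sort_by_order([
    {"name": "b.strest.yml", "order": 2},
    {"name": "a.strest.yml", "order": 1},
    {"name": "c.strest.yml"},
])
names = [parsed["name"] for parsed in sorted_files]
```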
For debugging failed responses, it would be useful to print the response data, including headers, to the console. These usually include a request ID and, in the case of 4XX status codes, validation messages.
It would be good to support turning global logging on/off via an environment variable:
log: Env(LOG_STREST)
Currently, the above fails schema validation.
Add validation for headers in the response.
strest execute_all_this.yml
where the contents of execute_all_this.yml are:
---
- path/to/folder/with/strest_dot_yml_files
- path/to/folder/my.strest.yml
- path/to/folder/with/more/strest_dot_yml_files
Provide a codegen function.
strest --output curl
would result in the equivalent curl commands to perform the actions
Use this lib after it is accepted:
curlconverter/curlconverter#105
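Until that library lands, a rough sketch of what the codegen could look like (the request-dict shape here is an assumption for illustration, not strest's schema):

```python
import shlex

def to_curl(request):
    """Render a request dict as an equivalent curl command line."""
    parts = ["curl", "-X", request["method"]]
    for name, value in request.get("headers", {}).items():
        parts += ["-H", "%s: %s" % (name, value)]
    if "body" in request:
        parts += ["-d", request["body"]]
    parts.append(request["url"])
    # shlex.quote makes each piece safe to paste into a shell.
    return " ".join(shlex.quote(part) for part in parts)

# Demo: a simple GET becomes a one-line curl invocation.
cmd = to_curl({"method": "GET", "url": "https://postman-echo.com/get"})
```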
On Mac OS 10.13.6, with node 8.11.3, I have a test such as
requests:
  login:
    url: Env(URL)
    method: POST
    ....
    ....
I set URL in my env to http://foo.com/bar, but when I run the test it errors saying it can't talk to 127.0.0.1 for the url. Exact error:
strest ./post.yml
[ Strest ] Found 1 test file(s)
[ Strest ] Schema validation: 1 of 1 file(s) passed
✖ Testing login failed (0.041s)
Error: connect ECONNREFUSED 127.0.0.1:80
I took the exact same version of strest, the same strest test file, and the same env variable set to a Linux box with Node 6.14.0, and the test works correctly (uses http://foo.com/bar). It seems as though the Env() call doesn't recognize my environment variable on Node 8 + macOS but does on Node 6 + Linux. I did some googling but couldn't find whether there were changes or issues with reading the environment in Node 8 vs. 6. On Node 8 I made a one-liner Node app to print process.env and I see my environment (including URL), so I don't think there are. Maybe it's an issue with the templating/YAML parsing between versions where the Env() call exists?
Assume 2 files where the goal is to output an env var from the request in file 1 and use it in file 2
File 1:
requests:
  test:
    url: https://postman-echo.com/get
    method: GET
    data:
      params:
        foo: bar
    output_env_vars:
      FILE_1_OUTPUT: args.foo
output:
{"args":{"foo":"bar"}}
File 2:
requests:
  test:
    url: https://postman-echo.com/get
    method: GET
    data:
      params:
        foo: Env(FILE_1_OUTPUT)
Add an option to the CLI command to create a log file, formatted as a JSON array with an entry for each request, logging the response code, data, headers, and any error.
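A sketch of appending per-request entries to such a JSON-array log file (`append_log` is a hypothetical helper; the entry fields mirror the ones proposed above):

```python
import json
import os
import tempfile

def append_log(path, entry):
    """Append one per-request record to a JSON-array log file."""
    log = []
    if os.path.exists(path):
        with open(path) as handle:
            log = json.load(handle)
    log.append(entry)
    with open(path, "w") as handle:
        json.dump(log, handle, indent=2)

# Demo: two requests logged to a fresh temp file.
log_path = os.path.join(tempfile.mkdtemp(), "strest-log.json")
append_log(log_path, {"code": 200, "data": {"ok": True}, "headers": {}, "error": None})
append_log(log_path, {"code": 404, "data": None, "headers": {}, "error": "Not Found"})
with open(log_path) as handle:
    entries = json.load(handle)
```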
Assume a request returns:
[ {"foo":1, "bar": 1}, {"foo":2, "bar": 2} ]
It would be beneficial to extract the value of key bar given a value for foo. This is typically handled in code using loops or find functions.
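In code this is the usual find-by-key pattern; a sketch of the lookup the feature would replace (`pluck` is a hypothetical name):

```python
def pluck(rows, match_key, match_value, want_key):
    """Return want_key from the first row where match_key == match_value."""
    return next(row[want_key] for row in rows if row[match_key] == match_value)

# Demo: given foo == 2, extract that element's bar.
rows = [{"foo": 1, "bar": 1}, {"foo": 2, "bar": 2}]
bar_for_foo_2 = pluck(rows, "foo", 2, "bar")
```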
Describe the bug
I've written a request that fails schema validation but the strest run does not fail.
[ Strest ] Found 3 test file(s)
[ Strest ] Schema validation: 2 of 3 file(s) passed
Executing tests in ./
Executing tests in: /app/tests/
Executing tests in: /app/tests/tokens/
✔ Testing userToken succeeded (0.573s)
[ Strest ] ✨ Done in 0.634s
The only indication that something went wrong is "2 of 3 file(s) passed" at the top, which is easy to miss, especially in longer scripts.
error:
/ # strest foo.yml
[ Strest ] Found 1 test file(s)
[ Strest ] Schema validation: 1 of 1 file(s) passed
✖ Testing login failed (0.434s)
Error: self signed certificate
[ Strest ] Failed before finishing all requests
solution:
add a param to allow self-signed certs:
axios/axios#535
I haven't tested this, so maybe it is already possible.
Support nesting this n levels deep:
Value(foo.Env(BAR))
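One way to support arbitrary nesting is to expand innermost calls repeatedly until the string stops changing; a sketch with simplified semantics (the lookup tables and the `resolve` helper are assumptions, not strest's parser):

```python
import re

ENV_PATTERN = re.compile(r"Env\(([^()]*)\)")
VALUE_PATTERN = re.compile(r"Value\(([^()]*)\)")

def resolve(text, env, values):
    """Expand innermost Env(...)/Value(...) calls inside-out until the
    string reaches a fixed point, so nested calls resolve naturally."""
    while True:
        expanded = ENV_PATTERN.sub(lambda m: str(env[m.group(1)]), text)
        expanded = VALUE_PATTERN.sub(lambda m: str(values[m.group(1)]), expanded)
        if expanded == text:
            return expanded
        text = expanded

# Demo: Env(BAR) resolves first, then the resulting Value(foo.baz) lookup.
resolved = resolve("Value(foo.Env(BAR))", {"BAR": "baz"}, {"foo.baz": "hello"})
```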
strest foo.postman.json --postman
This would convert a postman collection into a set of directories and files.
There's no reasonable way to convert all the functionality (JavaScript test scripts), so this should focus on creating the requests only.
This is a common format for HTTP structures.
https://en.wikipedia.org/wiki/.har
http://www.softwareishard.com/blog/har-12-spec/#request
Google Chrome also uses it to archive requests.
This should be the schema used for the request object; once implementation is complete, requests would follow an established standard.
Possibly add this to a readme.
Assuming Dark+ theme
using this plugin:
https://github.com/fabiospampinato/vscode-highlight
add these settings:
"highlight.regexes": {
  "(Value\\(.*?\\))": {
    "regexFlags": "g",
    "filterFileRegex": ".*\\.strest\\.yml",
    "decorations": [
      { "color": "#9CDCFE" }
    ]
  },
  "(Env\\(.*?\\))": {
    "regexFlags": "g",
    "filterFileRegex": ".*\\.strest\\.yml",
    "decorations": [
      { "color": "#C586C0" }
    ]
  }
}
Hi guys,
With Postman, I follow BDD style Given...When...Then for naming request in a flow. Example:
Given user has a balance
When user tops up +$10
Then user balance will be increased by $10
I suggest adding a new field to describe the above style.
If possible, please let me know the fastest way to add a new field; I'd be happy to contribute a PR.
Thanks for your awesome efforts.
Installing Node/npm can be avoided for those using Docker.
Hi, thanks for writing this! I like the clean yaml approach to writing tests. So I started to do a little testing to see how this would fit in our processes for API validation / testing.
When running a simple test that gets an HTTP 400+ response, I would expect the exit code in the terminal to be set accordingly.
Success
version: 1
requests:
  get-test-todo:
    url: https://jsonplaceholder.typicode.com/todos/1
    method: GET

$ strest test.yml
[ Strest ] Found 1 test file(s)
[ Strest ] Schema validation: 1 of 1 file(s) passed
✔ Testing get-app-version succeeded
[ Strest ] ✨ Done in 0.131s
$ echo $?
0
Failure
version: 1
requests:
  get-test-todo:
    url: https://jsonplaceholder.typicode.com/todos/230
    method: GET

$ strest test.yml
[ Strest ] Found 1 test file(s)
[ Strest ] Schema validation: 1 of 1 file(s) passed
✔ Testing get-test-todo succeeded
✖ Testing get-more-todo failed
Error: Request failed with status code 404
[ Strest ] ✨ Done in 0.453s
$ echo $?
0
If there are other good ways to act on failing a test I am open for suggestions.
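One conventional fix is to derive the process exit code from the test results, so CI can act on `echo $?`; a sketch (`run_suite` is a hypothetical helper, not strest's internals):

```python
def run_suite(tests):
    """Run (name, callable) pairs; return 0 only if every test passed.
    Wiring this through sys.exit(run_suite(...)) makes `echo $?` non-zero
    whenever a request fails."""
    failed = [name for name, test in tests if not test()]
    for name in failed:
        print("Testing %s failed" % name)
    return 1 if failed else 0

# Demo: an all-passing suite exits 0, a suite with a failure exits 1.
ok_code = run_suite([("get-test-todo", lambda: True)])
fail_code = run_suite([("get-test-todo", lambda: True),
                       ("get-more-todo", lambda: False)])
```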
Thanks again for this project.
Thanks for this nice tool!
When writing tests it's also important to test failures, e.g. making sure that submitting invalid credentials does not give an auth token. It would be nice if you could do more advanced assertions on the response like so:
requests:
  login: # a successful login
    url: http://localhost:8080/auth
    method: POST
    body:
      json:
        username: alice
        password: secret
    response:
      code: [200-300) # expect a response code in range 200 (inclusive) to 300 (exclusive)
      body:
        type: Base64
  login using params:
    url: http://localhost:8080/auth
    method: POST
    params:
      username: alice
      password: secret
    response:
      code: [200-300) # expect a response code in range 200 (inclusive) to 300 (exclusive)
      body:
        type: Base64
  login wrong password:
    url: http://localhost:8080/auth
    method: POST
    body:
      json:
        username: alice
        password: 123
    response:
      code: [400-500) # expect a response code in range 400 (inclusive) to 500 (exclusive)
      body:
        json:
          error: wrong username or password
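The proposed [200-300) interval syntax could be parsed as follows (`code_in_range` is a hypothetical helper; square brackets mark inclusive bounds, round ones exclusive):

```python
def code_in_range(spec, status):
    """Check a status code against an interval spec like '[200-300)'."""
    low, high = (int(part) for part in spec[1:-1].split("-"))
    lower_ok = status >= low if spec.startswith("[") else status > low
    upper_ok = status <= high if spec.endswith("]") else status < high
    return lower_ok and upper_ok

# Demo: 200 is inside [200-300), 300 is excluded, 401 is inside [400-500).
login_ok = code_in_range("[200-300)", 200)
boundary_excluded = code_in_range("[200-300)", 300)
wrong_password_ok = code_in_range("[400-500)", 401)
```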
Why not add the possibility of stress testing, letting the test writer define how many requests per second to simulate?
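A pacing loop for such a requests-per-second option could be sketched as (`run_at_rate` is a hypothetical helper; real load generators would also use concurrency):

```python
import time

def run_at_rate(send, total, per_second):
    """Fire `total` requests, pacing them at roughly `per_second` per second."""
    interval = 1.0 / per_second
    start = time.monotonic()
    for i in range(total):
        send()
        # Sleep until the next slot in the schedule, if we're ahead of it.
        wake_at = start + (i + 1) * interval
        time.sleep(max(0.0, wake_at - time.monotonic()))

# Demo: five fake requests at a high rate (finishes almost instantly).
sent = []
run_at_rate(lambda: sent.append(1), total=5, per_second=1000)
```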
I think it is a good idea to be able to extract repeatable tests in dedicated files and import them.
Ex: I want to modularize some tests by domain, and each domain needs to have an initial authentication request that is common to all tests.
Add a feature that converts a curl command to a request in YAML syntax
Support this:
version: 1
requests:
  login:
    url: https://postman-echo.com/basic-auth
    Auth:
      basic:
        username: postman
        password: password
    method: GET
This would result in the header:
Authorization: Basic cG9zdG1hbjpwYXNzd29yZA==
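That header value is just the Base64 encoding of username:password; a sketch of how it could be computed:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value for HTTP Basic auth."""
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    return "Basic " + token

# Demo: the postman-echo credentials from the example above.
header = basic_auth_header("postman", "password")
```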
When executing against a directory, showing the filename (and possibly the sub-directories' names) would be beneficial
Support this:
version: 1
requests:
  todoOne:
    url: https://jsonplaceholder.typicode.com/posts
    method: POST
    data:
      json:
        myArray:
          - foo: 1
            bar: 1
          - foo: 2
            bar: 2
    validate:
      jsonpath:
        myArray.1.foo: 2
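The dotted jsonpath lookup could be resolved as follows (a sketch; it treats numeric segments as list indices, which matches the myArray.1.foo example but not full JSONPath):

```python
def jsonpath_get(document, path):
    """Resolve a dotted path like 'myArray.1.foo'; numeric segments
    index into lists, other segments into dicts."""
    node = document
    for part in path.split("."):
        node = node[int(part)] if part.isdigit() else node[part]
    return node

# Demo: the array from the request above; index 1, key foo.
doc = {"myArray": [{"foo": 1, "bar": 1}, {"foo": 2, "bar": 2}]}
value = jsonpath_get(doc, "myArray.1.foo")
```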
Postman provides this capability:
https://www.getpostman.com/docs/v6/postman/collection_runs/building_workflows
Typically, this is used after a response param is evaluated
When executing strest foo_dir/, file 2 will still run its tests even if a request in file 1 failed.
Option 1
Support strest --abort foodir/
Option 2
Change default behavior to abort directory execution if a request fails.
This might also require the ability to override this behavior on a per-file basis:
version: 1
continue: true
...
This would allow the defined env vars to be stored in an initial test file.
version: 1
environment:
  STREST_URL: https://jsonplaceholder.typicode.com
requests:
  environment:
    url: Env(STREST_URL)/todos/1
    method: GET
It looks like a lot of the YAML needed overlaps with what's in an OpenAPI spec. Do you have plans for a converter, or for using the OpenAPI spec directly?
Assume the filename is postman-echo.strest.yml
Access the filename in the request:
version: 1 # only version at the moment
requests: # all test requests will be listed here
  testRequest: # name the request however you want
    url: https://Filename().com/get # required
    method: GET # required
logger:
  url: foo.com
  method: GET
  log_to_file: true
This would result in a file called logger.json that contains the output of the request.
Use Environment Variables to replace {{}}
version: 1 # only version at the moment
requests: # all test requests will be listed here
  foo:
    url: "{{MY_ENV_VAR}}/api/v1/bar"
    method: GET # required