loadtest

Runs a load test on the selected HTTP or WebSockets URL. The API allows for easy integration in your own tests.

Installation

Install globally as root:

# npm install -g loadtest

On Ubuntu or Mac OS X systems install using sudo:

$ sudo npm install -g loadtest

For access to the API just install it in your npm package as a dev dependency:

$ npm install --save-dev loadtest

Compatibility

Each version of loadtest requires a minimum Node.js version:

  • Node.js v16 or later: ^6.0.0
  • Node.js v10 or later: ^5.0.0
  • Node.js v8 or later: 4.x.y
  • Node.js v6 or earlier: ^3.1.0
  • ES5 support (no let, const or arrow functions): ^2.0.0.

Usage

Why use loadtest instead of any of the other available tools, notably Apache ab? loadtest allows you to configure and tweak requests to simulate real-world loads.

Basic Usage

Run as a script to load test a URL:

$ loadtest [-n requests] [-c concurrency] [-k] URL

The URL can be "http://", "https://" or "ws://". Set the max number of requests with -n, and the desired level of concurrency with the -c parameter. Use keep-alive connections with -k whenever it makes sense, which is almost always, except when you are specifically testing the opening and closing of connections.

Single-dash parameters (e.g. -n) are designed to be compatible with Apache ab, except that here you can add the parameters after the URL.

To get online help, run loadtest without parameters:

$ loadtest

Usage Dos

The set of basic options is designed to be compatible with Apache ab. But while ab can only set a concurrency level and let the server adjust to it, loadtest allows you to set a rate of requests per second with the --rps option. Example:

loadtest -c 10 --rps 200 http://mysite.com/

This command sends exactly 200 requests per second with concurrency 10, so you can see how your server copes with sustained rps. Even if ab reported a rate of 200 rps, you will be surprised to see how a constant rate of requests per second affects performance: no longer are the requests adjusted to the server, but the server must adjust to the requests! Rps rates are usually lowered dramatically, at least 20~25% (in our example from 200 to 150 rps), but the resulting figure is much more robust.
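The difference can be sketched in a few lines of JavaScript (an illustration of the idea, not loadtest's internals): at a fixed rate, send times depend only on the clock, never on server responses.

```javascript
// Illustrative sketch, not loadtest's internals: at a fixed rate,
// each send time is determined by the clock alone, so the server
// must keep up instead of pacing the client.
function sendTimes(rps, durationMs) {
  const interval = 1000 / rps // milliseconds between requests
  const times = []
  for (let t = 0; t < durationMs; t += interval) times.push(t)
  return times
}

const times = sendTimes(200, 1000)
console.log(times.length)        // 200 requests in one second
console.log(times[1] - times[0]) // evenly spaced 5 ms apart
```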

loadtest is also quite extensible. Using the provided API it is very easy to integrate loadtest with your package, and run programmatic load tests. loadtest makes it very easy to run load tests as part of systems tests, before deploying a new version of your software. The result includes mean response times and percentiles, so that you can abort deployment e.g. if 99% of all requests don't finish in 10 ms or less.
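Such a deployment gate boils down to a percentile check; a sketch follows (the percentile() helper here is hypothetical, since loadtest computes and reports these percentiles for you):

```javascript
// Hypothetical deployment gate: compute the 99th percentile of measured
// latencies and fail if it exceeds a budget. loadtest reports these
// percentiles itself; this only shows the decision logic.
function percentile(latenciesMs, p) {
  const sorted = [...latenciesMs].sort((a, b) => a - b)
  const index = Math.ceil((p / 100) * sorted.length) - 1
  return sorted[Math.max(0, index)]
}

const latencies = [2, 3, 3, 4, 5, 5, 6, 8, 9, 12]
const p99 = percentile(latencies, 99)
const deployAllowed = p99 <= 10
console.log(p99, deployAllowed) // 12 false
```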

Usage Don'ts

loadtest performance has improved significantly, but it is still limited: it saturates a single CPU pretty quickly, which is why it uses half the available cores in your processor by default. A Node.js process can reach 100% usage in top, which happens at roughly 4000~7000 rps per core; in that case, adjust the number of cores. For instance, with eight cores you can expect a maximum performance of about 8 * 5000 = 40 krps.

You can measure the practical limits of loadtest on your specific test machines by running it against a simple test server and seeing when it reaches 100% CPU. Run the following commands on two different consoles:

$ node bin/testserver.js
$ node bin/loadtest.js -n 1000000 -c 100 http://localhost:7357/

If you have reached the limits of loadtest even after using all cores, there are other tools that you can try.

  • AutoCannon: also an npm package, awesome tool with an interface similar to wrk.
  • Apache ab has great performance, but it is limited by a single CPU performance. Its practical limit is somewhere around ~40 krps.
  • weighttp is also ab-compatible and is supposed to be very fast (the author has not personally used it).
  • wrk is multithreaded and highly performant. It may need installing from source, though, and its interface is not ab-compatible.
  • wrk2: evolution of wrk.

Regular Usage

The following parameters are compatible with Apache ab.

-t, --maxSeconds

Max number of seconds to wait until requests no longer go out. Default is 10 seconds; applies only if no --maxRequests is specified.

Note: this is different from Apache ab, which stops receiving requests after the given seconds.

Warning: max seconds used to have no default value, so tests would run indefinitely if no --maxSeconds and no --maxRequests were specified. Max seconds was changed to default to 10 in version 8.

-n, --maxRequests

Number of requests to send out. Default is no limit; will keep on sending until the time limit in --maxSeconds is reached.

Note: the total number of requests sent can be bigger than the parameter if there is a concurrency parameter; loadtest will report just the first n.

-c, --concurrency

loadtest will create a certain number of clients; this parameter controls how many. Requests from them will arrive concurrently to the server. Default value is 10.

Note: requests are not sent in parallel (from different processes), but concurrently (a second request may be sent before the first has been answered). Does not apply if --requestsPerSecond is specified.

Beware: if concurrency is too low, there may not be enough clients to send all the supported traffic; adjust it with -c if needed.

Warning: concurrency used to have a default value of 1, until it was changed to 10 in version 8.
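The concurrency model can be illustrated with a toy simulation (not loadtest's code): at any instant, at most c requests are outstanding, and a client only sends its next request once the previous one has been answered.

```javascript
// Toy model, not loadtest's code: `concurrency` clients each keep at
// most one request in flight, so at any instant at most `concurrency`
// requests are outstanding.
function simulate(concurrency, totalRequests) {
  let inFlight = 0, maxInFlight = 0, sent = 0
  while (sent < totalRequests) {
    // each tick, every idle client sends a request...
    while (inFlight < concurrency && sent < totalRequests) {
      inFlight++
      sent++
      maxInFlight = Math.max(maxInFlight, inFlight)
    }
    // ...and one response arrives, freeing a client
    inFlight--
  }
  return {sent, maxInFlight}
}

const outcome = simulate(10, 100)
console.log(outcome) // sent: 100, maxInFlight: 10
```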

-k, --keepalive

Open connections using keep-alive: use header Connection: keep-alive instead of Connection: close.

Note: uses agentkeepalive, which performs better than the default Node.js agent.

-C, --cookie cookie-name=value

Send a cookie with the request. The cookie name=value is then sent to the server. This parameter can be repeated as many times as needed.

-H, --header header:value

Send a custom header with the request. The line header:value is then sent to the server. This parameter can be repeated as many times as needed. Example:

$ loadtest -H user-agent:tester/0.4 ...

Note: if not present, loadtest will add a few headers on its own: the "host" header parsed from the URL, a custom user agent "loadtest/" plus version (loadtest/1.1.0), and an accept header for "*/*".

Note: when the same header is sent several times, only the last value will be considered. If you want to send multiple values with a header, separate them with semicolons:

$ loadtest -H accept:text/plain;text/html ...

Note: if you need to add a header with spaces, be sure to surround both header and value with quotes:

$ loadtest -H "Authorization: Basic xxx=="

-T, --contentType

Set the MIME content type for POST data. Default: text/plain.

-P, --postBody

Send the string as the POST body. E.g.: -P '{"key": "a9acf03f"}'

-A, --patchBody

Send the string as the PATCH body. E.g.: -A '{"key": "a9acf03f"}'

-m, --method

Set method that will be sent to the test URL. Accepts: GET, POST, PUT, DELETE, PATCH, and lowercase versions. Default is GET. Example: -m POST.

--data body

Add some data to send in the body. It does not support method GET. Requires setting the method with -m and the type with -T. Example: --data '{"username": "test", "password": "test"}' -T 'application/x-www-form-urlencoded' -m POST

-p, --postFile

Send the data contained in the given file in the POST body. Remember to set -T to the correct content-type.

If POST-file has .js extension it will be imported. It should be a valid node module and it should export a default function, which is invoked with an automatically generated request identifier to provide the body of each request. This is useful if you want to generate request bodies dynamically and vary them for each request.

Example:

export default function request(requestId) {
  // this object will be serialized to JSON and sent in the body of the request
  return {
	key: 'value',
	requestId: requestId
  }
}

See sample file in sample/post-file.js, and test in test/body-generator.js.

-u, --putFile

Send the data contained in the given file as a PUT request. Remember to set -T to the correct content-type.

If PUT-file has .js extension it will be imported. It should be a valid node module and it should export a default function, which is invoked with an automatically generated request identifier to provide the body of each request. This is useful if you want to generate request bodies dynamically and vary them for each request. For examples see above for -p.

-a, --patchFile

Send the data contained in the given file as a PATCH request. Remember to set -T to the correct content-type.

If PATCH-file has .js extension it will be imported. It should be a valid node module and it should export a default function, which is invoked with an automatically generated request identifier to provide the body of each request. This is useful if you want to generate request bodies dynamically and vary them for each request. For examples see above for -p.

-r, --recover

Recover from errors. Always active: loadtest does not stop on errors. After the tests are finished, if there were errors, a report with all error codes will be shown.

-s, --secureProtocol

The TLS/SSL method to use. (e.g. TLSv1_method)

Example:

$ loadtest -n 1000 -s TLSv1_method https://www.example.com

-V, --version

Show version number and exit.

Advanced Usage

The following parameters are not compatible with Apache ab.

--rps, --requestsPerSecond

Controls the number of requests per second that are sent. Can be fractional, e.g. --rps 0.5 sends one request every two seconds. In this mode requests are not sent as soon as the previous one is answered, but periodically, even if previous requests have not been answered yet.

Note: the --concurrency option will be ignored if --requestsPerSecond is specified; clients will be created on demand.

Note: --rps is not supported for websockets.

--cores number

Start loadtest in multi-process mode on a number of cores simultaneously. Forks the requested number of processes using the Node.js cluster module. Default: half the available CPUs on the machine.

The total number of requests and the rps rate are shared among all processes. The result shown is the aggregation of results from all cores.

Note: this option is not available in the API, since it runs just within the calling process.

Warning: the default value for --cores has changed in version 7+, from 1 to half the available CPUs on the machine. Set to 1 to get the previous single-process mode.
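How a shared budget might be divided among worker processes can be sketched as follows (an illustration of the idea, not loadtest's actual cluster code):

```javascript
// Illustration only, not loadtest's cluster code: split a total request
// count and rps budget across worker processes so the shares add up
// exactly to the totals.
function splitLoad(totalRequests, rps, cores) {
  const base = Math.floor(totalRequests / cores)
  const remainder = totalRequests % cores
  return Array.from({length: cores}, (_, i) => ({
    maxRequests: base + (i < remainder ? 1 : 0),
    requestsPerSecond: rps / cores,
  }))
}

const shares = splitLoad(1000, 200, 3)
console.log(shares.map(s => s.maxRequests)) // [ 334, 333, 333 ]
```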

--timeout milliseconds

Timeout for each generated request in milliseconds. Setting this to 0 disables timeout (default).

-R requestGeneratorModule.js

Use a custom request generator function from an external file. See an example of a request generator module in requestGenerator. Also see sample/request-generator.js for some sample code including a body (or sample/request-generator.ts for ES6/TypeScript).

--agent (deprecated)

Open connections using keep-alive.

Note: instead of using the default agent, this option is now an alias for -k.

--quiet

Do not show any messages.

--debug (deprecated)

Show debug messages.

Note: deprecated in version 6+.

--insecure

Allow invalid and self-signed certificates over https.

--cert path/to/cert.pem

Sets the certificate for the http client to use. Must be used with --key.

--key path/to/key.pem

Sets the key for the http client to use. Must be used with --cert.

--tcp (experimental)

Use low-level TCP sockets, which are faster than the standard HTTP library. Not all options are supported.

Warning: experimental option. May not work with your test case. See TCP Sockets Performance for details.

Test Server

loadtest bundles a test server. To run it:

$ testserver-loadtest [options] [port]

This command will show the number of requests received per second, the latency in answering requests and the headers for selected requests.

The server returns a short text 'OK' for every request, so that latency measurements don't have to take into account request processing.

If no port is given then the default port 7357 will be used. The optional delay instructs the server to wait for the given number of milliseconds before answering each request, to simulate a busy server. You can also simulate errors on a given percentage of requests.

The following optional parameters are available.

--delay ms

Wait the specified number of milliseconds before answering each request.

--error 5xx

Return the given error for every request.

--percent yy

Return an error (default 500) only for the specified % of requests.
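One deterministic way such a server could decide which requests fail is sketched below (a hypothetical helper, not the bundled test server's code):

```javascript
// Hypothetical helper, not the bundled test server's code: return an
// error for the given percentage of requests, spread evenly rather
// than at random so results are reproducible.
function makeErrorDecider(percent) {
  let count = 0
  return () => {
    count += 1
    return percent > 0 && (count * percent) % 100 < percent
  }
}

const shouldError = makeErrorDecider(5)
let errors = 0
for (let i = 0; i < 100; i++) if (shouldError()) errors++
console.log(errors) // 5 errors out of 100 requests
```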

--cores number

Number of cores to use. If not 1, will start in multi-process mode.

Note: since version v6.3.0 the test server uses half the available cores by default; use --cores 1 for single-process mode.

Complete Example

Let us now see how to measure the performance of the test server.

First we install loadtest globally:

$ sudo npm install -g loadtest

Now we start the test server:

$ testserver-loadtest --cores 2
Listening on http://localhost:7357/
Listening on http://localhost:7357/

On a different console window we run a load test against it for 20 seconds with concurrency 10 (only relevant results are shown):

$ loadtest http://localhost:7357/ -t 20 -c 10
...
Requests: 9589, requests per second: 1915, mean latency: 10 ms
Requests: 16375, requests per second: 1359, mean latency: 10 ms
Requests: 16375, requests per second: 0, mean latency: 0 ms
...
Completed requests:  16376
Requests per second: 368
Total time:          44.503181166000005 s

Percentage of requests served within a certain time
  50%      4 ms
  90%      5 ms
  95%      6 ms
  99%      14 ms
 100%      35997 ms (longest request)

The result was quite erratic, with some requests taking up to 36 seconds; this suggests that Node.js is queueing some requests for a long time, and answering them irregularly. Now we will try a fixed rate of 1000 rps:

$ loadtest http://localhost:7357/ -t 20 -c 10 --rps 1000
...
Requests: 4551, requests per second: 910, mean latency: 0 ms
Requests: 9546, requests per second: 1000, mean latency: 0 ms
Requests: 14549, requests per second: 1000, mean latency: 20 ms
...
Percentage of requests served within a certain time
  50%      1 ms
  90%      2 ms
  95%      8 ms
  99%      133 ms
 100%      1246 ms (longest request)

Again erratic results. In fact if we leave the test running for 50 seconds we start seeing errors:

$ loadtest http://localhost:7357/ -t 50 -c 10 --rps 1000
...
Requests: 29212, requests per second: 496, mean latency: 14500 ms
Errors: 426, accumulated errors: 428, 1.5% of total requests

Let us lower the rate to 500 rps:

$ loadtest http://localhost:7357/ -t 20 -c 10 --rps 500
...
Requests: 0, requests per second: 0, mean latency: 0 ms
Requests: 2258, requests per second: 452, mean latency: 0 ms
Requests: 4757, requests per second: 500, mean latency: 0 ms
Requests: 7258, requests per second: 500, mean latency: 0 ms
Requests: 9757, requests per second: 500, mean latency: 0 ms
...
Requests per second: 500
Completed requests:  9758
Total errors:        0
Total time:          20.002735398000002 s
Requests per second: 488
Total time:          20.002735398000002 s

Percentage of requests served within a certain time
  50%      1 ms
  90%      1 ms
  95%      1 ms
  99%      14 ms
 100%      148 ms (longest request)

Much better: a sustained rate of 500 rps is seen most of the time, 488 rps average, and 99% of requests answered within 14 ms.

We now know that our server can accept 500 rps without problems. Not bad for a single-process naïve Node.js server... We may refine our results further to find at which point from 500 to 1000 rps our server breaks down.

But instead let us research how to improve the result. One obvious candidate is to add keep-alive to the requests so we don't have to create a new connection for every request. The result (with the same test server) is impressive:

$ loadtest http://localhost:7357/ -t 20 -c 10 -k
...
Requests per second: 4099

Percentage of requests served within a certain time
  50%      2 ms
  90%      3 ms
  95%      3 ms
  99%      10 ms
 100%      25 ms (longest request)

Now we're talking! The steady rate also goes up to 2 krps:

$ loadtest http://localhost:7357/ -t 20 -c 10 --keepalive --rps 2000
...
Requests per second: 1950

Percentage of requests served within a certain time
  50%      1 ms
  90%      2 ms
  95%      2 ms
  99%      7 ms
 100%      20 ms (longest request)

Not bad at all: 2 krps with a single core, sustained. However, if you try to push it beyond that, at 3 krps it will fail miserably.

API

loadtest is not limited to running from the command line; it can be controlled using an API, allowing you to load test your application in your own tests. A short introduction follows; see the complete API documentation for details.

Invoke Load Test

To run a load test, invoke the exported function loadTest() with the desired options:

import {loadTest} from 'loadtest'

const options = {
    url: 'http://localhost:8000',
    maxRequests: 1000,
}
const result = await loadTest(options)
result.show()
console.log('Tests run successfully')

Beware: if there are no maxRequests and no maxSeconds, the test will run forever.

loadTest() Parameters

A simplified list of parameters is shown below; see doc/api.md for the full explanations with examples.

  • url: URL to invoke, mandatory.
  • concurrency: how many clients to start in parallel.
  • maxRequests: max number of requests; after they are reached the test will end.
  • maxSeconds: max number of seconds to run the tests.
  • timeout: timeout for each generated request in milliseconds, set to 0 to disable (default).
  • cookies: array of cookies to send, of the form name=value.
  • headers: object with headers, each with the value as string. Separate by semicolons to have multiple values.
  • method: HTTP method to use, default GET.
  • body: contents to send in the body of the message.
  • contentType: MIME type to use for the body, default text/plain.
  • requestsPerSecond: how many requests will be sent per second.
  • requestGenerator: custom request generator function.
  • agentKeepAlive: if true, will use 'Connection: Keep-alive'.
  • quiet: if true, do not show any messages.
  • indexParam: parameter to replace in URL and body with a unique index.
  • indexParamCallback: function to generate unique indexes.
  • insecure: allow invalid and self-signed certificates over https.
  • secureProtocol: TLS/SSL method to use.
  • statusCallback(error, result): function to call after every request is completed.
  • contentInspector(result): function to call before aggregating statistics.
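As a sketch of how the callback hooks can be used (the `requestElapsed` field name below is an assumption; check doc/api.md for the real result shape):

```javascript
// Illustrative sketch of a statusCallback: collect per-request latencies
// as each request completes. The `requestElapsed` field name is an
// assumption, not confirmed loadtest API; see doc/api.md.
const latencies = []
function statusCallback(error, result) {
  if (error) return // failed requests carry no latency here
  latencies.push(result.requestElapsed)
}

// Simulating three completed requests, one of them failed:
statusCallback(null, {requestElapsed: 12})
statusCallback(new Error('timeout'), null)
statusCallback(null, {requestElapsed: 8})
console.log(latencies) // [ 12, 8 ]
```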

Start Test Server

To start the test server use the exported function startServer() with a set of options:

import {startServer} from 'loadtest'
const server = await startServer({port: 8000})
// do your thing
await server.close()

The following options are available, see doc/api.md for details.

  • port: optional port to use for the server, default 7357.
  • delay: milliseconds to wait before answering each request.
  • error: HTTP status code to return, default 200 (no error).
  • percent: return error only for the given % of requests.
  • logger(request, response): function to call after every request.

Returns a test server that you can close() when finished.

Configuration file

It is possible to put configuration options in a file named .loadtestrc in your working directory or in a file whose name is specified in the loadtest entry of your package.json. The options in the file will be used only if they are not specified in the command line.
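The precedence rule amounts to a simple merge in which command-line options win (a sketch of the behavior, not loadtest's implementation):

```javascript
// Sketch of the precedence rule, not loadtest's implementation:
// file options apply only when not given on the command line.
function mergeOptions(cliOptions, fileOptions) {
  return {...fileOptions, ...cliOptions}
}

const merged = mergeOptions(
  {concurrency: 5},                  // from the command line
  {concurrency: 10, maxSeconds: 20}, // from .loadtestrc
)
console.log(merged) // concurrency: 5, maxSeconds: 20
```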

The expected structure of the file is the following:

{
	"delay": "Delay the response for the given milliseconds",
	"error": "Return an HTTP error code",
	"percent": "Return an error (default 500) only for some % of requests",
	"maxRequests": "Number of requests to perform",
	"concurrency": "Number of requests to make",
	"maxSeconds": "Max time in seconds to wait for responses",
	"timeout": "Timeout for each request in milliseconds",
	"method": "method to url",
	"contentType": "MIME type for the body",
	"body": "Data to send",
	"file": "Send the contents of the file",
	"cookies": {
		"key": "value"
	},
	"headers": {
		"key": "value"
	},
	"secureProtocol": "TLS/SSL secure protocol method to use",
	"insecure": "Allow self-signed certificates over https",
	"cert": "The client certificate to use",
	"key": "The client key to use",
	"requestGenerator": "JS module with a custom request generator function",
	"recover": "Do not exit on socket receive errors (default)",
	"agentKeepAlive": "Use a keep-alive http agent",
	"proxy": "Use a proxy for requests",
	"requestsPerSecond": "Specify the requests per second for each client",
	"indexParam": "Replace the value of given arg with an index in the URL"
}

See sample file in sample/.loadtestrc.

For more information about the actual configuration file name, read the confinode user manual. In the list of the supported file types, please note that only synchronous loaders can be used with loadtest.

Complete Example

The file test/integration.js contains complete examples, which also form a full integration test suite: they start the server with different options, send requests, wait for finalization and close down the server.

Licensed under The MIT License

Copyright (c) 2013-9 Alex Fernández [email protected] and contributors.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

loadtest's People

Contributors

alexfernandez avatar alexsaves avatar alopes avatar bil-ash avatar cyb3rs4pi3n avatar danieltdt avatar dotchev avatar erikzaadi avatar fefe982 avatar ffoysal avatar fonthardcore avatar hodbauer avatar iainbryson avatar joshjung avatar khaihkd avatar miguelcnf avatar nielsdekeijzer avatar philbooth avatar pwmcintyre avatar rcrichton avatar rv-kip avatar seriousmanual avatar simoneb avatar sveyret avatar tangxinfa avatar thanhson1085 avatar trustmaster avatar venetahristova avatar xavierchow avatar ziyiwang avatar

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

loadtest's Issues

rename testserver

maybe it would be a good idea to rename the bin entry for the testserver from testserver to loadtest-testserveras testserver is a quite generic name and may interfere with other programs.

won't compile

donvawter@TheBeast ~> loadtest -c 10 -rps 200 http://localhost:2500
/Users/donvawter/.nvm/versions/node/v0.12.7/lib/node_modules/loadtest/bin/loadtest.js:86
        if (if (method_accept.indexOf(options.method) === -1)){
            ^^
SyntaxError: Unexpected token if
    at exports.runInThisContext (vm.js:73:16)
    at Module._compile (module.js:443:25)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Function.Module.runMain (module.js:501:10)
    at startup (node.js:129:16)
    at node.js:814:3

Error running testserver

After running npm install -g loadtest

I tried to run the testserver, this is what I got:

C:\Users\mer\AppData\Roaming\npm\node_modules\loadtest\lib\cli-wrapper.js:24
lib.run(args);
    ^
TypeError: Object #<Object> has no method 'run'
    at Object.<anonymous> (C:\Users\mer\AppData\Roaming\npm\node_modules\loadtest\lib\cli-wrapper.js:24:5)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:901:3

callback per 1s

Is it possible to show statistic in same callback function ,for example, 1 time per seconds?
I thinking to build real-time graphics...

Total number of request and Concurrent request are not in agreement

The number of total request exceeds the number parsed in the argument with [ -n ] when the argument passed in [ -c ] in not a multiple of [ -n ]. For Example -
loadtest -n 1 -c 200 [URL]
This command sends 200 requests to the URL despite of the fact that total request number is 1.
loadtest -n 401 -c 400 [URL]
This command sends 800 requests to the URL despite of the face that total request number is 401.

Recursive process.nextTick detected. This will break in the next version of node. Please use setImmediate for recursive deferral.

Hi,

I encountered above warning multiple times when running loadtest. Usually followed by:
RangeError: Maximum call stack size exceeded

This occurs when the server starts returning errors (my hypothesis). It causes a stacking of nextTick calls in node's core libraries http, net and dns.
I could avoid this by adding a setImmediate in getRequestFinisher:

        setImmediate(function () {
            operation.callback(error, result, callback);
        });

I don't have the time to sort out whether this is the right place or whether Node should live up to their own warning. I just know that it cures the problem.

loadtest API callback fired after 1/n client finishes

Simply take a route

  app.get("/testConcurrency", function(req, res){
            // some sql query that can take different time to insert same amount of data
            setTimeout(function(){ res.send("ok"); }, variableTimeout)
  });

And calling loadtest api

var options = {
       url: url,
      "maxRequests": 1,
      "concurrency": 3,
} 
loadtest.loadTest(options, function(error, result){
    if(error){
        res.send("Error: " + error);
    }
    // console.log(result);
    // this will send the result from the 1st request
   // hence ignoring the time spent on subsequent concurrent requests
    res.send(result);
})

Shouldn't "totalTimeSeconds" in result should be sum of all the concurrent requests?

testing set of different routes/endpoints on a given server

I remember experimenting with this module 6 months ago. What wasn't obvious then, and still isn't obvious, is how you put a load on a server by making requests to a variety of different routes, instead of just a single route/endpoint. Forgiven me if I mistaken, but is this possible with this module? It this easy to set up?

Support dynamic post data

Hi,

I'd like to propose an enhancement which would allow to generate post data dynamically, if you are interested I can go on and implement it.

It would be compatible with the current -p switch, and its behavior would be something like this:

  • if the file does not have .js extension, then proceed as usual
  • if the file does have .js extension, try to require it
    • the required module should export a function which, when invoked, returns the data to post

When the request is issued it would check whether the body property is a string, in that case it's passed as-is, otherwise it would invoke it to retrieve the data to post.

Bad Request / Error 400 all the time, regardless of domain tested

I get error 400 on any url using the below code. Urls like `http://localhost:62562' or 'http://google.com'. Running this on win7.

function (callback) {
    loadtest.loadTest({
      url: rootUrl,
      maxRequests: 50,
      concurrency: 2,
      debug: debug
    }, function(error, result) {
      if (error) {
        callback(error);
        return console.error('Got an error: %s', error);
      }

      console.log('Item complete.');
      callback(null, result);
    });
  }

example output:

[Tue May 06 2014 15:58:18 GMT+1000 (AUS Eastern Standard Time)] DEBUG Connection 61796588d889bab7833ea881e859207c695694b
2d8529a2423a16cfa9986a95d failed: Status code 400
[Tue May 06 2014 15:58:18 GMT+1000 (AUS Eastern Standard Time)] DEBUG New value: 67.097
[Tue May 06 2014 15:58:18 GMT+1000 (AUS Eastern Standard Time)] DEBUG Partial requests: 49
[Tue May 06 2014 15:58:18 GMT+1000 (AUS Eastern Standard Time)] DEBUG Total requests 49, max requests: 50
[Tue May 06 2014 15:58:18 GMT+1000 (AUS Eastern Standard Time)] DEBUG HTTP client connected to http://kiandra.com.au wit
h id e90e347fc4737044737daa760bb3db83e3129c3a20e5bdf28f76c63ded2e44c0
[Tue May 06 2014 15:58:18 GMT+1000 (AUS Eastern Standard Time)] DEBUG Body: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01
//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>

Have no idea where it's getting the body from.

Requests with empty body when using requestGenerator

I wrote simple express application and loadtest for it. https://github.com/MyLittleRobo/loadtesttest
The problem is that some number of first requests always have empty body. Server returns BadRequest if request does not have "data" field. Tests show that the number of bad requests is always equal to number of concurrent clients.

Tested on Windows 8 and Mac OS X.

Probably I did something wrong in request generator though. When I use just "body" (not generator) all requests are valid.

Loadtest can't handle redirects

Hi,
I'm trying to test one service and loadtest fails to load required response because it receives 302 HTTP Code, here are some log records from verbose mode:

[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG Initializing histogram for 1434
[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG Total requests 78, max requests: undefined
[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG HTTP client connected to http://localhost/light/common with id d2f445d1be267c024a8bee628ab53b97ce60914a9564d2046288213b67489f0c
[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG Connection d2f445d1be267c024a8bee628ab53b97ce60914a9564d2046288213b67489f0c failed: Status code 302
[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG New value: 1488.4629810000001
[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG Partial requests: 79
[Fri May 01 2015 15:04:11 GMT+0300 (MSK)] DEBUG Initializing histogram for 1488

Is it possible to pass any options to force redirect handling?

post body

For me a very important feature of ab is the possibility to send a POST body (with Content-Type application/x-www-form-urlencoded or application/json). Is this planned for the future?

-C cmdline parameter

Any attempt to set the -C or -H parameter with a cookie name fails and prints Missing URL to load-test.
I have tried various combinations, however it still fails (OS X); loadtest version: 1.1.0

loadtest "http://test/" -c 1 -n 1 -H 'Cookie:SPRING_SECURITY_CONTEXT=ZmYzYjZmYjItZThjOS00ZmZhLTkyOWQtZDRjYzE3NmRmZWIy'

I have tried the same with:

loadtest "http://test/" -c 1 -n 1 -C 'SPRING_SECURITY_CONTEXT=ZmYzYjZmYjItZThjOS00ZmZhLTkyOWQtZDRjYzE3NmRmZWIy'

Same result: Missing URL to load-test

Query (URL params)

Is it possible to set query params for the request? It's not flexible to add these params to the URL itself.
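As a workaround sketch (the param names and URL below are placeholders), the query string can at least be built programmatically before handing the URL to loadtest:

```javascript
// Workaround sketch: build the query string with Node's global
// URLSearchParams, then pass the resulting URL to loadtest as the target.
const params = new URLSearchParams({ page: '2', limit: '50' }); // placeholder params
const url = 'http://localhost:3000/api/items?' + params.toString();
// the url is then used as the target, e.g.: loadtest -n 100 '<url>'
```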

Use of prototypes can break application behavior during tests

This library's use of the prototypes package, which includes:

safeParseInt

and which makes

parseInt(undefined);

return 0 instead of NaN, breaks application tests whenever they pull in loadtest to use directly as an API.

I would suggest not using the prototypes library within this library, since it changes the signatures of existing Math methods that other libraries already rely on.

prototypes also breaks other third-party libraries in general by changing those relied-upon signatures.

use different payload for each request

I'd like to POST different payload data for each request, e.g.:

{ taskId : "a1",..... }

{taskId : "a2" ,......}

Any advice on how I could achieve this? Thanks.
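One hedged approach is to keep a counter and serialize a fresh body per call, e.g. from inside a requestGenerator (the taskId format follows the examples above; everything else is a placeholder):

```javascript
// Sketch: produce a distinct JSON payload per request with a counter.
let counter = 0;
function nextPayload() {
  counter += 1;
  return JSON.stringify({ taskId: 'a' + counter });
}
// Each call yields a new body; inside a requestGenerator this would be
// written to the request, e.g. request.write(nextPayload());
```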

Worst performance results than ab

Hi there,

I need to run a really heavy performance test against a node app with express services. In an initial test against a hello world service I see very significant differences between loadtest and the traditional ab:

With loadtest:

$ loadtest -n 50000 -c 1000 http://localhost:3000/version

[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO 
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Target URL:          http://localhost:3000/version
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Max requests:        50000
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Concurrency level:   1000
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Agent:               none
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO 
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Completed requests:  50000
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Total errors:        0
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Total time:          43.307403647 s
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Requests per second: 1155
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Total time:          43.307403647 s
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO 
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO Percentage of the requests served within a certain time
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO   50%      538 ms
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO   90%      1373 ms
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO   95%      1527 ms
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO   99%      7267 ms
[Fri Feb 27 2015 19:00:53 GMT+0100 (CET)] INFO  100%      31354 ms (longest request)

The same test but using ab:

$ ab -n 50000 -c 1000 -k http://localhost:3000/version
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Concurrency Level:      1000
Time taken for tests:   22.181 seconds
Complete requests:      50000
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      25450000 bytes
HTML transferred:       3650000 bytes
Requests per second:    2254.14 [#/sec] (mean)
Time per request:       443.629 [ms] (mean)
Time per request:       0.444 [ms] (mean, across all concurrent requests)
Transfer rate:          1120.46 [Kbytes/sec] received


Percentage of the requests served within a certain time (ms)
  50%    119
  66%    138
  75%    158
  80%    180
  90%   1127
  95%   1164
  98%   3133
  99%   3322
 100%  15219 (longest request)

Are such big differences normal for the same test?

Using loadtest without http/https (unexpected way)

Hi! Been having a great time using loadtest. I used it for https, but we have other non-https transport protocols that I wanted to integrate with... So... in a completely unexpected way, I dummy out the request and response node objects with requestGenerator.

I totally don't expect you to support this in any manner, but I wanted to show you how really ...close... your request generator actually is to being transport agnostic, which is really cool.

Here is an ES6 pseudo-code-ish version:

// my values from earlier
const maxRequests       = myMaxRequests;
const concurrency       = myConcurrency;
const requestsPerSecond = myRequestsPerSecond;

// load test without using http or https, but rather a different  
// transport protocol.
// I use a dummy return request object as well as a dummy response
// object to accomplish this.
loadTest.loadTest({
  url: 'http://bunk.com',
  maxRequests       : maxRequests,
  concurrency       : concurrency,
  agentKeepAlive    : true,
  requestsPerSecond : requestsPerSecond,
  requestGenerator  : function (params, options, client, callback) {
    return {
      on    : () => {},
      write : () => {},
      end   : () => {
        callback({
          setEncoding : () => {},
          on          : function (signal, onCallback) {
            if (signal === 'end') {
              someNonHttpRequest() // <-- There be the GOLD, my non http promise
              .then((response) => {
                this.statusCode = 200;
                this.body       = response;
                onCallback();
              }).catch((errors) => {
                this.statusCode = errors.statusCode;
                this.body       = errors.body;
                onCallback();
              });
            }
          }
        });
      }
    };
  }
}, (error, result) => {
  //...deal with error and results normally
});

Post data not sent

I'm trying to test a very simple API that has just one method and two input parameters, which need to be passed as POST.

I use this script to generate the post data:

module.exports = function(requestID) {
    // var randomName = ...
    // var randomMail = ...
    return {name: randomName,
            mail: randomMail};
};

I'm now calling the API test from the command line:

loadtest -n 100 -c 10 --rps 100 -H X-Requested-With:API -T multipart/form-data -p postcontent.js http://localhost/API/send/

My PHP API logs the request using

ob_start();
echo "request header\n";
print_r(getallheaders());
echo "\n\nrequest\n";
print_r($_REQUEST);
echo "\n\npost\n";
print_r($_POST);
echo "\n\nget\n";
print_r($_GET);
echo "\n\nserver\n";
print_r($_SERVER);

fwrite($log_file, ob_get_clean());

The dumped request is definitely not what I expect:

request header
Array
(
    [host] => localhost
    [user-agent] => loadtest/1.2.11
    [accept] => */*
    [x-requested-with] => API
    [Content-Type] => multipart/form-data
    [Content-Length] => 52
    [Connection] => close
)

request
Array
(
)

post
Array
(
)

get
Array
(
)

server
Array
(
    [HTTP_HOST] => localhost
    [HTTP_USER_AGENT] => loadtest/1.2.11
    [HTTP_ACCEPT] => */*
    [HTTP_X_REQUESTED_WITH] => API
    [CONTENT_TYPE] => multipart/form-data
    [CONTENT_LENGTH] => 52
    [HTTP_CONNECTION] => close
    [PATH] => /usr/bin:/bin:/usr/sbin:/sbin
    [SERVER_SIGNATURE] => 
    [SERVER_SOFTWARE] => Apache/2.4.10 (Unix) PHP/5.5.20 OpenSSL/0.9.8zd
    [SERVER_NAME] => localhost
    [SERVER_ADDR] => 127.0.0.1
    [SERVER_PORT] => 80
    [REMOTE_ADDR] => 127.0.0.1
    [REQUEST_SCHEME] => http
    [CONTEXT_PREFIX] => 
    [GATEWAY_INTERFACE] => CGI/1.1
    [SERVER_PROTOCOL] => HTTP/1.1
    [REQUEST_METHOD] => POST
    [QUERY_STRING] => 
    [argv] => Array
        (
        )

    [argc] => 0
)

Missing `testing` dependency

I got this when I tried running the loadtest on a fresh install today:

Error: Cannot find module 'testing'
  at Function.Module._resolveFilename (module.js:338:15)
  at Function.Module._load (module.js:280:25)
  at Module.require (module.js:364:17)
  at require (module.js:380:17)
  at Object.<anonymous> (/stress/node_modules/loadtest/lib/loadtest.js:11:15)
  at Module._compile (module.js:456:26)
  at Object.Module._extensions..js (module.js:474:10)
  at Module.load (/usr/lib/node_modules/coffee-script/lib/coffee-script/register.js:45:36)
  at Function.Module._load (module.js:312:12)
  at Module.require (module.js:364:17)
  at require (module.js:380:17)
  at Object.<anonymous> (/stress/node_modules/loadtest/index.js:10:16)
  at Module._compile (module.js:456:26)
  at Object.Module._extensions..js (module.js:474:10)
  at Module.load (/usr/lib/node_modules/coffee-script/lib/coffee-script/register.js:45:36)
  at Function.Module._load (module.js:312:12)
  at Module.require (module.js:364:17)
  at require (module.js:380:17)
  at Object.<anonymous> (/stress/stress.coffee:35:12)
  at Object.<anonymous> (/stress/stress.coffee:1:1)
  at Module._compile (module.js:456:26)

The problem is gone if I add testing package to dependencies in package.json.

http/https modules expect headers to be an object, not an array

When trying to use the loadtest module to test an API, I was getting 400 error codes. Digging into the http/https core modules, I noticed that they seem to expect the headers to be an object, not an array. Yet it looks like loadtest is intentionally passing the headers as an array (and converting them to one if they're an object). Am I reading this correctly?

Test case:

var loadtest = require('loadtest')

var headers = {
    'Authorization': 'Token mytoken'
}
var options = {
    url: 'https://twitter.com',
    method: 'GET',
    headers: headers,
    maxRequests: 1
}
loadtest.loadTest(options, function(error, result) {
    if (error) {
        return console.error('Got an error: %s', error);
    }
    console.log('Done runnning tests.');
    console.log(result);
});

authentication

I am trying to figure out how to send a basic auth header:

Authorization: Basic xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==

So I need to use -H to send a header; however, -H takes header:value, whereas the Authorization header is
Authorization: Basic xxxxxxxxxxxxxxxxxxxxxxxxxx====, i.e. there are spaces in it.

I tried this

loadtest -n 10 -c 1 --rps 1 -H Content-Type:application/json Authorization:'Basic xxxxxxxxxxxxxxxxxxxxxxxxxxxx==' -p dynamic.js http://127.0.0.1:7357

but it doesn't work. Any ideas? Thanks.
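For reference, the Basic header value itself can be computed like this (credentials below are placeholders). Since the value contains a space, the whole header generally has to be quoted as a single shell argument:

```javascript
// Sketch: compute a Basic auth header value; credentials are placeholders.
const credentials = 'user:secret';
const headerValue = 'Basic ' + Buffer.from(credentials).toString('base64');
// The space in the value is why the header must be quoted on the command
// line, e.g.: -H 'Authorization:Basic dXNlcjpzZWNyZXQ='
```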

maxLatencyMs not in results

I was wondering why maxLatencyMs is not in the results object (and why the percentiles only provide the 50, 90, 95 and 99 values).

Could this be added easily?

Perhaps wrong parsing of -- command line arguments

Hi,

when trying to use --agent and --keepalive I noticed that the usage is printed instead, as if they weren't recognized (I assume the same is true for the other "--" options). I'm on Windows, so I guess it might be part of the problem.

To get them to work I have to use them like this: --keepalive true

The other thing I noticed is that even when they are used the output always shows agent: none.

Always getting 400 errors when called from Mac OSX Terminal bash

This call seems to always fail with a 400 for some reason:

$ loadtest -n 1 -c 1 http://api-guide.nextguide.tv/api/trailer_ids.json?tms_id=MV000526280000
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Requests: 0 (0%), requests per second: 0, mean latency: 0 ms
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Target URL:          http://api-guide.nextguide.tv/api/trailer_ids.json?tms_id=MV000526280000
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Max requests:        1
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Concurrency level:   1
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Agent:               none
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Completed requests:  1
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Total errors:        1
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Total time:          0.204526559 s
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Requests per second: 5
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Total time:          0.204526559 s
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO Percentage of the requests served within a certain time
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO   50%      200 ms
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO   90%      200 ms
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO   95%      200 ms
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO   99%      200 ms
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO  100%      200 ms (longest request)
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO  100%      200 ms (longest request)
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO
[Wed Mar 12 2014 15:13:26 GMT-0400 (EDT)] INFO 400: 1 errors

Multithreaded Support and 400 errors

It would be great if we had one built in. Also, the result shows all requests returning 400 errors when I run the test from the API (it works fine with the command line). Why is this?

{ totalRequests: 5975,
  totalErrors: 5975,
  totalTimeSeconds: 5.014089503999999,
  rps: 1192,
  meanLatencyMs: 10,
  percentiles: { '50': 7, '90': 9, '95': 10, '99': 19 },
  errorCodes: { '400': 5975 } }
Tests run successfully

-R requestGeneratorModule.js does not work

It appears to be only partially implemented, and the example doesn't work. It seems it would be easiest to get it working using the library, and to provide an example of that.

Proxy support

I'm looking to load test a proxy service that I've built.

Is there any support for routing loadtest requests via a proxy? Any plans for this or thoughts?

Response object per request

Is there a way to view the response object (status code / payload / etc.) per request? I know there is the statusCallback function, but that only displays the current stats, not the response object.
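One hedged sketch collects per-request data through the statusCallback mentioned above. The (error, result, latency) signature and the fields on result (statusCode, body) are assumptions that have varied between loadtest versions, so check the version in use:

```javascript
// Hedged sketch: gather per-request response data via statusCallback.
// The callback signature (error, result, latency) and the result fields
// (statusCode, body) are assumptions; they differ between loadtest versions.
const perRequest = [];

function statusCallback(error, result, latency) {
  if (error) return; // failed requests carry no result object here
  perRequest.push({ status: result.statusCode, body: result.body, latency });
}

// would be passed in the options object: { url: ..., statusCallback, ... }
```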

Does Your Module Support Multi-Part/FormData

First of all, a bunch of thanks to @alexfernandez for developing this module!

Secondly, can I use multipart/form-data instead of application/x-www-form-urlencoded?

If so, how can I insert a file in the body?

Or does this module not support multipart/form-data?

Output in json format

Similar to what we have done at performance, add an option to output the statistics in JSON format, and ideally to a file instead of stdout.

Test output not consistently shown

The test results are not consistently shown after running a loadtest. I am able to reproduce on multiple computers and multiple platforms (Windows 7 and Ubuntu 14).

user@ubuntu:~/nodetest/node$ loadtest -c 8 -t 40 --rps 200 http://127.0.0.1:80/api/hello/world.json
[Mon Jan 12 2015 09:52:54 GMT-0500 (EST)] INFO Requests: 0, requests per second: 0, mean latency: 0 ms
[Mon Jan 12 2015 09:52:59 GMT-0500 (EST)] INFO Requests: 885, requests per second: 177, mean latency: 0 ms
[Mon Jan 12 2015 09:53:04 GMT-0500 (EST)] INFO Requests: 1885, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:09 GMT-0500 (EST)] INFO Requests: 2885, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:14 GMT-0500 (EST)] INFO Requests: 3885, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:19 GMT-0500 (EST)] INFO Requests: 4885, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:24 GMT-0500 (EST)] INFO Requests: 5885, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:29 GMT-0500 (EST)] INFO Requests: 6885, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:34 GMT-0500 (EST)] INFO Requests: 7885, requests per second: 200, mean latency: 0 ms
user@ubuntu:~/nodetest/node$ loadtest -c 8 -t 40 --rps 200 http://127.0.0.1:80/api/hello/world.json
[Mon Jan 12 2015 09:53:41 GMT-0500 (EST)] INFO Requests: 0, requests per second: 0, mean latency: 0 ms
[Mon Jan 12 2015 09:53:46 GMT-0500 (EST)] INFO Requests: 890, requests per second: 178, mean latency: 0 ms
[Mon Jan 12 2015 09:53:51 GMT-0500 (EST)] INFO Requests: 1890, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:53:56 GMT-0500 (EST)] INFO Requests: 2890, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:01 GMT-0500 (EST)] INFO Requests: 3890, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:06 GMT-0500 (EST)] INFO Requests: 4890, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:11 GMT-0500 (EST)] INFO Requests: 5890, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:16 GMT-0500 (EST)] INFO Requests: 6890, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:21 GMT-0500 (EST)] INFO Requests: 7890, requests per second: 200, mean latency: 0 ms
user@ubuntu:~/nodetest/node$ loadtest -c 8 -t 40 --rps 200 http://127.0.0.1:80/api/hello/world.json
[Mon Jan 12 2015 09:54:28 GMT-0500 (EST)] INFO Requests: 0, requests per second: 0, mean latency: 0 ms
[Mon Jan 12 2015 09:54:33 GMT-0500 (EST)] INFO Requests: 894, requests per second: 179, mean latency: 0 ms
[Mon Jan 12 2015 09:54:38 GMT-0500 (EST)] INFO Requests: 1891, requests per second: 199, mean latency: 0 ms
[Mon Jan 12 2015 09:54:43 GMT-0500 (EST)] INFO Requests: 2894, requests per second: 201, mean latency: 0 ms
[Mon Jan 12 2015 09:54:48 GMT-0500 (EST)] INFO Requests: 3893, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:53 GMT-0500 (EST)] INFO Requests: 4892, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:54:58 GMT-0500 (EST)] INFO Requests: 5893, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:55:03 GMT-0500 (EST)] INFO Requests: 6894, requests per second: 200, mean latency: 0 ms
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Target URL:          http://127.0.0.1:80/api/hello/world.json
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Max time (s):        40
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Concurrency level:   8
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Agent:               none
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Requests per second: 200
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Completed requests:  7892
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Total errors:        0
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Total time:          40.003793234999996 s
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Requests per second: 197
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Total time:          40.003793234999996 s
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO Percentage of the requests served within a certain time
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO   50%      1 ms
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO   90%      2 ms
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO   95%      2 ms
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO   99%      3 ms
[Mon Jan 12 2015 09:55:08 GMT-0500 (EST)] INFO  100%      17 ms (longest request)

I'm not able to reproduce this issue consistently, however, as shown above. The last output is missing the "Requests: X, requests per second: Y, mean latency: Z ms" line, so my guess is that loadtest is killed before it has time to output the results. This seems to happen less frequently with tests that take longer to run, which is further evidence for the hypothesis above.

Post request file read is not parsing the JSON

When I pass the command
loadtest [URL] -n 100 -c 4 -p "post.json" -T "application/json"
where post.json has
{
"a" : 1
}
the POST request that is sent is not the parsed JSON but an array of the ASCII codes of the characters in the JSON file.

This is not the case when the API is used. In the API call, when the body is passed as JSON, the request is sent properly.
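For what it's worth, the reported symptom (an array of ASCII codes) matches what happens when a file read without an encoding yields a Buffer and that Buffer is serialized to JSON without decoding it first. A small sketch of the effect:

```javascript
// A file read without an encoding yields a Buffer, and a Buffer serialized
// to JSON becomes an array of byte values instead of the original text.
const raw = Buffer.from('{"a": 1}');   // the shape fs.readFileSync(path) returns
console.log(JSON.stringify(raw));      // {"type":"Buffer","data":[123,34,...]}

// Decoding first, then parsing, recovers the intended object:
const text = raw.toString('utf8');
const parsed = JSON.parse(text);       // { a: 1 }
```

This only illustrates the symptom; whether loadtest's -p path actually skips the decode step is for the maintainers to confirm.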

Accept Header

Hey, I was wondering if we could define the Accept header in the options object? I need to be able to use the acceptHeader argument, because otherwise I get this problem:
Cannot call method 'indexOf' of undefined
Thanks :)

native module microtime

In node core there is a high-resolution timer (process.hrtime) that can be used instead of the microtime native module.
This would be very helpful for portability between platforms.

ECONNRESET with concurrency > 5

If you try load testing the same host with concurrency > 5, most connections may fail with ECONNRESET. This is caused by the default Node.js HTTP agent limit of 5 concurrent connections per host. More details here: http://stackoverflow.com/questions/12060869/why-is-node-js-only-processing-six-requests-at-a-time/12061013#12061013

To avoid this, the limit should be raised in the main module, where the http and https modules are imported for the first time, e.g.:

var http = require('http');
http.globalAgent.maxSockets = 20;
var https = require('https');
https.globalAgent.maxSockets = 20;

URL generator

So there's a request generator, but no URL generator.
In REST APIs the common practice is to put params in the endpoint itself (like /resources/resourceID, where resourceID is different for every single resource). The lack of a URL generator makes it impossible to test REST servers.
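Until a URL generator exists, one hedged workaround is to rotate the path from inside a requestGenerator (the resource ids below are placeholders):

```javascript
// Workaround sketch: vary the request path per request, since there is no
// dedicated URL generator. The resource ids are placeholders.
const ids = ['a1', 'a2', 'a3'];
let next = 0;

function pathFor() {
  const id = ids[next % ids.length];
  next += 1;
  return '/resources/' + id;
}

// inside a requestGenerator, before creating the request:
//   options.path = pathFor();
```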

Process not finishing

Reported by @jgato: "After some hard tests (even using command line) when everything is finished, the process is still waiting before returning the prompt. Like if it was waiting for closing connections. But all the results have been showed." Apparently the process does not ever end.

Can you try cloning the repo and running npm test to see if it ends in your setup?

RangeError

Hi, I'm a noob to load testing, but I copied the sample Hello World from the node home page, fired up the server, and then ran node loadtest --rps 2000 http://localhost:1337 (I didn't install it globally).

I got the result:

[Sun Feb 02 2014 12:10:19 GMT+0000 (GMT)] INFO Requests: 0, requests per second: 0, mean latency: 0 ms
[Sun Feb 02 2014 12:10:24 GMT+0000 (GMT)] INFO Requests: 9170, requests per second: 1834, mean latency: 30 ms
[Sun Feb 02 2014 12:10:24 GMT+0000 (GMT)] INFO Errors: 1, accumulated errors: 1, 0% of total requests
[Sun Feb 02 2014 12:10:36 GMT+0000 (GMT)] INFO Requests: 18304, requests per second: 759, mean latency: 520 ms
[Sun Feb 02 2014 12:10:36 GMT+0000 (GMT)] INFO Errors: 1945, accumulated errors: 1946, 10.6% of total requests
[Sun Feb 02 2014 12:10:36 GMT+0000 (GMT)] INFO Requests: 18304, requests per second: 0, mean latency: 0 ms
[Sun Feb 02 2014 12:10:36 GMT+0000 (GMT)] INFO Errors: 0, accumulated errors: 1946, 10.6% of total requests

net.js:0
(null)

RangeError: Maximum call stack size exceeded

Is it supposed to error like that? It also killed the Hello World server:

Server running at http://127.0.0.1:1337/
Assertion failed: (fd > -1), function uv__close, file ../deps/uv/src/unix/core.c, line 413.
Abort trap: 6

Need to specify header host info explicitly for api call for https

For the loadTest API, I need to set this in the options:

headers: {
    "host": "<hostname>:<port>"
}

explicitly for https URLs; otherwise it fails.

However, the CLI passes the host information in the header without it being specified explicitly; I checked this using debug mode. Without this information the API call fails with a 400.
