yandex / pandora
A load generator in Go language
License: Other
Current metrics reporting is very ugly.
I propose to use rcrowley/go-metrics with a JSON HTTP reporter.
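A minimal sketch of what a JSON HTTP reporter could look like. To stay self-contained it uses only the standard library; the `Registry` type here is a hypothetical stand-in for the rcrowley/go-metrics registry, which already provides counters, gauges and JSON marshaling helpers:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync"
)

// Registry is a stand-in for the go-metrics registry: it only keeps
// named counters, which is enough to show the reporter shape.
type Registry struct {
	mu       sync.Mutex
	counters map[string]int64
}

func NewRegistry() *Registry { return &Registry{counters: map[string]int64{}} }

func (r *Registry) Inc(name string, delta int64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.counters[name] += delta
}

// Handler reports the current metrics snapshot as JSON over HTTP.
func (r *Registry) Handler(w http.ResponseWriter, _ *http.Request) {
	r.mu.Lock()
	defer r.mu.Unlock()
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(r.counters)
}

func main() {
	reg := NewRegistry()
	reg.Inc("requests", 3)
	srv := httptest.NewServer(http.HandlerFunc(reg.Handler))
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var got map[string]int64
	json.NewDecoder(resp.Body).Decode(&got)
	fmt.Println(got["requests"]) // 3
}
```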
Make an Ubuntu package and upload it to a PPA.
Need to automate compiling and running the Pandora binary and checking its behaviour. It would be great to run this check before every PR in CI.
Proposal: create an acceptance_tests directory with onsi/gomega/gexec driven tests.
I imagine tests that build and start the binary via onsi/gomega/gexec and run it against configs such as testdata/http.yaml.
Use configuration via spf13/viper + mitchellh/mapstructure to:
- parse config into map[string]interface{}
- decode it in extpoints factory functions
Use go-playground/validator for config-parse-time validation.
Also consider making plugin creation as easy as:

package custom_gun

type Config struct {
	Target string `validate:"hostname"`
}

type Gun struct{ config *Config }

var _ pandora.Gun = (*Gun)(nil)

func New(config *Config) *Gun { return &Gun{config} }

func init() { pandora.RegisterGun(New, "my_gun") }

Of course, that requires some reflect-based magic.
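A sketch of the reflect-based magic mentioned above, under the assumption that a constructor of the form `func(*Config) Gun` is registered by name and the registry later allocates the config for it. Names (`RegisterGun`, `NewGun`) follow the draft; a real implementation would also decode the user's config map into the struct (e.g. with mitchellh/mapstructure) and validate it:

```go
package main

import (
	"fmt"
	"reflect"
)

// guns maps plugin names to constructors of the form func(*SomeConfig) SomeGun.
var guns = map[string]reflect.Value{}

// RegisterGun remembers any one-argument constructor by name.
func RegisterGun(constructor interface{}, name string) {
	v := reflect.ValueOf(constructor)
	t := v.Type()
	if t.Kind() != reflect.Func || t.NumIn() != 1 || t.In(0).Kind() != reflect.Ptr {
		panic("constructor must be func(*Config) Gun")
	}
	guns[name] = v
}

// NewGun allocates a zero config of the constructor's parameter type
// and calls the constructor with it.
func NewGun(name string) (interface{}, error) {
	v, ok := guns[name]
	if !ok {
		return nil, fmt.Errorf("no gun registered for %q", name)
	}
	confPtr := reflect.New(v.Type().In(0).Elem()) // *Config, zero value
	return v.Call([]reflect.Value{confPtr})[0].Interface(), nil
}

type Config struct{ Target string }
type Gun struct{ config *Config }

func main() {
	RegisterGun(func(c *Config) *Gun { return &Gun{c} }, "my_gun")
	g, err := NewGun("my_gun")
	fmt.Println(err == nil, g.(*Gun).config != nil) // true true
}
```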
https://godoc.org/golang.org/x/net/http2 - it is still under development, but works fine
Read ammo from the config file and cache it in memory.
Basic idea: generate timestamps with a function and use them to drive the Timer.
An instance takes ammo before taking a schedule token, so that it is ready to shoot as soon as the next token arrives and can operate close to the schedule.
But in the case of individual schedules, extra ammo will be taken.
Proposal: add an IsFinished() bool schedule method, which allows checking whether the schedule is finished before taking ammo.
With a shared schedule some extra ammo can still be consumed at schedule finish anyway, but that is a minor problem: it happens only once, after shooting finishes.
Add a possibility to receive config contents from stdin, like the less UNIX command does.
Error waiting utils promises: 1 error(s) occurred:
* context canceled
2017/06/23 21:11:00 Done
Config:
pools:
  - id: HTTP pool # Pool name
    gun:
      type: http # Gun type
      target: [my-host-here]:80 # Gun target
    ammo:
      type: uri # Ammo format
      file: ./ammo.uri # Ammo file
    result:
      type: phout # Report format (phout is for Yandex.Tank)
      destination: ./http_phout.log # Report file name
    rps: # RPS schedule
      type: periodic # tick periodically
      batch: 1 # one tick per batch
      max: 300 # 300 ticks total
      period: 0.1s # one batch every 0.1 seconds
    startup: # Startup schedule
      type: periodic # start Instances periodically
      batch: 1 # one Instance at a time
      max: 5 # five Instances total
      period: 0.5s # every 0.5 seconds
Ammo:
/
Generate tags automatically based on ammo contents. For example, take the first n URI path elements:
/my/very/deep/page?id=23&param=33 -> _my_very
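A minimal sketch of such a tag generator (the function name `autoTag` is illustrative): drop the query string, take the first n path elements, and join them with underscores:

```go
package main

import (
	"fmt"
	"strings"
)

// autoTag builds a tag from the first n URI path elements,
// dropping the query string: /my/very/deep/page?id=23 -> _my_very (n=2).
func autoTag(uri string, n int) string {
	if i := strings.IndexByte(uri, '?'); i >= 0 {
		uri = uri[:i] // drop the query string
	}
	parts := strings.Split(strings.Trim(uri, "/"), "/")
	if len(parts) > n {
		parts = parts[:n]
	}
	return "_" + strings.Join(parts, "_")
}

func main() {
	fmt.Println(autoTag("/my/very/deep/page?id=23&param=33", 2)) // _my_very
}
```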
Now the HTTP gun allows HTTP/2, but this behavior is too implicit.
Create an explicit HTTP/2 gun, and forbid HTTP/2 for the HTTP gun.
From Aggregator doc:
// If Aggregator can't process reported sample without blocking, it should just throw it away.
// If any reported samples were thrown away, Run should return error describing how many samples
// were thrown away.
We should implement that behaviour in all existing Aggregators.
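The documented contract maps naturally onto a non-blocking channel send. A sketch, with the sample type simplified to int and the dropped counter kept for Run to report later (not the real Aggregator API):

```go
package main

import "fmt"

// Aggregator buffers samples in a channel. Report never blocks:
// when the buffer is full the sample is thrown away and counted,
// so Run can later return an error with the dropped count.
// Single-goroutine sketch; a real one would count drops atomically.
type Aggregator struct {
	samples chan int
	dropped int
}

func NewAggregator(buf int) *Aggregator {
	return &Aggregator{samples: make(chan int, buf)}
}

func (a *Aggregator) Report(s int) {
	select {
	case a.samples <- s:
	default:
		a.dropped++ // thrown away without blocking
	}
}

func main() {
	a := NewAggregator(2)
	for i := 0; i < 5; i++ {
		a.Report(i)
	}
	fmt.Println(a.dropped) // 3
}
```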
Please, just ignore the spaces instead of stopping the test:
Pool failed. Canceling started tasks {"pool": "pool_0", "error": "provider failed: failed to decode ammo at line: 1; data: \"[Accept-Encoding: gzip,deflate,sdch] \": header line should be like '[key: value]\n
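A sketch of a whitespace-tolerant decoder for such header lines (function name `decodeHeaderLine` is illustrative): trim the line before checking the brackets, then trim key and value separately:

```go
package main

import (
	"fmt"
	"strings"
)

// decodeHeaderLine parses an ammo header line like "[key: value]",
// tolerating surrounding whitespace instead of failing the shooting.
func decodeHeaderLine(line string) (key, value string, err error) {
	line = strings.TrimSpace(line)
	if !strings.HasPrefix(line, "[") || !strings.HasSuffix(line, "]") {
		return "", "", fmt.Errorf("header line should be like '[key: value]': %q", line)
	}
	body := line[1 : len(line)-1]
	k, v, ok := strings.Cut(body, ":")
	if !ok {
		return "", "", fmt.Errorf("missing ':' in header line: %q", line)
	}
	return strings.TrimSpace(k), strings.TrimSpace(v), nil
}

func main() {
	k, v, err := decodeHeaderLine("[Accept-Encoding: gzip,deflate,sdch]  ")
	fmt.Println(k, v, err) // Accept-Encoding gzip,deflate,sdch <nil>
}
```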
Now the HTTP gun does a DNS resolve for every connection to the target. The resolve should be done once.
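A resolve-once sketch: cache the first lookup per host behind a mutex. The lookup function is injected so the cache logic is testable; in the gun it would wrap net.LookupHost (names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// cachedResolver resolves each host once and caches the result,
// instead of resolving on every new connection.
type cachedResolver struct {
	lookup func(host string) (string, error)
	mu     sync.Mutex
	cache  map[string]string
}

func newCachedResolver(lookup func(string) (string, error)) *cachedResolver {
	return &cachedResolver{lookup: lookup, cache: map[string]string{}}
}

func (r *cachedResolver) Resolve(host string) (string, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if addr, ok := r.cache[host]; ok {
		return addr, nil
	}
	addr, err := r.lookup(host)
	if err != nil {
		return "", err
	}
	r.cache[host] = addr
	return addr, nil
}

func main() {
	calls := 0
	r := newCachedResolver(func(string) (string, error) { calls++; return "127.0.0.1", nil })
	r.Resolve("my-host")
	r.Resolve("my-host")
	fmt.Println(calls) // 1
}
```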
Let's suggest test cases below:
Suggested so far:
Max HTTP RPS
Max HTTPS RPS
Max concurrent TLS connections
Add composite schedule that sequentially combines other schedules
Try some scripting-language support libraries and implement a scenario gun using one of them.
Implement a flag in config that switches ammo enumeration on/off. Each ammo will have a number after a '#' sign in its tag:
_my_tag#33
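The tag transformation itself is tiny; a sketch (function name and flag plumbing are illustrative):

```go
package main

import "fmt"

// enumerate appends the ammo's sequence number to its tag after '#'
// when the proposed enumeration config flag is on.
func enumerate(tag string, n int, enabled bool) string {
	if !enabled {
		return tag
	}
	return fmt.Sprintf("%s#%d", tag, n)
}

func main() {
	fmt.Println(enumerate("_my_tag", 33, true)) // _my_tag#33
}
```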
Make a parallelism option in config, with a default value of 1.
When parallelism is more than one, instances should call Gun.Shoot in separate goroutines, with at most parallelism calls in flight at once.
That option will be very helpful for the HTTP/2 gun.
The link to the examples is broken.
Also, I got this error when the startup section is configured with the linear type (as suggested in the config):
cli/cli.go:101 Config decode failed {"error": "1 error(s) decoding:\n\n* error decoding 'pools[0].startup': no plugins of type core.Schedule has been registered for name linear"
See the phout format description for the metrics format.
Every plugin implementation can have its own defaults, but the user still needs to set the 'type' field for every plugin.
I propose to add a default type for every plugin.
Update the yandex-tank plugin to support the new features. https://github.com/yandex/yandex-tank
Collect and publish generator status metrics such as parallel request count, ammo loop count, user count and others. Publish every n seconds; n should be configurable.
Every plugin (gun, user, etc) should be able to publish its metrics in a common way.
We can add this feature in #61 if needed
JSON and MessagePack are fast enough and widely supported.
Such modules will be nice for quick implementations.
When performance becomes a problem, it will be easy to replace the reflect-based version with fast generated code.
Would be nice to have configurable options for cipher_suites and SNI in Pandora.
Add a schedule with Pareto-distributed inter-arrival times. Maybe support other distributions too.
See doc for details: http://yandextank.readthedocs.io/en/latest/tutorial.html#request-style
Blocked by #30
Glide is the most powerful package manager.
It supports aliasing packages (e.g. for working with GitHub forks) and many other things.
Get rid of the stdlib logger and use something leveled and fast. I suggest uber-go/zap.
At debug level, logs should include:
Usually, even when using multiple instance pools, you want to use one Aggregator for all of them.
Now the phout aggregator solves this problem in an unclear way: the aggregator config is passed to every place that needs it, but under the hood only one aggregator is created while many Run routines are started. And it seems there are some bugs in this unclear implementation.
I propose adding an explicit mechanism for using one aggregator in multiple pools.
Here is how I think it should look in config:
pools:
  - ...
    aggregator:
      type: master
      # key: some_key # "default" by default, so it can be omitted when only one master is created.
      config:
        type: phout
        ....
  - ...
    aggregator:
      type: slave
      # key: some_key # "default" by default, so it can be omitted when only one master is created.
Master and slaves use one global registry, where they find each other by config key.
Only the master runs the real aggregator background routine, and its context should not be canceled until the master's and all slaves' contexts are canceled. Context values are passed from the master context.
A slave aggregator instance should just block until its context is canceled or the master subroutine finishes.
Implement test cases to check that limiters produce ticks according to the specified schedule.
First line: JSON metadata, with a structure like:
format: binary
date: 20-03-2017 20:17:53.122
type: response_stat
columns:
  - name: ts
    type: float64
  - name: urt
    type: int64
config:
  pools:
    - id:
      ....
Then write the fields as-is in big-endian. One sample per line.
See doc for details: http://yandextank.readthedocs.io/en/latest/tutorial.html#uri-style-uris-in-file
Blocked by #30
Implement reproducible benchmarks that will allow measuring the performance of our components.
Support POST requests (and other types also) in HTTP and SPDY guns.
It's looking good and should become official tool soon.
pkg/errors is more popular, faster, and is logged by zap in a more readable way.
I got some weird random crashes of Pandora compiled with go 1.9.1, with a config such as "rps": {"type": "const", "ops": 15000, "duration": "5m"}, "startup": {"type": "once", "times": 500}
Sometimes it's "fatal error: sweep increased allocation count", and sometimes it's a SIGSEGV.
pandora_ZaeI79.log
It seems that everything works fine when I compile with go 1.8.
Try https://github.com/valyala/fasthttp once we have benchmarks.
We want to use different data sources for Providers, without creating multiple providers with the same decoding logic.
I propose creating a core.DataSource plugin that contains the logic of getting encoded ammo bytes, and using it as a nested plugin in Providers, which take bytes from it and decode them.
Here is some draft:
type DataSource interface {
	// Open opens the data source.
	// The returned source can be a file, socket, string or byte reader, or anything else
	// that implements more than io.Reader, so it is OK to check whether the source is an
	// io.Seeker, and seek if it is.
	// Checking for specific types (*os.File, for example) is highly discouraged - use interfaces.
	Open() (source io.ReadCloser, err error)
}
What DataSources can be implemented:
Pass a core.Deps struct to Aggregator, Provider and Gun as an argument to their Run (Bind) methods.
What to put into core.Deps:
1) zap.Logger
2) some metrics registry. I think something custom based on go-metrics.
3) a registry for what will be logged as the every-second shooting status.
4) afero.Fs? Not really sure.
Document that some fields may be added in future versions.
Blocked by #55
Make HTTP ammo something like:
type HTTP interface {
GetRequest() (*http.Request, SmthElse)
}
That allows easily using different ammo sources with different HTTP-based guns.
The Pandora repo is designed to be used as a library as well as a binary.
But the /vendor folder causes a compilation error when Pandora is used as a library.
I propose to get rid of the /vendor folder, but keep the glide.lock and glide.yaml files and use them to get reproducible builds in CI.