Comments (13)
I'm not so into performance testing, but isn't it dangerous to add a pass/fail verdict on a test based on execution time? That will always be system-dependent, so you might get different results on different systems.
from busted.
Performance tests, like integration tests, typically run nightly since they take longer than unit tests. That also means they usually run on consistent hardware.
This does give us something to think about, though. Maybe we have it benchmark a specific routine whose total execution time is defined as 1 point, and then use points to describe execution time rather than seconds. That should keep results consistent across CPU architectures and Lua implementations.
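As a sketch of that idea (every name here is illustrative, nothing below is part of busted's API): time one fixed reference workload, treat its duration as 1 point, and express every other measurement as a multiple of it.

```lua
-- Sketch of points-based timing normalization (all names here are
-- invented for illustration, not part of busted's API).
local function time_it(fn)
  local start = os.clock()
  fn()
  return os.clock() - start
end

-- Reference routine whose execution time defines 1 point.
local function baseline()
  local sum = 0
  for i = 1, 1000000 do sum = sum + i end
  return sum
end

local one_point = time_it(baseline)

-- Express any measured duration in points instead of seconds.
local function to_points(seconds)
  return seconds / one_point
end

print(("1 point = %.4f seconds on this machine"):format(one_point))
```

The baseline is re-timed per run, so a point means the same thing on a fast box and a slow one; only the points-per-test ratios are comparable across machines, not the raw seconds.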
Maybe we could also generate a log of total points, for tracking system performance over time?
That is, make total points a part of the output.
or points per test, rather
I would rather think that some report (similar to the coverage output) would be best, highlighting the code that takes the most time to run; that's the code you would want to optimize. Then again, optimizing test runs is a sub-optimization, as you should really be optimizing production-like behaviour.
See http://lua-users.org/wiki/ProfilingLuaCode for some existing profilers that might be integrated.
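For illustration, the simplest of those approaches can be built on the stock debug library alone. This is a minimal line-count profiler sketch (not one of the linked profilers): a line hook counts how often each source line executes, then the hottest lines are printed first.

```lua
-- Minimal line-count profiler sketch using Lua's stock debug library.
local counts = {}

local function hook()
  -- Level 2 is the function that triggered the hook.
  local info = debug.getinfo(2, "Sl")
  if info then
    local key = info.short_src .. ":" .. info.currentline
    counts[key] = (counts[key] or 0) + 1
  end
end

debug.sethook(hook, "l")  -- fire on every executed line

-- ... the code under measurement would run here ...
local function busy()
  local s = 0
  for i = 1, 100 do s = s + i end
  return s
end
busy()

debug.sethook()  -- remove the hook before reporting

-- Report the hottest lines first.
local lines = {}
for k, v in pairs(counts) do lines[#lines + 1] = { k, v } end
table.sort(lines, function(a, b) return a[2] > b[2] end)
for _, entry in ipairs(lines) do
  print(entry[1], entry[2])
end
```

Line hooks are slow, which is exactly why counting hits (rather than timing each line) is the usual cheap variant; the wiki profilers take more sophisticated approaches.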
I think a separate report is a better idea. I modified the other feature so that it doesn't specify how the data is output, but just exposes it to the output handlers.
There are two use cases here, I think:
- Performance regression testing: targets production use cases (basically integration tests that are timed)
- Unit benchmarking: targets small units of code
The performance regression tests would run nightly along with the integration tests, and the unit benchmarks would run with the unit tests.
Limiting a test to a certain run time is more useful for async tests than for performance analysis.
I could see myself writing a parser for my performance test output that shoved it all into something like Graphite, so I could then see my application's performance over time and also break it down all the way to the unit level.
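The unit-benchmark case above might look like the sketch below in a busted-style spec. The describe/it shims exist only to make the snippet runnable standalone (busted itself provides them as globals), and the time budget is an invented threshold, not a proposed busted API:

```lua
-- Hypothetical unit benchmark in busted style. The shims below stand in
-- for the globals busted normally provides; the budget is illustrative.
local function describe(name, body) body() end
local function it(name, body) body() end

local elapsed  -- captured so the measurement can be inspected afterwards

describe("table.concat benchmark", function()
  it("stays within its time budget", function()
    local budget = 0.5  -- seconds; would need tuning per target machine
    local parts = {}
    local start = os.clock()
    for i = 1, 10000 do parts[i] = tostring(i) end
    local joined = table.concat(parts, ",")
    elapsed = os.clock() - start
    assert(elapsed < budget,
      ("took %.3fs, budget %.3fs"):format(elapsed, budget))
  end)
end)
```

A fixed seconds threshold like this is exactly the system-dependent verdict questioned earlier in the thread, which is why exposing the raw timing to output handlers (or normalizing to points) is attractive.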
Seems OK to me.
Do you think lanes would be good to use for this? I'm interested in working on this feature.
IIRC @DorianGray had Lanes in mind for this feature.
@cmr We were looking at Lanes, but ran into some problems running it on Windows and OS X some months ago. If you'd like to give it a shot, we're always interested in pull requests.
Just chiming in with my two cents here: I think the most useful thing for performance tests is being able to track the results over time, to see which builds contained performance regressions and by how much. Mono does (or did) a similar thing with nightly performance tests, although their links seem to be broken at the moment.
It was just a table with a row per night and a percentage per cell (green for faster, red for slower), the columns being the performance test cases. I'm not saying that should be the format, but I'm offering it as an example.
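The cell values in a table like that are just relative deltas between two builds; a quick sketch of the arithmetic, with illustrative timings:

```lua
-- Percentage change between last night's and tonight's timing for one
-- test case; negative means faster (green), positive means slower (red).
local function percent_change(previous, current)
  return (current - previous) / previous * 100
end

print(percent_change(2.0, 1.0))  -- build got twice as fast: -50
print(percent_change(1.0, 1.5))  -- build got 50% slower: 50
```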