Comments (9)
Comment from rohitcav
Also can we get a report in html/excel/word format?
My reply to this:
I had planned to export to a CSV/TSV, which would be easy to import into Excel (issue #17), but I'm in two minds as to whether I should have intensive reporting functionality within the GUI or break it out into a separate tool, as LoadRunner does with LR Analysis. JMeter also seems to take this approach.
FYI: all the results are stored in an SQLite3 database in the results directory, and the agents table gives you the agent monitoring data. So if you are comfortable with SQL you can easily create your own custom reports from that; this will also be the data source if I decide to go with an external reporting tool.
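Since the results live in a standard SQLite3 file, a custom report can be a few lines of SQL run through Python's built-in sqlite3 module. This is only a sketch: the table and column names below are hypothetical examples, so run `.schema` against your own results database to find the real ones, and connect to that file instead of `:memory:`.

```python
import sqlite3

# Sketch only: rfswarm writes its results to an SQLite3 file in the
# results directory. The table/column names below are hypothetical --
# check ".schema" on your own file and connect to it instead of ":memory:".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Results (result_name TEXT, elapsed_time REAL)")
conn.executemany(
    "INSERT INTO Results VALUES (?, ?)",
    [("Login", 1.2), ("Login", 1.5), ("Search", 0.8)],
)

# Custom report: iteration count and average response time per result name
rows = conn.execute(
    "SELECT result_name, COUNT(*), AVG(elapsed_time) "
    "FROM Results GROUP BY result_name ORDER BY result_name"
).fetchall()
for name, count, avg in rows:
    print(f"{name}: {count} iterations, avg {avg:.2f}s")
```

The same GROUP BY query could just as easily feed a GUI table or an external reporting tool, which is the point of keeping the raw data in SQLite.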
from rfswarm.
At the moment I have functionality built to generate 3 CSV files:
- Summary data (as displayed in run screen)
- Raw Data (result data behind the summary)
- Agent Data
With these 3 files you should have everything needed for reporting test results:
- you can import these files into Excel easily
- in Excel, the Summary data can be nicely formatted as a table
- in Excel, you can filter the Raw Data for drill-down information
- in Excel, you can create graphs from the Raw Data
- in Excel, you can create graphs from the Agent Data (e.g. running users)
In addition to this, you could also create an ODBC connection from Excel or another application to query the data for creating graphs and tables.
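If setting up an ODBC connection is more trouble than it's worth, the same drill-down can be done on the exported CSV files directly with Python's standard csv module. The column names in this sketch are hypothetical; check the header row of the Summary CSV that rfswarm actually generates.

```python
import csv
import io

# Hypothetical Summary export -- the real column names may differ, so
# check the header row of the CSV file rfswarm generates for you.
summary_csv = io.StringIO(
    "Result Name,Min,Average,Max,Pass,Fail\n"
    "Login,0.8,1.2,2.1,50,0\n"
    "Search,0.3,0.6,1.4,50,2\n"
)

rows = list(csv.DictReader(summary_csv))

# Drill down: which results recorded failures, and which was slowest?
failed = [r["Result Name"] for r in rows if int(r["Fail"]) > 0]
slowest = max(rows, key=lambda r: float(r["Max"]))["Result Name"]
print("Failures in:", failed)
print("Slowest result:", slowest)
```

To use a real export, replace the `io.StringIO` literal with `open("summary.csv", newline="")`.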
I have placeholder code, commented out but ready, for HTML and Word format reports, which I'll leave as-is for now.
from rfswarm.
CSV export functionality merged into branch v0.4.4
from rfswarm.
Hi Dave,
Apologies for the delay in responding; I was out of the office for a couple of weeks.
I executed my script with the latest versions of rfswarm and the agent, and it looks better than the earlier version.
Just one concern: in reporting, the result name is very descriptive and long. Even for logging in to an application, the credentials appear in the result name. I would suggest optimizing the result name and not showing such details.
I can see there have been lots of improvements in a few days. Keep it up!
One more suggestion: can we export the result by clicking on the CSV icon once the test finishes? And can we also see the test end time in the console?
from rfswarm.
Hi rohitcav,
Thanks for the feedback; many of the improvements were a result of our conversations, so your contribution has helped.
Regarding your questions:
Can we export the result by clicking on the CSV icon once the test finishes?
You can click the CSV export at any time. Usually you would wait until the test finished and then do the export, but there is nothing to stop you from doing it earlier. I deliberately chose not to restrict this to after the test finished, because this was one of the issues I have had with LoadRunner over the years: many times I have needed to get the results from a test before the end of the test, usually when the trigger is something beyond your control.
And can we also see the test end time in the console?
I thought I had done that, but when I checked the code, it only happens if you use one of the auto-run modes. I guess I need to implement this for manual runs as well; I logged issue #51 for this.
from rfswarm.
I ran a couple of tests and waited for more than 30 minutes, but the test is still not stopping automatically. However, stats are showing in the table.
from rfswarm.
Can you show me a screenshot of your plan screen?
from rfswarm.
(screenshot of the plan screen attached)
from rfswarm.
OK, and when you run those tests individually from Robot Framework, how long do they take to complete?
Also, at the 30-minute mark, was the number under Robots on the run screen 0 or higher than 0? If the robots count is still above 0, the test hasn't finished ramping down and some test cases are still going.
What should be happening based on this plan:
- in the first 20 seconds, all 4 tests start
- in the following 40 seconds, if any of these tests finish their first iteration they will start a second iteration (the same test repeated), then a 3rd, 4th, 5th, etc.
- after 60 seconds, no new iterations will be started, but any that are running will be allowed to run to their natural completion.
Most test cases take longer than 1 minute, especially if you have added think times. Because rfswarm doesn't know how long your test case will run for, the ramp-down in the plan is estimated based on the ramp-up, and the real ramp-down could be much longer or much shorter. For example, if your test case takes ~1 hour to run then the ramp-down will be ~1 hour (minus the run time and some of the ramp-up); likewise, if your scripts only take 10 minutes and your ramp-up was 1 hour, then the ramp-down will be ~10 minutes.
from rfswarm.