arturspirin / test_junkie

Highly configurable testing framework for Python

Home Page: https://www.test-junkie.com/

License: MIT License

Python 100.00%
test-automation testing-tools testing-framework test-runner python test-reporting test-analytics test-metrics test-junkie qa

test_junkie's People

Contributors: arturspirin, snyk-bot

test_junkie's Issues

Executing the same suite with different tags: wrong behaviour

  1. Python version python --version - Python 3.10.7
  2. Test Junkie version pip show test_junkie - Version: 0.8a6
  3. Windows 10
  4. Executing a suite:
def order1():
    runner = Runner(suites=[suite_login])
    runner.run(features=[Feature.LOGIN],
               tag_config={"run_on_match_any": [Tag.OPERATIONS_LOGIN]})

def order2():
    runner = Runner(suites=[suite_login, suite])
    runner.run(features=[Feature.LOGIN],
               tag_config={"run_on_match_any": [Tag.OPERATIONS_LOGIN]})

def order_all():
    funcs = [order1, order2]
    for func in funcs:
        try:
            func()
        except ValueError:
            print('TEST')
            break

order_all()

Expected behavior

I expect the runner to be able to run Feature.LOGIN twice, one run after another. This can be very useful if I want to create a suite with different tags and run the same suite multiple times with different tags. This should run my first function, then the second function separately, and so on.

Actual behavior

Right now the results of the first run are correct and real, but the runner only actually executes the suite for the first function; afterwards it merely reports results that match the tags. Example of results:
Result of the order1() (Run 1)
========== Test Junkie finished in 4.17 seconds ==========
[4/4 100.00%] SUCCESS
[0/4 0%] FAIL
[0/4 0%] IGNORE
[0/4 0%] ERROR
[0/4 0%] SKIP
[0/4 0%] CANCEL

[SUCCESS] [4/4 100.00%] [3.97s] test_unity_login.suite_login

Result of the order2() (Run 1)
========== Test Junkie finished in 4.17 seconds ==========
[4/4 100.00%] SUCCESS
[0/4 0%] FAIL
[0/4 0%] IGNORE
[0/4 0%] ERROR
[0/4 0%] SKIP
[0/4 0%] CANCEL

[SUCCESS] [4/4 100.00%] [3.97s] test_unity_login.suite_login

It looks good, but in reality the suite only runs once, while I want it to run twice, one after another. The second result is misleading: the runner considers it SUCCESS because it reuses the results from the previous function's run.


If attaching logs, make sure to remove any personal information, credentials, tokens, keys - anything that you don't
want to become public.

Audit metrics are off on Linux Mint (likely affecting other Linux distros as well)

  1. Python version python --version 2.7 & likely 3.5+ as well but not confirmed
  2. Test Junkie version pip show test_junkie v0.7a8. Likely older versions are also affected.
  3. Platform aka Windows 10 or Linux Mint 18.1 Linux Mint 19.2 Confirmed
  4. Command used, if running via terminal: tj audit owners -s <project_directory>
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Unclear how to reproduce, issue observed in a live project, need to investigate further.

Expected behavior

Accurate metrics are shown to the user

Actual behavior

Metrics are significantly inflated on Linux: suites that have 2 tests should show 2 tests in the audit, but they can show a much higher number, such as 90, in the audit metrics. This also inflates the overall test count by about 30%.
Not reproducible on Windows 10.



Failure to parse Exception Object into string for HTML report

  1. Python version python --version >> 2.7
  2. Test Junkie version pip show test_junkie >> 0.5a2
  3. Platform aka Windows 10 or Linux Mint 18.1 >> all
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Expected behaviour

HTML generation must never fail, regardless of the metrics being processed

Actual behaviour

File "/usr/lib/python2.7/site-packages/test_junkie/runner.py", line 116, in __create_html_report
    reporter.generate_html_report(self.__kwargs.get("html_report"))
  File "/usr/lib/python2.7/site-packages/test_junkie/reporter/reporter.py", line 80, in generate_html_report
    table_data = self.__get_table_data()
  File "/usr/lib/python2.7/site-packages/test_junkie/reporter/reporter.py", line 228, in __get_table_data
    test_metrics[class_param][param]["exceptions"][index] = str(exception)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 15-19: ordinal not in range(128)
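The failing line converts the exception with a bare str(), which on Python 2 invokes the ascii codec on a unicode message. A minimal defensive sketch of the fix direction (illustrative only, not the reporter's actual code):

```python
def safe_str(obj, encoding="utf-8"):
    """Convert to str without tripping over non-ASCII exception messages."""
    try:
        return str(obj)
    except UnicodeEncodeError:
        # Python 2 only: obj carries a unicode message; encode it explicitly
        return unicode(obj).encode(encoding)  # noqa: F821 (Python 2 builtin)

# On Python 3, str() already handles Unicode, so this is a pass-through:
print(safe_str(Exception(u"ordinal not in range: caf\u00e9")))
```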


Bad for loop in add_group_rule

  1. Python version python --version >> 2.7 & 3
  2. Test Junkie version pip show test_junkie >> 0.5a8
  3. Platform aka Windows 10 or Linux Mint 18.1 >> all
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
    The add_group_rule() function needs to iterate over a copy of the suites list in its suite for loop.
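The pitfall the report points to is the classic one of mutating a list while iterating over it, which silently skips elements; iterating over a copy avoids that. A minimal illustration (not the actual add_group_rule code):

```python
suites = ["SuiteA", "SuiteB", "SuiteC"]

# Buggy: removing from the list being iterated silently skips "SuiteB"
visited_buggy = []
for suite in suites:
    visited_buggy.append(suite)
    suites.remove(suite)
print(visited_buggy)  # ['SuiteA', 'SuiteC']

# Fixed: iterate over a copy so every suite is visited
suites = ["SuiteA", "SuiteB", "SuiteC"]
visited = []
for suite in list(suites):
    visited.append(suite)
    suites.remove(suite)
print(visited)  # ['SuiteA', 'SuiteB', 'SuiteC']
```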

Expected behaviour

Actual behaviour



BeforeClass may run for a suite that is supposed to be skipped entirely based on tags or components

  1. Python version python --version - All
  2. Test Junkie version pip show test_junkie - 0.7a6 and prior
  3. Platform aka Windows 10 or Linux Mint 18.1 - All
  4. Command used, if running via terminal - N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

This issue is similar to #20 but reproducible by tagging tests or defining components and then running them using Runner().run(tag_config={...}) or Runner().run(components=[...])

Expected behavior

Only applicable suites/tests and their decorators should be executed.

Actual behavior

BeforeClass(), if any, will get executed for suites that should be ignored unless matching the run config.
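The expected flow can be sketched as a gating check that runs before any suite-level decorator executes; the config keys below mirror the tag_config example from the report, but the function itself is illustrative, not Test Junkie's internals:

```python
def suite_is_eligible(suite_tags, tag_config=None):
    """Decide whether a suite matches the run config before running beforeClass."""
    if tag_config:
        match_any = tag_config.get("run_on_match_any", [])
        if not any(tag in suite_tags for tag in match_any):
            return False
    return True

# beforeClass should only execute when this returns True:
print(suite_is_eligible(["store_management"], tag_config={"run_on_match_any": ["login"]}))  # False
print(suite_is_eligible(["login"], tag_config={"run_on_match_any": ["login"]}))  # True
```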



Allow skipping tests inside test logic

Describe desired feature

Currently, Test Junkie allows skipping tests via a predefined test definition. However, there are use cases in parameterized testing where you may want to run all the parameters but skip the test for one of them. To do this, TJ needs to allow dynamically skipping a test at any time inside the test itself.
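Most frameworks implement dynamic skipping with a dedicated exception type that the runner catches and records as SKIP. A minimal sketch of that mechanism (the names here are hypothetical, not an actual Test Junkie API):

```python
class SkipTest(Exception):
    """Raised inside a test body to mark the current parameter as skipped."""

def run_parameterized(test_func, parameters):
    """Toy runner: record a per-parameter status, treating SkipTest specially."""
    results = {}
    for param in parameters:
        try:
            test_func(param)
            results[param] = "SUCCESS"
        except SkipTest:
            results[param] = "SKIP"
        except Exception:
            results[param] = "ERROR"
    return results

def my_test(parameter):
    if parameter == 2:
        raise SkipTest("not applicable for this parameter")

print(run_parameterized(my_test, [1, 2, 3]))  # {1: 'SUCCESS', 2: 'SKIP', 3: 'SUCCESS'}
```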

html_report from config is not passed into args

  1. Python version python --version
    3.6.8
  2. Test Junkie version pip show test_junkie
    Name: test-junkie
    Version: 0.7a3
    Summary: Modern Testing Framework
    Home-page: https://www.test-junkie.com/
    Author: Artur Spirin
    Author-email: [email protected]
    License: UNKNOWN
    Location: c:\serp_storage.venv\lib\site-packages
    Requires: colorama, configparser, statistics, psutil, appdirs, coverage
    Required-by
  3. Platform aka Windows 10 or Linux Mint 18.1
    Windows 10
  4. Command used, if running via terminal
    tj run
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
Traceback (most recent call last):
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\cli\cli_runner.py", line 271, in run_suites
    quiet=args.quiet)
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\runner.py", line 193, in run
    self.__create_html_report(reporter)
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\runner.py", line 56, in __create_html_report
    reporter.generate_html_report(self.__kwargs.get("html_report"))
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\reporter\html_reporter.py", line 111, in generate_html_report
    with open(write_file, "w+") as output:
TypeError: expected str, bytes or os.PathLike object, not type

Expected behaviour

Actual behaviour



Improve error message in case of bad input for Suite object to the Runner instance

  1. Python version python --version - 2 & 3
  2. Test Junkie version pip show test_junkie - v0.7.a8 and below
  3. Platform aka Windows 10 or Linux Mint 18.1 - All
  4. Command used, if running via terminal - N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
    In case of bad input for the suite objects passed to the Runner instance, Test Junkie throws an internal exception that is cryptic and unclear to the end user.

Expected behavior

Throw a clear, actionable exception for the user

Actual behavior

An internal exception is thrown because an incorrect suite object was passed in.

Traceback (most recent call last):
  File "C:\eclipse\workspace\qatitan\titan\Runner.py", line 202, in start_test_cycle
    if not Settings.is_containerized() else False)
  File "C:\Python27\lib\site-packages\test_junkie\runner.py", line 34, in __init__
    self.__suites = self.__prioritize(suites=suites)
  File "C:\Python27\lib\site-packages\test_junkie\runner.py", line 83, in __prioritize
    priority = suite_object.get_priority()
AttributeError: 'NoneType' object has no attribute 'get_priority'
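One way to produce the requested actionable error is to validate the suite objects up front, before prioritization touches them. A sketch (illustrative names, not the runner's actual code):

```python
class BadSuiteInputError(Exception):
    """Raised when Runner receives something that is not a decorated suite."""

def prioritize(suites):
    # Fail fast with a message that names the offending input
    for suite in suites:
        if not hasattr(suite, "get_priority"):
            raise BadSuiteInputError(
                "Runner(suites=...) expects suite classes decorated with "
                "@Suite(); got {!r} instead".format(suite))
    return sorted(suites, key=lambda s: s.get_priority())

try:
    prioritize([None])
except BadSuiteInputError as error:
    print(error)  # names the bad input instead of an internal AttributeError
```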


Encoding/Decoding issues w/ HTML & Console outputs

  1. Python version python --version 3.6
  2. Test Junkie version pip show test_junkie All
  3. Platform aka Windows 10 or Linux Mint 18.1 All
  4. Command used, if running via terminal N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
    The framework chokes when generating HTML and/or console output if there is a character it can't decode. No code to reproduce yet. It seems related to dynamic value fields, such as the actual result from meta.

Expected behavior

Actual behavior

Traceback (most recent call last):
  File "C:\eclipse\workspace\qatitan/titan/Runner.py", line 256, in start_test_cycle
    runner.run(suite_multithreading_limit=Settings.SUITE_THREAD_LIMIT,
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\site-packages\test_junkie\runner.py", line 180, in run
    Aggregator.present_console_output(aggregator)
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\site-packages\test_junkie\metrics.py", line 338, in present_console_output
    print("\t      |__ run #{num} [{status}] [{runtime:0.2f}s] {trace}"
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\site-packages\colorama\ansitowin32.py", line 41, in write
    self.__convertor.write(text)
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\site-packages\colorama\ansitowin32.py", line 162, in write
    self.write_and_convert(text)
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\site-packages\colorama\ansitowin32.py", line 187, in write_and_convert
    self.write_plain_text(text, cursor, start)
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\site-packages\colorama\ansitowin32.py", line 195, in write_plain_text
    self.wrapped.write(text[start:end])
  File "C:\Users\automation\AppData\Local\Programs\Python\Python38-32\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ue00e' in position 956: character maps to <undefined>


Explicitly cast Test/Suite multithreading args to int

  1. Python version python --version >> Discovered on 2.7 but may also affect Python 3
  2. Test Junkie version pip show test_junkie >> 0.5a3
  3. Platform aka Windows 10 or Linux Mint 18.1 >> All
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
self.__test_limit = kwargs.get("test_multithreading_limit", 1)
if self.__test_limit == 0:
    self.__test_limit = 1

self.__suite_limit = kwargs.get("suite_multithreading_limit", 1)
if self.__suite_limit == 0:
    self.__suite_limit = 1

Expected behaviour

Explicitly cast to int.

Actual behaviour

If the args for test_multithreading_limit & suite_multithreading_limit are passed in as str, I noticed that they still enable multithreading because "1" > 1 evaluates to True on Python 2.
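The suggested cast can be sketched as below (illustrative, not the actual runner code); coercing to int makes "1" behave like 1 and keeps the existing zero-means-one fallback:

```python
def resolve_limit(kwargs, key, default=1):
    """Coerce a threading limit to a positive int; 0 and "0" fall back to 1."""
    limit = int(kwargs.get(key, default))
    return limit if limit > 0 else 1

print(resolve_limit({"test_multithreading_limit": "1"}, "test_multithreading_limit"))  # 1
print(resolve_limit({"suite_multithreading_limit": 0}, "suite_multithreading_limit"))  # 1
```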



(Sporadic) Threading limits may not be honored

  1. Python version python --version - All
  2. Test Junkie version pip show test_junkie - All
  3. Platform aka Windows 10 or Linux Mint 18.1 - All
  4. Command used, if running via terminal - N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Need two suites with the following configuration

from time import sleep

from test_junkie.decorators import test, Suite

from threading_tut.suites.ExampleSuiteB import ExampleSuiteB


@Suite()
class ExampleSuiteA:

    @test(parameters=[1, 2, 3, 4, 5], parallelized_parameters=True, pr=[ExampleSuiteB.example_test_1])
    def example_test_1(self, parameter):
        print(1)
        sleep(2)

and

from time import sleep

from test_junkie.decorators import test, Suite


@Suite()
class ExampleSuiteB:

    @test(parameters=[1, 2, 3, 4, 5], parallelized_parameters=True)
    def example_test_1(self, parameter):
        print(2)
        sleep(2)

Then run it like so

from test_junkie.runner import Runner

from threading_tut.suites.ExampleSuiteA import ExampleSuiteA
from threading_tut.suites.ExampleSuiteB import ExampleSuiteB

runner = Runner(suites=[ExampleSuiteA, ExampleSuiteB])
runner.run(suite_multithreading_limit=2,
           test_multithreading_limit=18)

Expected behavior

Suites take 4 sec to run overall, honoring threading limits and parallel restriction settings

Actual behavior

There is a sporadic behavior where PR and test thread limits may not be honored



[HTML Report] Passing rate shows 0 when some tests have passed

  1. Python version python --version >> All
  2. Test Junkie version pip show test_junkie >> 0.4a1
  3. Command used, if running via terminal >> N/A
  4. Smallest code snippet that can reproduce the issue
    A report generated for any suite that has tests in categories other than SUCCESS will show the passing rate as 0%, even when some tests have passed.
from test_junkie.decorators import Suite, test
from test_junkie.runner import Runner


@Suite()
class Example:

    @test()
    def a(self):
        pass

    @test()
    def b(self):
        assert 1 == 2

runner = Runner([Example], html_report="path/to/report.html")
runner.run()

Expected behaviour

Passing rate should be calculated correctly

Actual behaviour

Passing rate shows 0%

BeforeClass & AfterClass errors don't have a proper traceback

  1. Python version python --version >> 2.7
  2. Test Junkie version pip show test_junkie >> 0.5a2
  3. Platform aka Windows 10 or Linux Mint 18.1 >> all
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
from test_junkie.decorators import Suite, beforeClass


@Suite()
class Example:

    @beforeClass()
    def before_class(self):
        raise Exception("Find me!")

Expected behaviour

Exception is raised with a usable traceback

Actual behaviour

Traceback (most recent call last):
File "E:\Development\test_junkie\test_junkie\runner.py", line 507, in __run_before_class
    class_parameter=class_parameter)
File "E:\Development\test_junkie\test_junkie\runner.py", line 557, in __process_decorator
    raise decorator_error
Exception: Find me!

Based on this traceback it's hard to track down where the issue happened, because the exception is re-raised inside an internal function. Instead, we need to capture traceback.format_exc() when the exception occurs and raise an appropriate exception with the original trace, so it pinpoints the exact line in the code with the issue.
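The suggested fix can be sketched as capturing traceback.format_exc() at the failure point and attaching it to the re-raised error (illustrative, not the runner's actual __process_decorator):

```python
import traceback

def process_decorator(func):
    try:
        func()
    except Exception as error:
        # capture the full trace where the user's code failed, then re-raise
        # with that trace attached so the report can pinpoint the line
        raise Exception("{error}\nOriginal traceback:\n{trace}"
                        .format(error=error, trace=traceback.format_exc()))

def before_class():
    raise Exception("Find me!")

try:
    process_decorator(before_class)
except Exception as error:
    print(error)  # now includes the before_class() frame
```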



Not able to detect methods for parameterized execution

  1. Python version python --version >> All
  2. Test Junkie version pip show test_junkie >> 0.4a1
  3. Command used, if running via terminal >> N/A
  4. Smallest code snippet that can reproduce the issue
from test_junkie.decorators import Suite, test


@Suite()
class Example:

    @test(parameters=ConfigGenerator().create_config)
    def test_query_options(self, parameter):
        ...

Expected behaviour

Test Junkie needs to be able to detect when a function/method is supplied and call it to get the output, which is then used for parameterized execution

Actual behaviour

It can only detect when a plain function is used. If a method of an instance is supplied, it does not work:

Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/runner.py", line 292, in __run_suite
    self.__validate_parameters(suite)
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/runner.py", line 287, in __validate_parameters
    __validate(test.get_parameters(process_functions=True), suite_object, test)
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/runner.py", line 283, in __validate
    if _test is not None else ""))
Exception: Wrong data type used for parameters. Expected: <type 'list'>. Found: <type 'instancemethod'> in Class: <some class>  Test: test_query_options()
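Supporting bound methods as well as plain functions only requires a callable() check before invoking. A sketch of that resolution step (illustrative, not the actual get_parameters implementation):

```python
def resolve_parameters(parameters):
    """Accept a list, a function, or a bound method for test parameters."""
    if callable(parameters):  # covers functions and bound methods alike
        parameters = parameters()
    if not isinstance(parameters, list):
        raise TypeError("parameters must resolve to a list, got {}"
                        .format(type(parameters).__name__))
    return parameters

class ConfigGenerator:
    def create_config(self):
        return [{"option": 1}, {"option": 2}]

print(resolve_parameters(ConfigGenerator().create_config))  # [{'option': 1}, {'option': 2}]
```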

Console output is showing the latest test status for all recorded runs

  1. Python version python --version - All
  2. Test Junkie version pip show test_junkie - All
  3. Platform aka Windows 10 or Linux Mint 18.1 - All
  4. Command used, if running via terminal - N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
from test_junkie.decorators import Suite, test, beforeClass
from tests.junkie_suites.TestListener import TestListener


@Suite(retry=2,
       listener=TestListener,
       priority=2, feature="Store", owner="Mike", parameters=[1, 2])
class NewProductsSuite:

    ATT = 1

    @beforeClass()
    def before_class1(self):
        if NewProductsSuite.ATT <= 1:
            NewProductsSuite.ATT += 1
            raise Exception("Derp")

    @test(component="Admin", tags=["store_management"], parameters=[10, 20], parallelized_parameters=True)
    def edit_product(self, parameter, suite_parameter):
        print("edit_product")
        if parameter == 10:
            raise Exception("yo")

    @test(component="Admin", tags=["store_management"])
    def archive_product(self):
        print("archive_product")


if "__main__" == __name__:
    from test_junkie.runner import Runner
    from test_junkie.debugger import LogJunkie

    LogJunkie.enable_logging(10)
    runner = Runner([NewProductsSuite])
    runner.run(test_multithreading_limit=4)

Expected behavior

Console output for tests that fail due to exception in before class should show IGNORE for status while tests that failed due to an error in the test should show ERROR. Tests that failed initially and passed on the second run should show ERROR and then SUCCESS.

Actual behavior

Console output shows the latest status for all tests with the same parameters, i.e. if a test failed and then passed, the console says SUCCESS for the status even next to a traceback.



QUESTION: use case of Test Junkie on a Flask app without UI

I am a computer science graduate student and I have chosen to study and write a report on your testing framework 'test-junkie' as part of my coursework on software quality. So far I have started to go through the documentation and tutorials on the website. However, what I observed from the tutorial examples is that you are using the framework to test components of your website. I am fairly new to Python and I have no experience testing UI components of a website.

I am curious to understand how I can use test-junkie on a Flask-based app in Python. I am a little confused about whether the test-junkie library needs to be imported directly in the main application code, or whether I import it inside my unit tests, or whether we do not write unit tests at all and use Test Junkie as a substitute for them.

I would really appreciate your insights to help me understand the use-case of test-junkie as it would really help me with my project.

My apologies for opening a conversation in a GitHub issue rather than by email, but I did not find your email address anywhere on the website.

Metrics are not updated on subclassed before_test() & after_test() Rules call

  1. Python version python --version: 2 & 3
  2. Test Junkie version pip show test_junkie : 0.6a3 & 0.6a4
  3. Platform aka Windows 10 or Linux Mint 18.1 : All
  4. Command used, if running via terminal: N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Expected behaviour

When subclassed after_test() from the Rules fails, metrics should be properly updated.

Actual behaviour

Metrics are not updated for the test that caused the exception, which later breaks report creation:

  File "/usr/local/lib/python2.7/dist-packages/test_junkie/runner.py", line 197, in run
    or self.__processor.suite_multithreading())
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/reporter/html_reporter.py", line 56, in __init__
    self.average_runtime = aggregator.get_average_test_runtime()
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/metrics.py", line 329, in get_average_test_runtime
    samples.append(mean(param_data["performance"]))
  File "/usr/local/lib/python2.7/dist-packages/statistics/__init__.py", line 294, in mean
    raise StatisticsError(u"mean requires at least one data point")
StatisticsError: mean requires at least one data point

There should be an explicit metrics update call in the runner at line 420:

if not test.skip_after_test_rule():
    suite.get_rules().after_test(test=copy.deepcopy(test))

Long term, this needs a generic call that handles all the metrics for the Rules.
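The "generic call" direction could look roughly like a wrapper that records a metrics sample whenever a Rules hook fails, so aggregation never sees an empty performance list. A sketch with illustrative names, not Test Junkie's internals:

```python
import copy

def run_rule(rules_hook, test, metrics):
    """Run a Rules hook; on failure, still record a sample for the test."""
    try:
        rules_hook(test=copy.deepcopy(test))
        return True
    except Exception as error:
        # record the failure so report aggregation still has a data point
        metrics.append({"status": "ERROR", "performance": 0.0,
                        "error": str(error)})
        return False

samples = []
def failing_after_test(test):
    raise Exception("after_test blew up")

print(run_rule(failing_after_test, test="edit_product", metrics=samples))  # False
print(samples[0]["status"])  # ERROR
```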



Hi Artur, please help me understand where in the Base class we are opening the browser and what contents we have to pass to Settings.py to invoke https://www.test-junkie.com/

Describe desired feature

Any existing examples of such a feature? - Hi Artur, please help me understand where in the Base class we are opening the browser and what contents we have to pass to Settings.py to invoke https://www.test-junkie.com/. Currently it does not open the browser and run the tests.

Thanks,
Ashish

Baseclass.py

class BasePage:

    def __init__(self, domain, directory, title):
        """
        This class will be inherited by all of the Page Objects
        :param domain: STRING, base domain of the website aka "https://test-junkie.com";
                       base domain & expected URL will be used to open this page
        :param directory: STRING, expected URL of the page that is inheriting this class aka
                          "/documentation/" or "/get-started/"
        :param title: STRING, expected title of the page that is inheriting this class
        """
        self.__title = title
        self.__directory = directory
        self.__domain = domain
        self.__expected_url = "{domain}{directory}".format(domain=domain, directory=directory)

    @property
    def expected_directory(self):
        return self.__directory

    @property
    def expected_domain(self):
        return self.__domain

    @property
    def expected_title(self):
        return self.__title

    @property
    def expected_url(self):
        return self.__expected_url

    @staticmethod
    def get_actual_title():
        return Browser.get_driver().title

    @staticmethod
    def get_actual_url():
        return Browser.get_driver().current_url

    def open(self, **kwargs):
        """
        This method will open the page which inherited BasePage
        :return: self (Page Object which inherited BasePage will be returned)
        """
        Browser.get_driver().get(self.expected_url)
        return self

BeforeClass may run for a suite that is supposed to be skipped entirely

  1. Python version python --version - All
  2. Test Junkie version pip show test_junkie - 0.7a5 and prior
  3. Platform aka Windows 10 or Linux Mint 18.1 - All
  4. Command used, if running via terminal - N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Steps to reproduce:

  1. Create two test suites with some tests and a @BeforeClass() decorator in both.
  2. Let's say we only want to run the example test from test suite B. Thus we initiate the run via
runner = Runner([TestSuiteA, TestSuiteB])
aggregator = runner.run(tests=[TestSuiteB.example])

or

runner = Runner([TestSuiteA, TestSuiteB])
aggregator = runner.run(tests=["example"])

Note that we pass in both of the suites we created.

Expected behavior

TestSuiteA should be skipped, this includes the BeforeClass() that was defined in scope of TestSuiteA.

Actual behavior

BeforeClass() gets executed for the suite that later gets correctly evaluated to be skipped.



Poor Traceback on bad parameters

  1. Python version python --version >> 2 & 3
  2. Test Junkie version pip show test_junkie >> 0.4a4
  3. Platform aka Windows 10 or Linux Mint 18.1 >> N/A
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
from test_junkie.decorators import Suite, test
from test_junkie.runner import Runner


@Suite(owner={"name": "John"})
class ExampleSuite:

    @test()
    def test_help(self):
        pass

runner = Runner([ExampleSuite], html_report="path/to/file/report.html", monitor_resources=True)
runner.run()

Expected behaviour

Error is raised that helps to track down where incorrect input was set

Actual behaviour

Currently you can't track where the issue is based on the traceback:

Traceback (most recent call last):
  File "E:/Development/test_junkie/playground/run.py", line 19, in <module>
    aggregator = runner.run(test_multithreading_limit=10)
  File "E:\Development\test_junkie\test_junkie\runner.py", line 256, in run
    aggregator=aggregator)
  File "E:\Development\test_junkie\test_junkie\reporter\reporter.py", line 22, in __init__
    self.owners = aggregator.get_report_by_owner()
  File "E:\Development\test_junkie\test_junkie\metrics.py", line 142, in get_report_by_owner
    if owner not in report:
TypeError: unhashable type: 'dict'

Deepcopy prevents HTML report generation on Python 2.7 when there are exceptions inheriting from the base Exception object

  1. Python version python --version >> 2.7
  2. Test Junkie version pip show test_junkie >> 0.5a2
  3. Platform aka Windows 10 or Linux Mint 18.1 >> All
  4. Command used, if running via terminal
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Expected behaviour

HTML generation must never fail

Actual behaviour

Traceback (most recent call last):
  File "E:/Development/test_junkie/playground/run.py", line 20, in <module>
    aggregator = runner.run(test_multithreading_limit=10)
  File "E:\Development\test_junkie\test_junkie\runner.py", line 250, in run
    self.__create_html_report(reporter)
  File "E:\Development\test_junkie\test_junkie\runner.py", line 117, in __create_html_report
    reporter.generate_html_report(self.__kwargs.get("html_report"))
  File "E:\Development\test_junkie\test_junkie\reporter\reporter.py", line 80, in generate_html_report
    table_data = self.__get_table_data()
  File "E:\Development\test_junkie\test_junkie\reporter\reporter.py", line 215, in __get_table_data
    test_metrics = copy.deepcopy(test.metrics.get_metrics())
  File "C:\Python27\lib\copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "C:\Python27\lib\copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python27\lib\copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "C:\Python27\lib\copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python27\lib\copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "C:\Python27\lib\copy.py", line 257, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Python27\lib\copy.py", line 163, in deepcopy
    y = copier(x, memo)
  File "C:\Python27\lib\copy.py", line 230, in _deepcopy_list
    y.append(deepcopy(a, memo))
  File "C:\Python27\lib\copy.py", line 190, in deepcopy
    y = _reconstruct(x, rv, 1, memo)
  File "C:\Python27\lib\copy.py", line 329, in _reconstruct
    y = callable(*args)
TypeError: __init__() takes exactly 3 arguments (2 given)

It seems that on Python 2.7, deepcopy cannot reliably be used on custom exception objects that inherit from the base Exception object.
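The failure mode is reproducible outside the framework: deepcopy reconstructs an exception by calling its class with exception.args, which only contains what was forwarded to the base Exception constructor. A subclass with extra required __init__ arguments therefore cannot be reconstructed (this also reproduces on Python 3, not just 2.7):

```python
import copy

class FragileError(Exception):
    def __init__(self, message, code):
        super(FragileError, self).__init__(message)  # args == (message,)
        self.code = code

try:
    copy.deepcopy(FragileError("boom", 42))
except TypeError as error:
    # reconstruction calls FragileError("boom") and misses 'code'
    print("deepcopy failed:", error)

# Forwarding all constructor arguments to Exception makes the
# subclass round-trip through deepcopy cleanly:
class RobustError(Exception):
    def __init__(self, message, code):
        super(RobustError, self).__init__(message, code)  # args == (message, code)
        self.code = code

clone = copy.deepcopy(RobustError("boom", 42))
print(clone.code)  # 42
```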


Ignored tests were not honoring suite level retries after an error in BeforeClass decorator

  1. Python version python --version - All
  2. Test Junkie version pip show test_junkie - 0.7a5+
  3. Platform aka Windows 10 or Linux Mint 18.1 - All
  4. Command used, if running via terminal - N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
from test_junkie.decorators import Suite, test, beforeClass
from tests.junkie_suites.TestListener import TestListener


@Suite(retry=2,
       listener=TestListener,
       priority=2, feature="Store", owner="Mike", parameters=[1, 2])
class NewProductsSuite:

    @beforeClass()
    def before_class1(self):
       raise Exception("Derp")

    @test(component="Admin", tags=["store_management"])
    def archive_product(self):
        print("archive_product")


if "__main__" == __name__:
    from test_junkie.runner import Runner
    runner = Runner([NewProductsSuite])
    runner.run(test_multithreading_limit=4)

Expected behavior

The archive_product test should be executed twice, and metrics should be updated accordingly, irrespective of the beforeClass error, because suite retry is set to 2.

Actual behavior

The test only gets executed once; thus, on a successful retry of the beforeClass, it does not run the test again and the test is reported as ignored. This bug was introduced during the fix of #19



FileNotFoundError: [Errno 2] No such file or directory: report_template.html

  1. Python version python --version >> All
  2. Test Junkie version pip show test_junkie >> 0.4a0
  3. Command used, if running via terminal >> N/A
  4. Smallest code snippet that can reproduce the issue
    Any suites that are executed with the html_report flag
@Suite()
class Example:

    @test()
    def a(self):
        pass

    @test()
    def b(self):
        pass

runner = Runner([Example], html_report="path/to/report.html")
runner.run()

Expected behaviour

Should parse out the default html template and create a new html report at path/to/report.html

Actual behaviour

The default HTML template, used to create the report, is not actually distributed with the Python package, resulting in an error such as:

Traceback (most recent call last):
  File "C:/Users/aspir/Desktop/test.py", line 30, in <module>
    runner.run()
  File "C:\Python36\lib\site-packages\test_junkie\runner.py", line 257, in run
    self.__create_html_report(reporter)
  File "C:\Python36\lib\site-packages\test_junkie\runner.py", line 117, in __create_html_report
    reporter.generate_html_report(self.__kwargs.get("html_report"))
  File "C:\Python36\lib\site-packages\test_junkie\reporter\reporter.py", line 36, in generate_html_report
    with open(self.html_template, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Python36\\lib\\site-packages\\test_junkie\\reporter\\report_template.html'

Coverage is started after the functions are defined resulting in false negative reports

  1. Python version python --version - All supported versions
  2. Test Junkie version pip show test_junkie - All versions under & including 0.7a7 that support coverage
  3. Platform aka Windows 10 or Linux Mint 18.1 - All platforms
  4. Command used, if running via terminal - tj run -s E:\Development\TestJunkieWebTesting\tests\suites\NavigationSuite.py -T 6 --code-cov
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Expected behavior

Accurate coverage metrics are reported

Actual behavior

When using the --code-cov flag, lines would be reported as not executed when in reality they were. This is due to late initialization of the coverage API: coverage needs to be initialized before the scan for suites so that the creation of the functions and classes is detected by the coverage library.
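The root cause can be demonstrated with the standard-library trace module: `def` and `class` statements are themselves executable lines that run at import time, so a tracer started after the import will never see them. A self-contained sketch (not Test Junkie's code) that imports a tiny module under an already-running tracer:

```python
import importlib.util
import os
import tempfile
import textwrap
import trace

# A tiny module whose only lines executed at import time are the
# `def`/`class` statements and the class body:
source = textwrap.dedent("""\
    def helper():
        return 42

    class Widget:
        pass
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "demo_mod.py")
    with open(path, "w") as f:
        f.write(source)

    tracer = trace.Trace(count=True, trace=False)

    def import_module():
        spec = importlib.util.spec_from_file_location("demo_mod", path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module

    # Importing UNDER the tracer records the def/class lines as hit;
    # importing before starting it would report them as never executed.
    tracer.runfunc(import_module)
    hit_lines = {line for (fname, line) in tracer.results().counts
                 if fname == path}

print(sorted(hit_lines))
```

The same ordering constraint applies to the coverage library: start measurement first, then scan/import the suite modules.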



Error in report creation

  1. Python version python --version----V3.10
  2. Test Junkie version pip show test_junkie -----Version: 0.8a6
  3. Platform aka Windows 10 or Linux Mint 18.1 ---Linux
  4. Command used, if running via terminal ---python3 Runner.py
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
  File ".local/lib/python3.10/site-packages/test_junkie/metrics.py", line 179, in get_basic_report
    report["tests"][param_data["status"]] += 1
KeyError: None

Expected behavior

The code should return the report.

Actual behavior

KeyError: None


PATH: venv/lib/python3.10/site-packages/test_junkie/metrics.py, line 156.
Here is code that fixes the error:

from collections import defaultdict

def get_basic_report(self):

    def get_template():
        return {"total": 0,
                TestCategory.SUCCESS: 0,
                TestCategory.FAIL: 0,
                TestCategory.ERROR: 0,
                TestCategory.IGNORE: 0,
                TestCategory.SKIP: 0,
                TestCategory.CANCEL: 0}

    report = {"tests": get_template(),
              "suites": defaultdict(get_template)}

    for suite in self.__executed_suites:
        for test in suite.get_test_objects():
            test_metrics = test.metrics.get_metrics()
            for class_param, class_param_data in test_metrics.items():
                for param, param_data in class_param_data.items():
                    report["tests"]["total"] += 1
                    if param_data["status"] is not None:
                        report["tests"][param_data["status"]] += 1
                        report["suites"][suite]["total"] += 1
                        report["suites"][suite][param_data["status"]] += 1

    return report

wss

I found it very difficult to find a way to contact you!
I saw some of your published videos, specifically the one about browsermob-proxy.
I want to ask whether it can record the socket traffic flowing in the browser using the method you showed.
I want to save the socket data, but it is encrypted in transit; is there any way to get it directly from the browser?
Thanks,
Regards
[email protected]

__get_value function - ast.literal_eval on a str type to get a str type is returning SyntaxError

  1. Python version python --version
    3.6.8
  2. Test Junkie version pip show test_junkie
    Name: test-junkie
    Version: 0.7a3
    Summary: Modern Testing Framework
    Home-page: https://www.test-junkie.com/
    Author: Artur Spirin
    Author-email: [email protected]
    License: UNKNOWN
    Location: c:\serp_storage.venv\lib\site-packages
    Requires: coverage, colorama, configparser, psutil, appdirs, statistics
    Required-by:
  3. Platform aka Windows 10 or Linux Mint 18.1
    Windows 10
  4. Command used, if running via terminal
tj config update -s "C:\serp_storage\tests\test_serp_storage.py" --html_report "C:\serp_storage\report.html"
tj run
  1. Smallest code snippet that can reproduce the issue and /or description of the issue
(.venv) PS C:\serp_storage> tj run
[INFO] Scanning: C:\serp_storage\tests\test_serp_storage.py ...
[INFO] Scan finished in: 0.90 seconds. Found: 1 suite(s).
[INFO] Running tests ...
[ERROR] Unexpected error during test execution.

Traceback (most recent call last):
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\cli\cli_runner.py", line 271, in run_suites
    quiet=args.quiet)
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\runner.py", line 131, in run
    self.__settings = Settings(runner_kwargs=self.__kwargs, run_kwargs=kwargs)
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\settings.py", line 48, in __init__
    self.__print_settings()
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\settings.py", line 65, in __print_settings
    value=self.html_report, type=type(self.html_report)))
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\settings.py", line 192, in html_report
    key="html_report", default=Settings.__DEFAULT_HTML)
  File "c:\serp_storage\.venv\lib\site-packages\test_junkie\settings.py", line 99, in __get_value
    value = ast.literal_eval(value)
  File "C:\Users\hkuang\AppData\Local\Programs\Python\Python36\lib\ast.py", line 48, in literal_eval
    node_or_string = parse(node_or_string, mode='eval')
  File "C:\Users\hkuang\AppData\Local\Programs\Python\Python36\lib\ast.py", line 35, in parse
    return compile(source, filename, mode, PyCF_ONLY_AST)
  File "<unknown>", line 1
    C:\serp_storage\report.html
     ^
SyntaxError: invalid syntax

value = C:\serp_storage\report.html
type(value) = <class 'str'>
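The underlying pitfall: ast.literal_eval() raises SyntaxError (or ValueError) when its input is not a valid Python literal, and a bare Windows path is not one. A hedged sketch of a tolerant parser (the function name is illustrative, not Test Junkie's actual implementation):

```python
import ast

def parse_config_value(raw):
    """Return the value as a Python literal when possible, otherwise
    fall back to the raw string (e.g. file paths)."""
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw

print(parse_config_value("True"))
print(parse_config_value("[1, 2]"))
print(parse_config_value(r"C:\serp_storage\report.html"))
```

Booleans, numbers, and lists stored as strings still round-trip, while path strings pass through unchanged.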

Expected behaviour

The string value (here, a Windows file path) should be returned as-is when it is not a valid Python literal.

Actual behaviour

ast.literal_eval() raises SyntaxError on the path string and the run aborts.



HTML Report JS Modal Initiation issue

  1. Python version python --version >> 2 & 3
  2. Test Junkie version pip show test_junkie >> All
  3. Platform aka Windows 10 or Linux Mint 18.1 >> All
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Expected behaviour

User must be able to paginate and search and view individual test results on any page.

Actual behaviour

Large HTML reports (those with multiple pages) suffer a JS issue where clicking on a row brings up the wrong modal dialog if the user has searched or navigated to any page other than the first page of the report.



Missing traceback on empty parameters exception handling

  1. Python version python --version >> 2.7 & 3+
  2. Test Junkie version pip show test_junkie >> 0.5a7
  3. Platform aka Windows 10 or Linux Mint 18.1 >> all
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
def f():
    return []

@Suite(listener=TestListener)
class ExampleTestSuite:

    @test(parameters=f)
    def something_to_test1(self, parameter):
        pass

if "__main__" == __name__:
    runner = Runner([ExampleTestSuite])
    runner.run()

Expected behaviour

The traceback that caused the ignore should be propagated so the user can see why it happened.

Actual behaviour

The user does not know why the test case is being ignored when empty parameters are passed in (and possibly in other cases).
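What surfacing the reason could look like, sketched with hypothetical names (not the framework's internals):

```python
import traceback

def f():
    return []

def resolve_parameters(parameters_func):
    """Resolve a `parameters` callable; if it yields nothing, record
    WHY the test will be ignored instead of failing silently."""
    try:
        params = parameters_func()
        if not params:
            raise ValueError("parameters function {!r} returned no values"
                             .format(parameters_func.__name__))
        return params
    except Exception:
        print("Test will be ignored, reason:")
        traceback.print_exc()
        return None

result = resolve_parameters(f)
print("resolved:", result)
```

The test is still ignored, but the captured traceback explains the empty-parameters condition to the user.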



Bug in the no_retry_on & retry_on logic

  1. Python version python --version >> All
  2. Test Junkie version pip show test_junkie >> 0.5a6
  3. Platform aka Windows 10 or Linux Mint 18.1 >> all
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue

Expected behaviour

Actual behaviour

There is a bug in the no_retry_on and retry_on logic. The check needs to be:

if self.get_no_retry_on() and type(test["exceptions"][-1]) in self.get_no_retry_on():
    return False
elif self.get_retry_on() and type(test["exceptions"][-1]) not in self.get_retry_on():
    return False
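The corrected decision can be exercised in isolation. A toy harness with hypothetical names modeled on the snippet above (not Test Junkie's actual internals):

```python
class RetryPolicy:
    """Toy model of the corrected no_retry_on / retry_on decision."""

    def __init__(self, retry_on=None, no_retry_on=None):
        self._retry_on = retry_on or []
        self._no_retry_on = no_retry_on or []

    def should_retry(self, last_exception):
        # An explicit no-retry list always wins for matching types.
        if self._no_retry_on and type(last_exception) in self._no_retry_on:
            return False
        # A retry-on list restricts retries to the listed types only.
        if self._retry_on and type(last_exception) not in self._retry_on:
            return False
        return True

policy = RetryPolicy(no_retry_on=[AssertionError])
print(policy.should_retry(AssertionError("no retry")))
print(policy.should_retry(TimeoutError("retry me")))
```

Checking the last raised exception's type against both lists, in that order, gives the expected behavior for every combination.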


Using class of type in test/suite parameters causes TypeError error during json.dumps() when creating HTML report

  1. Python version Python 3.6.0
  2. Test Junkie version 0.7a8
  3. Platform aka MAC OS Catalina 10.15.1

Expected behavior

The HTML report is created.

Actual behavior

Traceback (most recent call last):
  File "/scr/test/Runner.py", line 10, in <module>
    aggregator = runner.run()
  File "/venv/lib/python3.6/site-packages/test_junkie/runner.py", line 188, in run
    reporter.generate_html_report(self.__settings.html_report)
  File "/venv/lib/python3.6/site-packages/test_junkie/reporter/html_reporter.py", line 110, in generate_html_report
    html = html.format(body=body, database_lol=json.dumps(table_data["database_lol"]))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'type' is not JSON serializable

Note: as a workaround, you can delete the json.dumps() call on line 110 of the library file html_reporter.py.
Path: test_junkie/reporter/html_reporter.py
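Rather than deleting the json.dumps() call, a less invasive fix is a `default` hook that stringifies anything the encoder cannot handle, such as class objects used as test parameters. A sketch (the class name is hypothetical):

```python
import json

class PaymentBackend:          # stand-in for a class used as a parameter
    pass

table_data = {"parameters": [PaymentBackend, 1, "a"]}

# json.dumps(table_data) raises TypeError: Object of type 'type' is
# not JSON serializable; default=str converts the class to its repr
# instead of aborting report generation:
serialized = json.dumps(table_data, default=str)
print(serialized)
```

All JSON-native values pass through untouched; only the otherwise-unserializable objects are rendered as strings.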

Class instance references should be the same for all decorators

  1. Python version python --version >> Discovered on 2.7, not tested on 3+
  2. Test Junkie version pip show test_junkie >> 0.5a6
  3. Platform aka Windows 10 or Linux Mint 18.1 >> all
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
from test_junkie.decorators import test, Suite, beforeClass, beforeTest
from test_junkie.runner import Runner


@Suite()
class Tes:

    def __init__(self):

        self.example = 0

    @beforeClass()
    def before_class(self):

        self.example = 1

    @beforeTest()
    def before_test(self):
        self.example = 2
        print ">", self.example

    @test(retry=2)
    def test_example(self):
        print ">>", self.example
        assert 1 == 2


if "__main__" == __name__:

    runner = Runner([Tes])
    runner.run()

Expected behaviour

test should have access to the self.example variable that was changed by the beforeTest and beforeClass functions.

Actual behaviour

Looks like all of the decorators are referencing their own instance of the Tes class
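The expected behavior can be modeled with a miniature runner that instantiates the suite class once and drives every hook through that single instance (MiniRunner and ExampleSuite are hypothetical, not Test Junkie's internals):

```python
class MiniRunner:
    """Toy model: ONE instance of the suite class shared by all hooks."""

    def __init__(self, suite_cls):
        self.instance = suite_cls()      # single shared instance

    def run(self):
        # State set by the lifecycle hooks is visible to the test
        # because all three calls are bound to the same object.
        self.instance.before_class()
        self.instance.before_test()
        return self.instance.test_example()


class ExampleSuite:

    def __init__(self):
        self.example = 0

    def before_class(self):
        self.example = 1

    def before_test(self):
        self.example = 2

    def test_example(self):
        return self.example


print(MiniRunner(ExampleSuite).run())
```

When each decorator instead constructs its own instance, test_example would see the untouched `self.example = 0`, which matches the reported symptom.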


AttributeError: 'Thread' object has no attribute 'isAlive'

  1. Python version python --version ----> Python 3.9.5

  2. Test Junkie version pip show test_junkie ----> Version: 0.8a4

  3. Platform aka Windows 10 or Linux Mint 18.1 ----> Mac 11.2.3

  4. Command used, if running via terminal ----> N/a

  5. Smallest code snippet that can reproduce the issue and /or description of the issue

When I try to run in parallel (aggregator = runner.run(test_multithreading_limit=6, suite_multithreading_limit=2)),
the framework shows an error that says
AttributeError: 'Thread' object has no attribute 'isAlive'

Expected behavior

The framework should run all the test cases in parallel.

Actual behavior

The browser opens, but an error is shown.



====== WebDriver manager ======
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 954, in _bootstrap_inner
self.run()
File "/usr/local/Cellar/[email protected]/3.9.5/Frameworks/Python.framework/Versions/3.9/lib/python3.9/threading.py", line 892, in run
self._target(*self._args, **self._kwargs)
File "/Users/ricardovalbuena/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/test_junkie/runner.py", line 308, in __run_suite
while self.__processor.test_limit_reached():
File "/Users/ricardovalbuena/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/test_junkie/parallels.py", line 87, in test_limit_reached
if test["thread"].isAlive():
AttributeError: 'Thread' object has no attribute 'isAlive'
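Thread.isAlive() was a long-deprecated camelCase alias that Python 3.9 removed; the underscore spelling works on every supported version, so the fix is presumably to switch the check in parallels.py to is_alive(). A minimal demonstration:

```python
import threading
import time

t = threading.Thread(target=time.sleep, args=(0.2,))
t.start()

# Python 3.9 removed the deprecated alias t.isAlive();
# t.is_alive() is the portable spelling:
alive_before_join = t.is_alive()
t.join()
alive_after_join = t.is_alive()
print(alive_before_join, alive_after_join)
```

So `if test["thread"].isAlive():` becomes `if test["thread"].is_alive():`.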

SuiteObject is not added to the meta properties on before class fail/error

  1. Python version python --version >> All
  2. Test Junkie version pip show test_junkie >> 0.4a0
  3. Command used, if running via terminal >> N/A
  4. Smallest code snippet that can reproduce the issue

Listener definition:

from test_junkie.listener import Listener
class EventListener(Listener):

    def on_before_class_error(self, **kwargs):
       kwargs["properties"]["jm"]["jso"].get_class_name()

    def on_before_class_failure(self, **kwargs):
        kwargs["properties"]["jm"]["jso"].get_class_name()

Suite Definition:

from test_junkie.decorators import Suite, beforeClass
from titan.settings.EventListener import EventListener


@Suite(listener=EventListener)
class Example:

    @beforeClass()
    def beforeClass(self):
        raise Exception("I'm a fluffy bunny")

Expected behaviour

The jso (SuiteObject) object exists at this point, thus it's possible to add it on before class failure/error.

Actual behaviour

The jso (SuiteObject) object is not inserted into the data returned during before class failure/error.

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/runner.py", line 572, in __process_event
    error=error)
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/listener.py", line 36, in on_before_class_error
    Listener.__process_event(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/test_junkie/listener.py", line 15, in __process_event
    exception=kwargs.get("error", None))
  File "/var/www/html/qatitan/titan/settings/EventListener.py", line 79, in on_before_class_error
    self.__critical_failure_alert(category="error", decorator_type=DecoratorType.BEFORE_CLASS, **kwargs)
  File "/var/www/html/qatitan/titan/settings/EventListener.py", line 48, in __critical_failure_alert
    title = "{} in: {} >> {}".format(category, kwargs["properties"]["jm"]["jso"].get_class_name(), decorator_type)
KeyError: 'jso'
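Until the SuiteObject is populated for before-class events, a listener can guard the lookup instead of crashing. A defensive sketch (the helper name is hypothetical):

```python
def suite_class_name(kwargs):
    """Safely extract the suite's class name from listener kwargs,
    tolerating the missing 'jso' key on before-class failures."""
    jso = kwargs.get("properties", {}).get("jm", {}).get("jso")
    return jso.get_class_name() if jso is not None else "<unknown suite>"

# Simulates the payload delivered on before-class error, where the
# SuiteObject has not been inserted yet:
print(suite_class_name({"properties": {"jm": {}}}))
```

This is only a workaround on the listener side; the underlying fix is for the framework to insert the SuiteObject before firing the event.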

on_ignore event may be triggered for tests that have passed prior to an error in beforeClass() on retry

  1. Python version python --version >> 2.7 & 3.5+
  2. Test Junkie version pip show test_junkie >> 0.7a4 and prior
  3. Platform aka Windows 10 or Linux Mint 18.1 >> All
  4. Command used, if running via terminal >> N/A
  5. Smallest code snippet that can reproduce the issue and /or description of the issue
from test_junkie.decorators import Suite, test, beforeClass
from tests.junkie_suites.TestListener import TestListener


@Suite(retry=2,
       listener=TestListener,
       priority=2, feature="Store", owner="Mike", parameters=[1, 2])
class NewProductsSuite:

    ATT = 1

    @beforeClass()
    def before_class1(self):
        if NewProductsSuite.ATT == 2:
            raise Exception("Derp")
        NewProductsSuite.ATT += 1

    @test(component="Admin", tags=["store_management"], parameters=[10, 20], parallelized_parameters=True)
    def edit_product(self, parameter, suite_parameter):
        print("edit_product")

    @test(component="Admin", tags=["store_management"])
    def archive_product(self):
        print("archive_product")


if "__main__" == __name__:
    from test_junkie.runner import Runner
    runner = Runner([NewProductsSuite])
    runner.run(test_multithreading_limit=4)

Expected behavior

archive_product should not be marked failed after it has passed initially.

Actual behavior

archive_product gets two metric updates, one is marked success and the other is marked as error due to exception in before class.



[HTML Report] Incorrect timestamp to datetime conversion

  1. Python version python --version >> All
  2. Test Junkie version pip show test_junkie >> 0.4a1
  3. Command used, if running via terminal >> N/A
  4. Smallest code snippet that can reproduce the issue
Any time monitor_resources is used together with the HTML report, the CPU and MEM Trend date conversion is incorrect when viewing the report.
runner = Runner([Example], html_report="/path/to/report.html", monitor_resources=True)
runner.run()

Expected behaviour

timestamp should be converted to local date-time

Actual behaviour

The timestamp is converted into a date-time, but it's from 1969.
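A report date of December 31, 1969 / January 1, 1970 is the classic signature of a seconds-vs-milliseconds mix-up: the charting code receives epoch seconds but interprets them as milliseconds (or vice versa), collapsing every point to just after the Unix epoch, which timezones west of UTC render as late 1969. The effect, sketched in Python:

```python
from datetime import datetime, timezone

ts_seconds = 1554000000            # a 2019 epoch timestamp, in seconds

# Correct interpretation (fromtimestamp expects seconds):
ok = datetime.fromtimestamp(ts_seconds, tz=timezone.utc)

# Bug: treating the seconds value as if it were milliseconds shrinks
# it by a factor of 1000, landing in January 1970:
bad = datetime.fromtimestamp(ts_seconds / 1000.0, tz=timezone.utc)

print(ok.year, bad.year)
```

The fix is to make the producer and the consumer of the resource-monitoring timestamps agree on the unit (JavaScript's `new Date(n)` expects milliseconds, so seconds must be multiplied by 1000 before plotting).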

Hi Champion

Describe desired feature

I wanted to ask how to build the project you present on YouTube in:
Page Objects - Advanced Selenium WebDriver Tutorial [Python]

I would love it if you could email it to me or explain how to complete this project.
[email protected]
Many thanks in advance!

Any existing examples of such feature?
