
specter's Introduction

Specter

Specter is a Python testing framework inspired by RSpec and Jasmine. The library was created out of a desire to have a relatively flexible Python testing framework that adopted a more code-centric approach to BDD.

Specter is open-source and is available on GitHub. We love contributions!

Getting Started

  • Specter Documentation
  • Problems or Questions? Ask us on Freenode in the #specterframework channel

Continuous Integration

Travis CI build status · Coverage status · Specter PyPI package

Release Notes

specter's People

Contributors

alin23, chadlung, jmvrbanac, mtchurch, stackedsax, stas

specter's Issues

Add numbered source output for expect failures

Currently, when an exception is thrown from a test method, numbered source output for that exception's traceback is printed, which helps locate the file and line number where the error occurred.

This issue proposes adding similar numbered source output for expect failures as well, since it can be difficult to locate the file and line number where such a failure occurred, especially when a lot of tests are run.

Issue with cross-module fixtures

Attempting to use a fixture from another module is causing an odd traceback. However, if I move the fixture into the module I want to use it in, the error goes away and it works just fine.

 Error Traceback:
          - /home/john/Repositories/github/requests-cloud/spec/keystone.pyc
          ------------------
          ------------------
          - /home/john/.virtualenvs/cloudauth/lib/python2.7/site-packages/specter/expect.py
          ------------------
            161:     """
            162:     src_line = get_called_src_line(use_child_attr='__spec__')
        --> 163:     src_params = get_expect_param_strs(src_line)
            164:     expect_obj = ExpectAssert(obj, src_params=src_params,
          ------------------
          - /home/john/.virtualenvs/cloudauth/lib/python2.7/site-packages/specter/util.py
          ------------------
            38: def get_expect_param_strs(src_line):
        --> 39:     matches = re.search('\((.*?)\)\..*\((.*?)\)', src_line)
            40:     return (matches.group(1), matches.group(2)) if matches else None
          ------------------
          - /home/john/.virtualenvs/cloudauth/lib/python2.7/re.py
          ------------------
            140:     """Scan through string looking for a match to the pattern, returning
            141:     a match object, or None if no match was found."""
        --> 142:     return _compile(pattern, flags).search(string)
          ------------------
          - Error | TypeError: expected string or buffer

Fix exit code

Currently, Specter doesn't return an exit code based on the test results. For Specter to work in a CI process, it needs to return proper exit codes.
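
A minimal sketch of the idea, assuming the runner ends up with failure and error counts to inspect (the function and argument names here are illustrative, not Specter's actual API):

import sys

def exit_with_status(failure_count, error_count):
    # Non-zero exit codes are what CI systems use to mark a build as failed.
    sys.exit(1 if (failure_count or error_count) else 0)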

Add a no-color option

Often in CI systems, ANSI color output isn't rendered. Since Specter's default reporter outputs in color, we need to provide an option to disable it.
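
A rough sketch of what the option could look like, using argparse and plain ANSI escape sequences (the flag handling below is illustrative, not Specter's actual CLI code):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--no-color', action='store_true',
                    help='Disable ANSI color codes in console output')
args = parser.parse_args()

def colored(msg, color_code):
    # With --no-color set, skip the ANSI escape sequences entirely.
    if args.no_color:
        return msg
    return '\033[{0}m{1}\033[0m'.format(color_code, msg)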

Coverage data isn't properly reported for Python 3+

I'm wondering if this is caused by Pynsive's module loading process, as it resembles the issue I fixed in Pynsive to support coverage in Python 2.7.

Coverage report for Python 3.4.x

py34 runtests: commands[1] | coverage report -m
Name               Stmts   Miss  Cover   Missing
------------------------------------------------
lplight/__init__       0      0   100%   
lplight/client        54     54     0%   1-101
lplight/models       124    124     0%   1-143
------------------------------------------------
TOTAL                178    178     0%   
_________________________________________

Coverage report for Python 2.7.x

py27 runtests: commands[1] | coverage report -m
Name               Stmts   Miss Branch BrMiss  Cover   Missing
--------------------------------------------------------------
lplight/__init__       0      0      0      0   100%   
lplight/client        54     54     12     12     0%   1-101
lplight/models       124      0      4      1    99%   8->7
--------------------------------------------------------------
TOTAL                178     54     16     13    65%   

Create reporter for the Specter json format

Output a format something like:

{
    "specs": {
        "ExampleSpec": {
            "docstring": "",
            "metadata": {},
            "tests": {
                "it_can_create_an_object": {
                    "docstring": "",
                    "metadata": {},
                    "success": false,
                    "error": null,
                    "incomplete": true
                }
            },
            "specs": {}
        }
    }
}
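
A reporter for this format could build nested dictionaries as specs and tests finish and serialize them with the standard json module; a minimal sketch, with the data structure below purely illustrative:

import json

def render_report(specs):
    # 'specs' mirrors the format above: spec name -> details, with nested
    # 'tests' and child 'specs' mappings.
    return json.dumps({'specs': specs}, indent=4, sort_keys=True)

print(render_report({
    'ExampleSpec': {
        'docstring': '',
        'metadata': {},
        'tests': {
            'it_can_create_an_object': {
                'docstring': '',
                'metadata': {},
                'success': False,
                'error': None,
                'incomplete': True,
            },
        },
        'specs': {},
    },
}))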

Add settings manager

We need a way to get and store configuration information across the Specter platform.
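
A minimal sketch of what such a manager might look like (a plain shared key/value store; the class and attribute names are assumptions, not Specter's actual design):

class SettingsManager(object):
    """Small key/value store shared across the process."""

    def __init__(self):
        self._settings = {}

    def set(self, key, value):
        self._settings[key] = value

    def get(self, key, default=None):
        return self._settings.get(key, default)

# A single shared instance the rest of the framework could import.
settings = SettingsManager()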

Clean up odd traceback when mistyping the qualifier not_to

If you accidentally use to_not instead of not_to, you get the correct error, but it then prints a second traceback that doesn't make much sense. This should be cleaned up.

      --> 40:         expect(inst).to_not.be_none()
        ------------------
        - Error | AttributeError: 'ExpectAssert' object has no attribute 'to_not'
Traceback (most recent call last):
  File ".tox/py27/bin/specter", line 9, in <module>
    load_entry_point('Specter==0.1.15', 'console_scripts', 'specter')()
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/runner.py", line 163, in activate
    runner.run(args)
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/runner.py", line 137, in run
    parallel_manager=self.parallel_manager)
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/spec.py", line 316, in execute
    self.standard_execution(select_metadata)
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/spec.py", line 294, in standard_execution
    self.top_parent.dispatch(TestEvent(case))
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/pyevents/manager.py", line 58, in dispatch
    listener.callback(event)
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/reporting/console.py", line 93, in event_received
    print_expects(test_case, level)
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/reporting/console.py", line 219, in print_expects
    expect_msg = u'{mark} {msg}'.format(mark=mark, msg=expect)
  File "/home/john/Repositories/github/Rift/.tox/py27/lib/python2.7/site-packages/specter/expect.py", line 109, in __str__
    action_list[-1] = self.expected_src_param or str(self.expected)
AttributeError: 'ExpectAssert' object has no attribute 'expected'

Literal expect messages aren't captured across lines

If someone has a line break in their expect call, Specter will output the value of the parameter instead of the literal code that the person wrote; a possible approach to fixing this is sketched after the example below.

Code:
expect(
    bug_attr).to.equal(sample_dict[key])

Results:
✔ Test id to equal Test id

Instead of the expected:
✔ bug_attr to equal sample_dict[key]
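
One possible approach (a sketch, not Specter's current implementation) is to keep reading source lines until the parentheses in the captured expression balance, so the whole call is available for parsing:

import linecache

def get_full_call_source(filename, lineno):
    # Append lines until the opening and closing parens balance, so a call
    # split across lines is captured as a single string.
    lines = []
    depth = 0
    while True:
        line = linecache.getline(filename, lineno)
        if not line:
            break
        lines.append(line.strip())
        depth += line.count('(') - line.count(')')
        if depth <= 0:
            break
        lineno += 1
    return ' '.join(lines)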

Parallel testing

We need to be able to run our tests in parallel. This will cause a bit of pain around reporting and data-driven describes.
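
A very rough sketch of the execution side, using multiprocessing and assuming test cases can be pickled as module-level callables (aggregation and reporting are left out):

from multiprocessing import Pool

def run_case(case):
    # Each worker runs one case and returns a (name, passed) pair for the
    # parent process to aggregate.
    try:
        case()
        return case.__name__, True
    except Exception:
        return case.__name__, False

def run_in_parallel(cases, processes=4):
    with Pool(processes=processes) as pool:
        return pool.map(run_case, cases)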

A way to attach release test outputs to git tags

It would be cool to view test outputs for every release. This isn't a Specter issue, but maybe a hook/plugin that can be used with Specter and GitHub so that after every release, tests can be saved for documentation/artifact purposes. This would allow us to diff tests between releases to check on introduced bugs, deprecated features, etc.

Capture stdout/stderr that occurs during the execution of a test

Currently, Specter doesn't capture output from code that runs during a test. It would be nice to have the option to suppress it; a capture sketch follows the example output below.

This is what it currently looks like (kind of ugly):

          ___
        _/ @@\
    ~- ( \  O/__     Specter
    ~-  \    \__)   ~~~~~~~~~~
    ~-  /     \     Keeping the Bogeyman away from your code!
    ~- /      _\
       ~~~~~~~~~

Some actions Actions
  ∟ should not contain duplicates
(None, None)
GOT HERE!
  ∟ can disconnect
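
A sketch of one way the runner could capture this output, by swapping sys.stdout/sys.stderr around each test call (names here are illustrative; assumes Python 3 text streams):

import sys
from contextlib import contextmanager
from io import StringIO

@contextmanager
def captured_output():
    # Temporarily swap stdout/stderr so prints from test code don't hit the
    # console; the captured text can be attached to the result or dropped.
    old_out, old_err = sys.stdout, sys.stderr
    sys.stdout, sys.stderr = StringIO(), StringIO()
    try:
        yield sys.stdout, sys.stderr
    finally:
        sys.stdout, sys.stderr = old_out, old_err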

Redirect all print statements

To make Specter easier to test, we need to unify how Specter prints to the console into a single function. Once it's unified, we can redirect print statements and test the output.
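
A sketch of the kind of choke point this implies (the function name is an assumption):

import sys

def write_line(msg, stream=None):
    # Single place where console output happens; tests can pass their own
    # stream (e.g. an io.StringIO) and assert on what was written.
    stream = stream or sys.stdout
    stream.write(msg + '\n')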

Add expectations for checking types and isinstance

I'm thinking adding support for the following expectations would be good:

Checking Object Type:

expect(obj).to.be_a(object_type)

Checking object is an instance:

expect(obj).to.be_an_instance_of(object_type)
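
The underlying checks would presumably boil down to type() versus isinstance(); a sketch of just the assertion logic, outside of Specter's expect machinery:

def is_a(obj, object_type):
    # Strict type check: subclasses do not count.
    return type(obj) is object_type

def is_an_instance_of(obj, object_type):
    # Instance check: subclasses do count.
    return isinstance(obj, object_type)

assert is_an_instance_of(True, int)   # bool is a subclass of int
assert not is_a(True, int)            # but its exact type is bool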

Expect raises an error when attempting to catch an exception type that doesn't have __name__

Specter cannot format the name because it expects __name__ to be present. This causes a passing test to fail; a possible fix is sketched after the traceback below.

        Error Traceback:
          - /home/john/Repositories/github/requests-cloud/spec/keystone.py
          ------------------
            15: 
            16:         def get_token_should_raise_not_implemented(self):
        --> 17:             expect(self.auth.get_token).to.raise_a(NotImplemented)
            18: 
          ------------------
          - /home/john/.virtualenvs/cloudauth/lib/python2.7/site-packages/specter/expect.pyc
          ------------------
            121:             msg = _('Function {func_name} {was} expected to raise "{excpt}"'
            122:                     ''.format(func_name=self.target_src_param,
        --> 123:                               excpt=self.expected.__name__,
            124:                               was=was))
          ------------------
          - Error | AttributeError: 'NotImplementedType' object has no attribute '__name__'
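
One way to avoid the AttributeError when building the message is to fall back to repr() for anything without __name__ (a sketch, not necessarily how Specter resolved it):

def describe_expected(expected):
    # NotImplemented is a singleton, not an exception class, so it has no
    # __name__; fall back to a readable representation instead.
    return getattr(expected, '__name__', repr(expected))

describe_expected(NotImplementedError)   # 'NotImplementedError'
describe_expected(NotImplemented)        # 'NotImplemented'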

Add error test metric

Currently, if a test throws an exception, it gets counted as a failed test. We should have a separate category for errors.
Requested by: @samu4924

Be able to remove duplicates from combined data-driven datasets

While we currently handle duplication of names, we don't check whether a test has duplicated kwargs. The use case is loading a large dataset and making sure we don't have data-driven tests that are identical except for the test name, which essentially makes a duplicated test; see the sketch below.

Issue mentioned by: @reaperhulk
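
A sketch of the de-duplication step, assuming the dataset is a mapping of test name to kwargs (the structure is illustrative):

def remove_duplicate_cases(dataset):
    # Drop entries whose kwargs match one already seen, regardless of the
    # test name they were given.
    seen = []
    unique = {}
    for name, kwargs in dataset.items():
        if kwargs in seen:
            continue
        seen.append(kwargs)
        unique[name] = kwargs
    return unique

dataset = {
    'first_case': {'value': 1},
    'second_case': {'value': 1},   # duplicate kwargs under another name
    'third_case': {'value': 2},
}
# remove_duplicate_cases(dataset) keeps only one of the first two cases.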

Fix Jenkins Unicode error

This happened when using --no-color; a defensive printing sketch follows the traceback below.


          ___
        _/ @@\
    ~- ( \  O/__     Specter
    ~-  \    \__)   ~~~~~~~~~~
    ~-  /     \     Keeping the boogy man away from your code!
    ~- /      _\
       ~~~~~~~~~

Example Spec
Traceback (most recent call last):
  File "specter/runner.py", line 128, in <module>
    activate()
  File "specter/runner.py", line 121, in activate
    runner.run(args)
  File "specter/runner.py", line 100, in run
    suite.execute(select_metadata=select_meta)
  File "/var/lib/jenkins/jobs/Specter Test/workspace/specter/spec.py", line 217, in execute
    self.top_parent.dispatch(TestEvent(case))
  File "/var/lib/jenkins/.virtualenvs/Specter/lib/python2.7/site-packages/pyevents/manager.py", line 58, in dispatch
    listener.callback(event)
  File "/var/lib/jenkins/jobs/Specter Test/workspace/specter/reporting/console.py", line 129, in event_received
    self.print_test_msg(name, level, status)
  File "/var/lib/jenkins/jobs/Specter Test/workspace/specter/reporting/console.py", line 65, in print_test_msg
    self.print_indent_msg(msg=msg, level=level, color=color)
  File "/var/lib/jenkins/jobs/Specter Test/workspace/specter/reporting/console.py", line 92, in print_indent_msg
    self.print_colored(msg=msg, color=color)
  File "/var/lib/jenkins/jobs/Specter Test/workspace/specter/reporting/console.py", line 98, in print_colored
    print(msg)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u221f' in position 2: ordinal not in range(128)
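
The failure comes from printing the non-ASCII ∟ marker to a console whose encoding defaults to ASCII under Jenkins. One defensive option (a sketch, assuming Python 3 text strings) is to replace characters the console can't encode instead of raising:

import sys

def print_safe(msg):
    # Encode to the stream's encoding (falling back to ASCII) with 'replace',
    # so unencodable characters become '?' instead of raising
    # UnicodeEncodeError.
    encoding = getattr(sys.stdout, 'encoding', None) or 'ascii'
    sys.stdout.write(msg.encode(encoding, 'replace').decode(encoding) + '\n')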

Fix summary report colors

Currently, the summary report is always output in green. If there is an error or failure during the test run, the summary color should change to red.

Split out test events

Currently there is only a generic TestEvent when a test completes. This should be split up into start and complete test events.
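
A minimal sketch of what the split might look like (class names are assumptions, not Specter's actual event types):

class TestStartEvent(object):
    def __init__(self, test_case):
        self.test_case = test_case

class TestCompleteEvent(object):
    def __init__(self, test_case, result):
        self.test_case = test_case
        self.result = result

# The runner would dispatch TestStartEvent before executing a case and
# TestCompleteEvent afterwards, so reporters can react to both phases.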

Having a parameter named "name" in before_all raises an exception

I noticed that when I had a parameter named "name" in my before_all, it would cause an exception to be raised. It appears to be trying to overwrite a Specter-specific variable:

Traceback (most recent call last):
  File "/usr/local/bin/specter", line 9, in <module>
    load_entry_point('Specter==0.1.12', 'console_scripts', 'specter')()
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/specter/runner.py", line 121, in activate
    runner.run(args)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/specter/runner.py", line 100, in run
    suite.execute(select_metadata=select_meta)
  File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/specter/spec.py", line 209, in execute
    self.before_all()
  File "", line 120, in before_all
AttributeError: can't set attribute

Coverage doesn't seem to be catching some executed lines

Not sure what is causing this, but on a sample project I created the other day, coverage.py doesn't seem to be catching executed lines. This could be a coverage.py bug as it detects the test code, but just not the source code. This needs to be looked into.

Add coverage.py run support

Support integration of Specter and coverage.py by allowing the following:

coverage run --source=blah/,tests/ -m specter
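
Running a package with -m requires a __main__.py; a sketch of what that could contain, assuming the activate() entry point visible in the tracebacks above:

# specter/__main__.py
from specter.runner import activate

if __name__ == '__main__':
    activate()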

Remove duplicated traceback information

Traceback information seems to be getting duplicated between errors. In the following example, the first traceback is bleeding into the second's results:

Target Model
  ∟ Deserialization
    ∟ can deserialize from a dictionary
  ∟ Serialization
    ∟ can serialize to a summary dictionary
        Error Traceback:
          - /home/john/Repositories/github/Rift/spec/rift/data/models/target.pyc
          ------------------
            28:             summary_dict = self.target.summary_dict()
            29: 
        --> 30:             expect(summary_dict['id']).to.equal(self.example_dict['id'])
            31:             expect(summary_dict['name']).to.equal(self.example_dict['name'])
          ------------------
          - Error | KeyError: 'id'
      ✘ summary_dict['id'] self.example_dict['id']
          summary_dict['id']: fc619f5c-0844-4013-8ff0-86e0782c9978
          self.example_dict['id']: ERROR - Couldn't evaluate expected value
    ∟ can serialize to a dictionary
        Error Traceback:
          - /home/john/Repositories/github/Rift/spec/rift/data/models/target.pyc
          ------------------
            28:             summary_dict = self.target.summary_dict()
            29: 
        --> 30:             expect(summary_dict['id']).to.equal(self.example_dict['id'])
            31:             expect(summary_dict['name']).to.equal(self.example_dict['name'])
          ------------------
          - /home/john/.virtualenvs/Rift/lib/python2.7/site-packages/specter/spec.pyc
          ------------------
            54:         self.start()
            55:         try:
        --> 56:             MethodType(self.case_func, context or self)(**kwargs)
            57:         except TestIncompleteException as e:
          ------------------
          - /home/john/Repositories/github/Rift/spec/rift/data/models/target.pyc
          ------------------
            14:             target_dict = self.target.as_dict()
            15: 
        --> 16:             expect(target_dict['id']).to.equal(self.example_dict['id'])
            17:             expect(target_dict['name']).to.equal(self.example_dict['name'])
          ------------------
          - Error | KeyError: 'id'
      ✘ target_dict['id'] self.example_dict['id']
