UnitTesting

This is a unittest framework for Sublime Text. It runs unittest test cases on local machines and via GitHub Actions. It also supports testing syntax_test files for the new sublime-syntax format and sublime-color-scheme files.

Sublime Text 4

Sublime Text 4 is now supported and testing works for Python 3.8 packages.

Preparation

  1. Install UnitTesting via Package Control.
  2. Prepare the package you want to test.
  3. Test cases should be placed in test*.py files under the directory tests (configurable, see below). The test cases are then loaded by TestLoader.discover.

Here are some small examples.
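For orientation, a typical package layout might look like this (the file names are illustrative):

MyPackage/
|__ my_plugin.py
|__ unittesting.json        (optional, see Options below)
|__ tests/
    |__ test_commands.py
    |__ test_utils.py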

Running Tests Locally

Command Palette

  1. Open Command Palette using ctrl+shift+P or menu item Tools → Command Palette...
  2. Choose a UnitTesting: ... command to run and hit Enter.

To test any package...

  1. Run UnitTesting: Test Package.
  2. Enter the package name in the input panel and hit Enter.

An output panel pops up displaying progress and results of running tests.

To run only the tests in particular files, enter <Package name>:<filename>. <filename> should be a Unix shell wildcard matching the test file names; <Package name>:test*.py is used by default. For example, MyPackage:test_commands*.py runs only the matching test files of MyPackage.

The command UnitTesting: Test Current Package runs all tests of the package containing the active view's file. The package is reloaded to pick up any code changes, and then the tests are executed.

The command UnitTesting: Test Current Package with Coverage runs the tests for the current package and generates a coverage report via coverage. The .coveragerc file controls the coverage configuration. If it is missing, UnitTesting ignores the tests directory.
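If you do provide a .coveragerc, a minimal one might look like the following sketch, using standard coverage.py settings (the omitted paths are illustrative):

[run]
omit =
    tests/*

[report]
show_missing = true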

Note

As of UnitTesting 1.8.0 the following commands have been replaced to allow more flexible usage and integration into build systems.

unit_testing_current_package is replaced by:

{ "command": "unit_testing", "package": "$package_name" }

unit_testing_current_file is replaced by:

{ "command": "unit_testing", "package": "$package_name", "pattern": "$file_name" }

Build System

To run tests via a build system, specify unit_testing as the build system "target".

{
  "target": "unit_testing"
}

Project specific Test Current Package build command

It is recommended to add the following to the .sublime-project file so that ctrl+b invokes the testing action.

"build_systems":
[
  {
    "name": "Test Current Package",
    "target": "unit_testing",
    "package": "$package_name",
    "failfast": true
  }
]

Project specific Test Current File build command

It is recommended to add the following to the .sublime-project file so that ctrl+b invokes the testing action.

"build_systems":
[
  {
    "name": "Test Current File",
    "target": "unit_testing",
    "package": "$package_name",
    "pattern": "$file_name",
    "failfast": true
  }
]

GitHub Actions

UnitTesting provides the following GitHub Actions, which can be combined in a workflow to design package tests.

  1. SublimeText/UnitTesting/actions/setup

    Sets up Sublime Text to run tests within.

    This must always be the first step after checking out the package to test.

  2. SublimeText/UnitTesting/actions/run-color-scheme-tests

    Tests color schemes using ColorSchemeUnit.

  3. SublimeText/UnitTesting/actions/run-syntax-tests

    Tests sublime-syntax definitions using the built-in syntax test functionality of the already running Sublime Text environment.

    It is an alternative to SublimeText/syntax-test-action and sublimehq's online syntax_test_runner.

  4. SublimeText/UnitTesting/actions/run-tests

    Runs the unit_testing command to perform Python unit tests.

Note

Actions are released on the v1 branch. Minor changes will be pushed to the same branch unless there are breaking changes.

Color Scheme Tests

To integrate color scheme tests via ColorSchemeUnit add the following snippet to a workflow file (e.g. .github/workflows/color-scheme-tests.yml).

name: ci-color-scheme-tests

on: [push, pull_request]

jobs:
  run-color-scheme-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: SublimeText/UnitTesting/actions/setup@v1
      - uses: SublimeText/UnitTesting/actions/run-color-scheme-tests@v1

Syntax Tests

To run only syntax tests add the following snippet to a workflow file (e.g. .github/workflows/syntax-tests.yml).

name: ci-syntax-tests

on: [push, pull_request]

jobs:
  run-syntax-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: SublimeText/UnitTesting/actions/setup@v1
      - uses: SublimeText/UnitTesting/actions/run-syntax-tests@v1

Note

If you only need syntax tests, you may also check out SublimeText/syntax-test-action. Using UnitTesting's action makes most sense when re-using an already set-up Sublime Text test environment.

Unit Tests

To run only Python unit tests on all platforms and versions of Sublime Text, add the following snippet to a workflow file (e.g. .github/workflows/unit-tests.yml).

name: ci-unit-tests

on: [push, pull_request]

jobs:
  run-tests:
    strategy:
      fail-fast: false
      matrix:
        st-version: [3, 4]
        os: ["ubuntu-latest", "macOS-latest", "windows-latest"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: SublimeText/UnitTesting/actions/setup@v1
        with:
          package-name: Package Name   # if differs from repo name
          sublime-text-version: ${{ matrix.st-version }}
      - uses: SublimeText/UnitTesting/actions/run-tests@v1
        with:
          coverage: true
          package-name: Package Name   # if differs from repo name
      - uses: codecov/codecov-action@v4

Run All Tests

name: ci-tests

on: [push, pull_request]

jobs:
  run-tests:
    strategy:
      fail-fast: false
      matrix:
        st-version: [3, 4]
        os: ["ubuntu-latest", "macOS-latest", "windows-latest"]
    runs-on: ${{ matrix.os }}
    steps:
      # checkout package to test
      - uses: actions/checkout@v4

      # setup test environment
      - uses: SublimeText/UnitTesting/actions/setup@v1
        with:
          sublime-text-version: ${{ matrix.st-version }}

      # run color scheme tests (only on Linux)
      - if: ${{ matrix.os == 'ubuntu-latest' }}
        uses: SublimeText/UnitTesting/actions/run-color-scheme-tests@v1
      
      # run syntax tests and check compatibility with new syntax engine (only on Linux)
      - if: ${{ matrix.os == 'ubuntu-latest' }}
        uses: SublimeText/UnitTesting/actions/run-syntax-tests@v1
        with:
          compatibility: true
      
      # run unit tests with coverage upload
      - uses: SublimeText/UnitTesting/actions/run-tests@v1
        with:
          coverage: true
          extra-packages: |
            A File Icon:SublimeText/AFileIcon
      - uses: codecov/codecov-action@v4

Check this for further examples.

Options

Package Configuration

UnitTesting is primarily configured via a unittesting.json file in the package root directory.

{
  "verbosity": 1,
  "coverage": true
}

Build System Configuration

Options provided via build system configuration override unittesting.json.

{
  "target": "unit_testing",
  "package": "$package_name",
  "verbosity": 2,
  "coverage": true
}

Command Arguments

Options passed as arguments to unit_testing command override unittesting.json.

window.run_command("unit_testing", {"package": "$package_name", "coverage": False})

Available Options

  • tests_dir: the name of the directory containing the tests (default: "tests")
  • pattern: the pattern used to discover tests (default: "test*.py")
  • deferred: whether to use the deferred test runner (default: true)
  • condition_timeout: default timeout in ms for callables invoked via yield (default: 4000)
  • failfast: stop early if a test fails (default: false)
  • output: name of a test output file to write to instead of showing results in the panel (default: null)
  • verbosity: verbosity level (default: 2)
  • capture_console: capture stdout and stderr in the test output (default: false)
  • reload_package_on_testing: reload the package before testing, which increases the coverage rate (default: true)
  • coverage: track test case coverage (default: false)
  • coverage_on_worker_thread: run coverage on the worker thread (experimental) (default: false)
  • generate_html_report: generate an HTML report for coverage (default: false)
  • generate_xml_report: generate an XML report for coverage (default: false)
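For example, a unittesting.json combining several of these options might look like this (the values shown are illustrative):

{
  "tests_dir": "tests",
  "pattern": "test*.py",
  "deferred": true,
  "failfast": false,
  "verbosity": 2,
  "capture_console": true,
  "coverage": true,
  "generate_xml_report": true
}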

Writing Unittests

UnitTesting is based on Python's unittest library. Any valid unittest test case is allowed.

Example:

tests/test_myunit.py

from unittest import TestCase

class MyTestCase(TestCase):

    def test_something(self):
        self.assertTrue(True)

Deferred testing

Tests can be written as deferrable test cases to test the results of asynchronous or long-running Sublime commands. Such tests yield control to the Sublime Text runtime and resume execution at a later point.

It is a kind of cooperative multitasking, similar to what asyncio provides, but with a home-grown DeferringTextTestRunner acting as the event loop.

The idea was inspired by Plugin UnitTest Harness.

DeferrableTestCase is used to write the test cases. They are executed by the DeferringTextTestRunner, which accepts not only regular test functions but also generators. If the test function is a generator, the runner does the following:

  • If the yielded object is a callable, the runner evaluates it and checks its return value. If the result is not None, the runner continues the generator; otherwise it waits until the condition is met, subject to a default timeout of 4 s. The result of the callable can also be retrieved from the yield statement. The yielded object can also be a dictionary of the form {"condition": callable, "timeout": timeout} to specify the timeout in ms.

  • If the yielded object is an integer, say x, the runner continues the generator after x ms.

  • yield AWAIT_WORKER yields to a task on the worker thread.

  • Otherwise, a bare yield yields to a task on the main thread.

Example:

import sublime
from unittesting import DeferrableTestCase


class TestCondition(DeferrableTestCase):

    def test_condition(self):
        x = []

        def append():
            x.append(1)

        def condition():
            return len(x) == 1

        sublime.set_timeout(append, 100)

        # wait until `condition()` is true
        yield condition

        self.assertEqual(x[0], 1)

See also tests/test_defer.py.
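Building on the example above, a yielded condition can also carry an explicit timeout, and an integer can be yielded to pause for a fixed number of milliseconds. The following is a sketch; the delay values are arbitrary:

import sublime
from unittesting import DeferrableTestCase


class TestConditionWithTimeout(DeferrableTestCase):

    def test_condition_with_timeout(self):
        x = []

        # append to x on the main thread after 100 ms
        sublime.set_timeout(lambda: x.append(1), 100)

        # yield an integer: continue this test after 200 ms
        yield 200

        # yield a condition with an explicit timeout of 500 ms
        yield {"condition": lambda: len(x) == 1, "timeout": 500}

        self.assertEqual(x, [1])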

Helper TestCases

UnitTesting provides some helper test case classes, which perform common tasks such as overriding preferences, setting up views, etc.

  • DeferrableViewTestCase
  • OverridePreferencesTestCase
  • TempDirectoryTestCase
  • ViewTestCase

Usage and some examples are documented in docstrings, which are displayed as hover popups by LSP clients such as LSP-pyright.
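As a rough illustration of the kind of setup these helpers take care of, the sketch below creates and closes a scratch view by hand using only the plain Sublime Text API; it is an illustration only and does not reflect the helpers' actual API:

import sublime
from unittesting import DeferrableTestCase


class MyViewTest(DeferrableTestCase):

    def setUp(self):
        # create a throw-away scratch view for the test
        self.view = sublime.active_window().new_file()
        self.view.set_scratch(True)

    def tearDown(self):
        # close the view again without a save prompt
        if self.view:
            self.view.close()

    def test_insert(self):
        self.view.run_command("insert", {"characters": "hello"})
        content = self.view.substr(sublime.Region(0, self.view.size()))
        self.assertEqual(content, "hello")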

Credits

Thanks to guillermooo and philippotto for their early efforts on AppVeyor and Travis CI macOS support (though these services are no longer supported).

Issues

Testing against dev builds

It's currently not possible to test against the dev builds. This would require:

  1. Downloading and installing the dev build (easy on Windows).
  2. Somehow providing an interface to place a valid license in the correct location to allow execution of the dev build.

Travis CI has a kind of "secret value setting" thing where you can provide secret information that is not part of .travis.yml, but I don't exactly remember how it works, nor have I ever used it.

Question about coverage

Thanks for making sublime-integrated testing so easy!

I just added a very simple test case to LSP in sublimelsp/LSP@9e0c86d, and got high coverage numbers when I ran "Test Current Package with Coverage":

test_conversion (test_protocol.PointTests) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.015s

OK

Name                            Stmts   Miss  Cover
---------------------------------------------------
boot.py                            19      0   100%
plugin/__init__.py                  0      0   100%
plugin/code_actions.py             52     33    37%
plugin/completion.py              161    107    34%
plugin/configuration.py            91     62    32%
plugin/core/__init__.py             0      0   100%
plugin/core/clients.py             74     43    42%
plugin/core/configurations.py      68      9    87%
plugin/core/diagnostics.py         65     44    32%
plugin/core/documents.py          148    100    32%
plugin/core/edit.py                39     27    31%
plugin/core/events.py              19      6    68%
plugin/core/logging.py             14      4    71%
plugin/core/main.py               131     42    68%
plugin/core/panels.py              18     10    44%
plugin/core/protocol.py           187     47    75%
plugin/core/rpc.py                145     75    48%
plugin/core/settings.py           116     12    90%
plugin/core/url.py                  8      1    88%
plugin/core/workspace.py           46     26    43%
plugin/definition.py               35     22    37%
plugin/diagnostics.py             135     88    35%
plugin/formatting.py               37     25    32%
plugin/hover.py                    72     48    33%
plugin/references.py               61     42    31%
plugin/rename.py                   31     20    35%
plugin/signature_help.py          106     73    31%
plugin/symbols.py                  35     21    40%
---------------------------------------------------
TOTAL                            1913    987    48%

I would like to configure my package / UnitTesting so that only coverage of plugin/core/protocol.py is reported (so I don't get "free" coverage from module initialisation / package reloading). Is this possible?

same level of support for appveyor.com as for Travis-CI?

Would it be possible to make configuration for Windows users as simple as for Linux and Mac users?

I think you recently abstracted out the Travis-CI bootstrapping into a script living in UnitTesting. Could we do the same for Appveyor (or any other Windows-friendly CI service)?

Thanks!

Output saved to file if unittesting.json is missing

If a plugin doesn't have a unittesting.json, output is redirected to a file; if unittesting.json is present (even without any meaningful settings), output is shown in the panel.

It took me some time to understand why I didn't see any test results. I think we should output test results to the panel by default even if unittesting.json is missing.

Appveyor stopped working

It seems that the people at AppVeyor have changed something. I tried running a recent build for your example project and it fails with the following output:

Build started
git clone -q --branch=master https://github.com/niosus/UnitTesting-example.git C:\projects\unittesting-example
git checkout -qf bb61670216685cdcb5ebb6cc5312b0cb3c72ee6c
Running Install scripts
start-filedownload "https://raw.githubusercontent.com/randy3k/UnitTesting/master/sbin/appveyor.ps1"
Downloading appveyor.ps1 (2,094 bytes)...100%

Then it hangs and eventually fails the build after the timeout. Any ideas on why that may be the case?

What does the verbosity setting value mean?

I could not find any documentation for it. Is there some?

I experimented a little with it:

  1. Setting it to 2 displays each executed test's name + ok.
  2. Setting it to 1 displays output like the default Python unit test runner.
  3. Setting it to 0 seems to display only the overall progress.

Add helper methods to improve working with views

I use your package in multiple Sublime projects and always end up writing a helper command in my package to be able to create new views and edit them.
The problem is that this needs to be encapsulated in a sublime_plugin.TextCommand so that there is a valid handle to an edit object.
Maybe your UnitTesting package could provide the same functionality out of the box? This would have the benefit that the testing code is in one place and that the actual package code doesn't get polluted with testing code.

To clarify what I mean, here is some sample code I would use in my packages:

import sublime
import sublime_plugin


class TestMultiEditUtilsCommand(sublime_plugin.TextCommand):

    def run(self, edit, commandName, argTuple):
        getattr(self, commandName)(self.view, edit, argTuple)

    def insertSomeText(self, view, edit, argTuple):
        view.insert(edit, 0, argTuple[0])

    def selectText(self, view, edit, regions):
        if regions:
            view.sel().clear()
            for regionTuple in regions:
                view.sel().add(sublime.Region(regionTuple[0], regionTuple[1]))
        else:
            view.run_command('select_all')

        SelectionListener().on_selection_modified(view)

In my test*.py I have the following method:

    def runCommand(self, commandName, argTuple = ()):

        self.view.run_command("test_multi_edit_utils", dict(commandName = commandName, argTuple = list(argTuple)))

which can be used like this:

        self.runCommand("insertSomeText", ["testString"])
        self.runCommand("selectText")

Sorry for the numerous code snippets. I hope I could convey the idea.

AppVeyor is using Python 2.7.14 instead of Python 3.3

I just do not know how it can use Python 2. I use almost the same appveyor.yml you suggest:

environment:
    # The package name
    PACKAGE: "WrapPlus"
    SUBLIME_TEXT_VERSION : "3"


install:
    - ps: appveyor DownloadFile "https://raw.githubusercontent.com/SublimeText/UnitTesting/master/sbin/appveyor.ps1"
    - ps: .\appveyor.ps1 "bootstrap" -verbose
    - ps: pip install coverage codacy-coverage
    # install Package Control
    # - ps: .\appveyor.ps1 "install_package_control" -verbose


build: off


test_script:
    # run tests with test coverage report
    - ps: .\appveyor.ps1 "run_tests" -coverage -verbose
    - echo


on_finish:
    - "SET PYTHON=C:\Python33"
    - "SET PATH=%PYTHON%;%PYTHON%\\Scripts;%PATH%"
    - pip install codecov
    - codecov
    - python --version
    - coverage xml -o coverage.xml
    - python-codacy-coverage

https://github.com/evandrocoan/WrapPlus/blob/5680b39c0b9eebf44397a83341cb314764b446d0/appveyor.yml


After adding that python --version call, we see that the Python version is 2.7.14 on the AppVeyor build:

...
==> Uploading
    .url https://codecov.io
    .query service=appveyor&package=py2.0.14&job=evandrocoan%2Fwrapplus%2F1.0.22&build=2snio61i77gcab3m&branch=master&commit=5680b39c0b9eebf44397a83341cb314764b446d0&slug=evandrocoan%2FWrapPlus
    Pinging Codecov...
    Uploading to S3...
    https://codecov.io/github/evandrocoan/WrapPlus/commit/5680b39c0b9eebf44397a83341cb314764b446d0
python --version
Python 2.7.14
coverage xml -o coverage.xml
Couldn't parse 'C:\projects\wrapplus\wrap_plus.py' as Python source: 'invalid syntax' at line 987
Command exited with code 1
python-codacy-coverage
2018-01-25 23:51:18,059 - ERROR - Coverage report coverage.xml not found.
Command exited with code 1

https://ci.appveyor.com/project/evandrocoan/wrapplus/build/1.0.22

logging output during tests

@niosus wrote in randy3k/UnitTesting-example#3

I would like to see debug output while my tests run. Currently I can only see whether the tests failed and the relevant values in case they failed.

Is it possible to always show debug output? Either output from print statements or, preferably, from logging?

Thanks!

generalizing travis.sh for linux/macos-based ci hosts

There are more SaaS CI services today than a few years ago. While each CI environment I have seen so far can be adapted so that the travis.sh script (and also appveyor.ps1) can be used with it, it should be possible to further decouple the travis.sh script from the Travis-CI specifics.

specify output file for coverage

Please tell me if I missed it, but it seems that there is no way to define an output file for the tests. I am using Codacy for automated code reviews on my project and it seems to require coverage.xml as the output of the tests, while your plugin produces only a .coverage file, if I understand everything correctly. Would it be possible to configure this?

SystemError: Parent module '' not loaded, cannot perform relative import

I imported another test file into one (main) test file like this:

from .text_extraction_unit_tests import PrefixStrippingViewUnitTests
wrap_plus_module = sys.modules["Wrap Plus.wrap_plus"]

def run_unit_tests(unit_tests_to_run=[]):
    runner = unittest.TextTestRunner()

    classes = \
    [
        PrefixStrippingViewUnitTests,
        SemanticLineWrapUnitTests,
        LineBalancingUnitTests,
    ]

    if len( unit_tests_to_run ) < 1:
        # Comment all the tests names on this list, to run all Unit Tests
        unit_tests_to_run = \
        [
            # "test_semantic_line_wrap_line_starting_with_comment",
            # "test_split_lines_with_trailing_new_line",
            # "test_split_lines_without_trailing_new_line",
            # "test_balance_characters_between_line_wraps_with_trailing_new_line",
            # "test_balance_characters_between_line_wraps_without_trailing_new_line",
            # "test_balance_characters_between_line_wraps_ending_with_long_word",
        ]

    runner.run( suite( classes, unit_tests_to_run ) )

def suite(classes, unit_tests_to_run):
    """
        Problem with sys.argv[1] when unittest module is in a script
        https://stackoverflow.com/questions/2812218/problem-with-sys-argv1-when-unittest-module

        Is there a way to loop through and execute all of the functions in a Python class?
        https://stackoverflow.com/questions/2597827/is-there-a-way-to-loop-through-and-execute

        looping over all member variables of a class in python
        https://stackoverflow.com/questions/1398022/looping-over-all-member-variables-of-a-class
    """
    suite = unittest.TestSuite()
    unit_tests_to_run_count = len( unit_tests_to_run )

    for _class in classes:
        _object = _class()

        for function_name in dir( _object ):

            if function_name.lower().startswith( "test" ):

                if unit_tests_to_run_count > 0 \
                        and function_name not in unit_tests_to_run:

                    continue

                suite.addTest( _class( function_name ) )

    return suite

Using this, I can run the unit tests from my loader file inside the file Wrap Plus.wrap_plus:

def run_tests():
    """
        How do I unload (reload) a Python module?
        https://stackoverflow.com/questions/437589/how-do-i-unload-reload-a-python-module
    """
    print( "\n\n" )
    sublime_plugin.reload_plugin( "Wrap Plus.tests.text_extraction_unit_tests" )
    sublime_plugin.reload_plugin( "Wrap Plus.tests.semantic_linefeed_unit_tests" )
    sublime_plugin.reload_plugin( "Wrap Plus.tests.semantic_linefeed_manual_tests" )

    from .tests import semantic_linefeed_unit_tests

    # Comment all the tests names on this list, to run all Unit Tests
    unit_tests_to_run = \
    [
        # "test_semantic_line_wrap_ending_with_comma_list",
        # "test_is_command_separated_list_5_items",
        # "test_is_command_separated_list_4_items",
        # "test_is_command_separated_list_3_items",
        # "test_is_command_separated_list_2_items",
    ]

    semantic_linefeed_unit_tests.run_unit_tests( unit_tests_to_run )

def plugin_loaded():
    """
        Running single test from unittest.TestCase via command line
        https://stackoverflow.com/questions/15971735/running-single-test-from-unittest-testcase
    """
    run_tests()

And it runs successfully when I reload the Wrap Plus.wrap_plus file:

reloading plugin Wrap Plus.tests.text_extraction_unit_tests
reloading plugin Wrap Plus.tests.semantic_linefeed_unit_tests
reloading plugin Wrap Plus.tests.semantic_linefeed_manual_tests
...................................................
----------------------------------------------------------------------
Ran 51 tests in 0.601s

OK
reloading plugin Wrap Plus.wrap_plus

But when I use the command UnitTesting: Test Current Package, I get this error:

semantic_linefeed_unit_tests (unittest.loader.ModuleImportFailure) ... ERROR
test_double_quotes_wrappting (text_extraction_unit_tests.PrefixStrippingViewUnitTests) ... ok
test_double_quotes_wrappting_without_leading_whitespace (text_extraction_unit_tests.PrefixStrippingViewUnitTests) ... ok
test_triple_quotes_comment (text_extraction_unit_tests.PrefixStrippingViewUnitTests) ... ok
test_triple_quotes_wrappting (text_extraction_unit_tests.PrefixStrippingViewUnitTests) ... ok
test_triple_quotes_wrappting_without_leading_whitespace (text_extraction_unit_tests.PrefixStrippingViewUnitTests) ... ok

======================================================================
ERROR: semantic_linefeed_unit_tests (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "./python3.3/unittest/case.py", line 384, in _executeTestPart
  File "./python3.3/unittest/loader.py", line 32, in testFailure
ImportError: Failed to import test module: semantic_linefeed_unit_tests
Traceback (most recent call last):
  File "./python3.3/unittest/loader.py", line 261, in _find_tests
  File "./python3.3/unittest/loader.py", line 239, in _get_module_from_name
  File "D:\SublimeText\Data\Packages\Wrap Plus\tests\semantic_linefeed_unit_tests.py", line 11, in <module>
    from .text_extraction_unit_tests import PrefixStrippingViewUnitTests
SystemError: Parent module '' not loaded, cannot perform relative import


----------------------------------------------------------------------
Ran 6 tests in 0.544s

FAILED (errors=1)

UnitTesting: Done.

PermissionError exception makes UnitTesting freeze

In the setUpClass class method of a fixture of mine I use os.makedirs in some places. When there's no permission to make dirs (in / for example), a PermissionError is thrown, but this freezes UnitTesting. Maybe account for this case?

alternative test result display

Is it possible to redirect the test result to a file or something, instead of using the message panel?

The problem is, I want to run a command in a test which displays things on the message panel, thus hiding the test result from me...

generate coverage

I am not sure you should be the addressee here, but it would be cool to be able to generate test coverage alongside running the tests.

cannot run on Bitbucket Pipelines

I've modified the scripts for Travis and got as far as executing RunTests, but it invariably results in a timeout ("Sublime Text is not responding"). I have no idea why ST is getting stuck. BBPL runs Python 2.7. I've also tried running 0 tests and even so the timeout triggers.


BBPL uses Docker. I have little experience with Docker myself, but it may be possible to create a testing environment as a Docker image and use that from BBPL.

Async testing

I'm not sure if it's an issue with this plugin or with the Python unittest module itself, but I believe async testing is not supported? If so, is that something that could be supported?

I suppose some kind of sleeping workaround can be used sometimes but it's quite terrible.

Feel free to close if it's not an issue with the plugin itself.

AttributeError: 'list' object has no attribute 'get'

I have my tests in the tests folder and they all start with test*. My unittesting.json is:

{
    "tests_dir" : "tests",
    "pattern" : "test*.py",
    "async": true,
    "deferred": false,
    "verbosity": 5,
    "output": "<panel>",
}

If I try to test my package with these settings I get an error:

Traceback (most recent call last):
  File "C:\Program Files\Sublime Text 3\sublime_plugin.py", line 536, in run_
    return self.run(**args)
  File "unittesting.test_runner in C:\Users\igor\AppData\Roaming\Sublime Text 3\Installed Packages\UnitTesting.sublime-package", line 163, in run
  File "unittesting.test_runner in C:\Users\igor\AppData\Roaming\Sublime Text 3\Installed Packages\UnitTesting.sublime-package", line 97, in load_settings
AttributeError: 'list' object has no attribute 'get'

If I remove the trailing comma at the end of the unittesting.json, everything returns to normal and the tests are run. This happens for me both on Windows and Linux.

Not uploading coverage reports when there are tests failures/errors

Even with failures, the coverage is generated, but its results are not uploaded afterwards. The repository is: https://github.com/evandrocoan/PackagesManager

It should always upload the results, because there is nothing wrong with some tests failing. It is expected that some tests are going to fail, for some time or even forever, but just because they fail does not mean they should be removed from the application or block something else.

----------------------------------------------------------------------
Ran 60 tests in 8.894s
FAILED (failures=19)
Name                                                            Stmts   Miss  Cover
-----------------------------------------------------------------------------------
1_reloader.py                                                      61     33    46%
2_bootstrap.py                                                    268    165    38%
PackagesManager.py                                                 48     21    56%
...
packages_manager/upgraders/hg_upgrader.py                          49     38    22%
packages_manager/upgraders/vcs_upgrader.py                          7      4    43%
packages_manager/versions.py                                       61      9    85%
-----------------------------------------------------------------------------------
TOTAL                                                           11537   7185    38%
UnitTesting: Done.
The command "sh travis.sh run_tests --coverage" exited with 1.
Done. Your build exited with 1.

"capture_console": true crashes sublime text

Title says it all. If I add this line to the end of my unittesting.json Sublime Text hangs upon running the tests. Both on Windows and Linux. The strange part is that it still runs all the tests on Travis Linux machine and Appveyor Windows machine, so it may be my setup. Any ideas?

It was working until a couple of weeks ago.

Travis OSX build is halting

The Linux build is going OK, but the OSX one is always halting with this:

Wait for Sublime Text response
......
Start to read output...
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received
The build has been terminated

1. https://travis-ci.org/evandrocoan/PackagesManager/jobs/325856633

The package PackagesManager is just Package Control renamed without the space in the name, but set up to run the unit tests with UnitTesting.

I have just set up another package, https://travis-ci.org/evandrocoan/WrapPlus, and its OSX build works just fine. Perhaps this issue with PackagesManager is related to:

  1. #57 Failures on OSX

[Bug] command: unit_testing_run_scheduler run not one time

1. Summary

If I start Sublime Text, command: unit_testing_run_scheduler does not run just once for me. The more packages I have installed, the more times I see command: unit_testing_run_scheduler in the console.

2. Settings

For example, I have 400+ packages for Sublime Text.

3. Steps to reproduce

I reproduce the problem in a version of Sublime Text without plugins and user settings.

I install UnitTesting → I restart Sublime Text → I open Sublime Text console.

4. Expected behavior

command: unit_testing_run_scheduler

5. Actual behavior

command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler
command: unit_testing_run_scheduler

If I disable some of my packages, I see command: unit_testing_run_scheduler fewer times, but still more than once. If I disable still more packages, I see it even fewer times, and so on.

6. Environment

Operating system and version:
Windows 10 Enterprise LTSB 64-bit EN
Sublime Text:
Build 3126
Package:
The latest stable version of UnitTesting for Sublime Text 3

Thanks.

load only default preferences

Thanks for a great plugin! It is extremely useful for Sublime Text plugin development! 👍

My question is: do you think it is possible to make tests load only the default settings, so NOT the user settings? It seems to make sense, as then we can test against the initial state of the plugin. I gave it some thought, but this seems to be internal to Sublime. Any ideas? Do you think it might be useful?

Appveyor run_syntax_tests and run_color_scheme_tests failing

run_syntax_tests and run_color_scheme_tests are failing on AppVeyor.

For example with ColorSchemeUnit: https://github.com/gerardroche/sublime-color-scheme-unit/tree/develop

Configuration:

Note I am using my development version of UnitTesting, but this is not the issue.

environment:
    global:
        PACKAGE: "ColorSchemeUnit"
        SUBLIME_TEXT_VERSION: "3"

install:
    - ps: appveyor DownloadFile "https://raw.githubusercontent.com/gerardroche/UnitTesting/develop/sbin/appveyor.ps1"
    - ps: ./appveyor.ps1 "bootstrap" -verbose

build: off

test_script:
    - ps: ./appveyor.ps1 "run_tests" -coverage -verbose
    - ps: ./appveyor.ps1 "run_syntax_tests"
    - ps: ./appveyor.ps1 "run_color_scheme_tests"
Build output:

git clone -q --branch=develop https://github.com/gerardroche/sublime-color-scheme-unit.git C:\projects\sublime-color-scheme-unit
git checkout -qf b5cceccedc4344f1e7acec3632276ec1b0b404c7
Running Install scripts
appveyor DownloadFile "https://raw.githubusercontent.com/gerardroche/UnitTesting/develop/sbin/appveyor.ps1"
Downloading appveyor.ps1 (4,706 bytes)...100%
./appveyor.ps1 "bootstrap" -verbose
VERBOSE: copy the package to sublime text Packages directory
VERBOSE: download UnitTesting tag: 1.1.0
VERBOSE: 09116efe4e1f4046391051cb3944f6bfd3ef624f
VERBOSE: 
VERBOSE: download sublime-coverage tag: 1.0.0
VERBOSE: bdda8348acf163fbb2c7e276d7d5415c2d0fa4bb
VERBOSE: 
VERBOSE: installing sublime text 3
VERBOSE: GET http://www.sublimetext.com/3 with 0-byte payload
VERBOSE: received 41643-byte response of content type text/html; charset=utf-8
VERBOSE: downloading https://download.sublimetext.com/Sublime Text Build 3143 x64.zip
./appveyor.ps1 "run_tests" -coverage -verbose
VERBOSE: Schedule:
VERBOSE:   output: C:\st\Data\Packages\User\UnitTesting\ColorSchemeUnit\result
VERBOSE:   syntax_test: False
VERBOSE:   coverage: True
VERBOSE:   color_scheme_test: False
VERBOSE:   package: ColorSchemeUnit
..
VERBOSE: start to read output
test_bg (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_bg_build (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fg (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fg_bg (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fg_bg_fs (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fg_bg_fs_build (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fg_build (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fs_bold (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fs_bold_italic (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_fs_italic (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_invalid (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_repeats (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_valid_comment_markers (test_color_test_assertion_params_pattern.TestColorTestAssertionParamsPattern) ... ok
test_allow_skipping_syntax_if_not_found (test_color_test_params_pattern.TestColorTestParamsPattern) ... ok
test_allows_syntax_to_be_auto_detected (test_color_test_params_pattern.TestColorTestParamsPattern) ... ok
test_doesnt_include_trailing_whitespace (test_color_test_params_pattern.TestColorTestParamsPattern) ... ok
test_invalid (test_color_test_params_pattern.TestColorTestParamsPattern) ... ok
test_valid (test_color_test_params_pattern.TestColorTestParamsPattern) ... ok
test_valid_using_comments (test_color_test_params_pattern.TestColorTestParamsPattern) ... ok
test_generate_assertions (test_generate_assertions.TestGenerateAssertions) ... ok
test_generate_assertions_that_ends_in_blanks (test_generate_assertions.TestGenerateAssertions) ... ok
test_generate_assertions_with_blanks (test_generate_assertions.TestGenerateAssertions) ... ok
test_generate_assertions_with_comment_start (test_generate_assertions.TestGenerateAssertions) ... ok
test_generate_repeat_assertions (test_generate_assertions.TestGenerateAssertions) ... ok
test_invalid (test_is_valid_color_scheme_test_file_name.TestIsValidColorSchemeTestFileName) ... ok
test_valid (test_is_valid_color_scheme_test_file_name.TestIsValidColorSchemeTestFileName) ... ok
----------------------------------------------------------------------
Ran 26 tests in 1.891s
OK
Name                  Stmts   Miss  Cover
-----------------------------------------
lib\color_scheme.py      31     24    23%
lib\coverage.py          89     81     9%
lib\generator.py         58     30    48%
lib\result.py            87     74    15%
lib\runner.py           193    165    15%
lib\test.py              68     43    37%
plugin.py                64     36    44%
-----------------------------------------
TOTAL                   590    453    23%
UnitTesting: Done.
./appveyor.ps1 "run_syntax_tests"
VERBOSE: Schedule:
VERBOSE:   output: C:\st\Data\Packages\User\UnitTesting\ColorSchemeUnit\result
VERBOSE:   syntax_test: True
VERBOSE:   coverage: False
VERBOSE:   color_scheme_test: False
VERBOSE:   package: ColorSchemeUnit
.............................................................
Timeout: Sublime Text is not responding.

Travis CI errors

I'm getting CI errors on Travis, what changed?

...
sublime_text_3/Icon/48x48/sublime-text.png
sublime_text_3/Icon/256x256/
sublime_text_3/Icon/256x256/sublime-text.png
sublime_text_3/sublime.py
sublime_text_3/sublime_plugin.py

The command "sh travis.sh bootstrap" failed and exited with 1 during .

Your build has been stopped.

https://travis-ci.org/NeoVintageous/NeoVintageous/builds/320570276

https://api.travis-ci.org/v3/job/320570277/log.txt

add text-based spec format for command tests

In Vintageous, I'm using a plain text format to declare tests for ST commands. This allows me to write the tests much faster than in Python code.

Long-term, I'd like to 'outsource' all the testing to a specialized package like UnitTesting. Given that the text-based approach is generally useful, what do you think about including a similar feature in UT?

Here's an example file:

https://github.com/guillermooo/Vintageous/blob/master/tests/commands/vi_x-internal-normal-mode.cmd-test

My current implementation is very simple, but it could be extended with the following features:

  • check state of view/window settings
  • better traceback for failing .cmd-test
  • teleport user to failing .cmd-test
  • options to toggle checking of selections before/after and possibly other options
  • autogeneration of .cmd-test stubs for all *Command classes found in the project
  • ...

How to run selected tests?

For example:

.
|__ tests/
    |___a_tests.py
    |___b_tests.py

a_tests.py

class Tests():
    def test_a_test(self):
        print("a")

    def test_b_test(self):
        print("b")

b_tests.py

class Tests():
    def test_c_test(self):
        print("a")

    def test_d_test(self):
        print("b")

Can I set a setting like:

{
    "tests_dir" : "package_manager",
    "pattern" : "*tests.py",
    "async": false,
    "deferred": true,
    "verbosity": 1,
    "capture_console": false,
    "reload_package_on_testing": true,
    "show_reload_progress": true,
    "output": null,
    "selected_test": 
    [
         "test_b_test",
         "test_c_test",
    ]
}

If selected_tests is empty or non-existent, then all tests are run. Otherwise, only the listed tests are run.

testing against multiple builds of ST

I think this is possible now by modifying the existing scripts, but perhaps it's worth including such functionality by default? I could take a look at it, but some guidance would be appreciated.

Failures on OSX

I've been getting failures on OSX. They seem like random failures because they only appear on some branches, even when the checkouts are identical. I think I've narrowed it down to an issue with the "close_windows_when_empty" setting, but I'm not sure yet what the best way to solve it is or why it's happening.

In build 3126, if close_windows_when_empty is true, then Sublime Text won't open. It seems like ST is crashing, but it's not: it's just opening and then instantly closing due to a bug with that setting.

The install_sublime_text.sh script anticipates this:

https://github.com/randy3k/UnitTesting/blob/1f97a12a8dc0a1a920a4e644a5be0769e4c8e010/sbin/install_sublime_text.sh#L90

if [ ! -f "$STP/User/Preferences.sublime-settings" ]; then
    echo creating sublime package directory
    mkdir -p "$STP/User"
    # make sure a new window will be opened
    echo '{"close_windows_when_empty": false }' > "$STP/User/Preferences.sublime-settings"
fi

However, on some branches on OSX it looks like that settings file isn't being created, which causes failures. It seems to be an issue on older branches, which makes me think Travis is caching that settings file, and the code above won't create the settings if the preferences file already exists.

For example:

On this master branch there are failures: master

When I create a new branch off the master branch it runs fine: upgrades

Combing through the log file, the master branch is missing the message "creating sublime package directory", which means the code above is not being executed, so I'm guessing it's failing because "close_windows_when_empty" isn't being set to false.

This is at least my initial guess. I can't see anything else it could be.
