├── .gitignore
├── .travis.yml
├── CHANGELOG.md
├── LICENSE
├── README.md
├── draft
│   ├── test_importerror.py
│   ├── test_overlay.py
│   └── test_stdout_capture.py
├── make
├── pytest.ini
├── pytui
│   ├── __init__.py
│   ├── common.py
│   ├── logging_tools.py
│   ├── plugin.py
│   ├── runner.py
│   ├── settings.py
│   └── ui.py
├── setup.cfg
├── setup.py
├── test_projects
│   ├── test_module_a
│   │   ├── __init__.py
│   │   ├── test_feat_1.py
│   │   ├── test_import_error_in_test.py
│   │   └── test_pytest_options.py
│   ├── test_module_b
│   │   ├── __init__.py
│   │   ├── test_feat_3.py
│   │   └── test_feat_4.py
│   └── test_module_c
│       ├── test_import_error_in_module.py
│       └── test_syntax_error.py
├── tests
│   ├── __init__.py
│   ├── test_common.py
│   └── test_runner.py
└── tox.ini
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 | env/
3 | env3/
4 | *.log
5 | draft/*
6 | dist/
7 | pytest_ui.egg-info/
8 | .tox
9 | .eggs
10 | test_projects
11 | build/
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 | python:
3 | - "2.7"
4 | - "3.4"
5 | - "3.5"
6 | - "3.6"
7 | - "3.7"
8 | - "3.8"
9 | - "3.8-dev" # 3.8 development branch
10 | - "nightly" # nightly build
11 | # command to install dependencies
12 | install:
13 | - pip install .
14 | # command to run tests
15 | script:
16 | - pytest
17 | env: PYTHONPATH=$PYTHONPATH:$TRAVIS_BUILD_DIR/pytui
18 |
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | version 0.5
2 | -----------
3 | - run single test
4 | - update ui
5 |
6 |
7 | version 0.4
8 | -----------
9 |
10 | - drop beta from version number
11 | - fix fuzzy and non-fuzzy boundary filter issue
12 |
13 |
14 | version 0.3.6b0
15 | ---------------
16 |
17 | - pytest-compatible interface.
18 |
19 |
20 | version 0.3.5b0
21 | ---------------
22 |
23 | - fix bug related to disabled terminal plugin
24 | - add non-fuzzy syntax in filter for exact matching
25 | - increase pytest verbosity for detailed assert diffs.
26 |
27 |
28 | version 0.3.4b0
29 | ---------------
30 |
31 | - add --debug/--no-debug option to CLI command
32 | - disable debug logging by default, enable with --debug
33 | - fix log garbage on screen in the debug mode, clear the screen after pytest run
34 |
35 |
36 | version 0.3.3b0
37 | ---------------
38 |
39 | - Catch pytest crashes and report them in a popup.
40 |
41 |
42 | version 0.3.2b0
43 | ---------------
44 |
45 | - Improve error reporting. Show a popup dialog when collect/testrun errors occur (based on pytest exit code)
46 | - Unfreeze the dependencies in setup.py
47 |
48 |
49 | version 0.3.1b0
50 | ---------------
51 |
52 | - Add proper handling for collect-time errors (like module level import errors or syntax errors)
53 |
54 |
55 | version 0.3b
56 | ------------
57 |
58 | - Make source python2/3 compatible
59 | - Add exact dependency versions into setup.py
60 | - Workaround the problem with cyclic logging of stdout/printing stdout logs to stdout caused by pytest/capture or pytest/logging
61 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Martin
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [](https://travis-ci.com/martinsmid/pytest-ui)
2 |
3 | # pytest-ui
4 | A text user interface for running Python tests. Still in _beta_.
5 |
6 | # installation
7 | - install using pip
8 | `pip install pytest-ui`
9 | - provides the cli command `pytui`
10 |
11 | # usage
12 | ```
13 | $ pytui --help
14 | Usage: pytui [OPTIONS] [PATH]
15 |
16 | Options:
17 | --debug / --no-debug Enable debug logging [default: False]
18 | --help Show this message and exit.
19 | ```
20 | - pypi address
21 | https://pypi.python.org/pypi/pytest-ui
22 |
23 | # keyboard controls
24 | - r, F5 - run tests (last failed or first run, using filter)
25 | - R, Ctrl + F5 - run all tests (using filter)
26 | - s - run single test under cursor
27 | - / - focus filter input
28 | - Ctrl + f - clear filter input and focus it
29 | - F4 - toggle show only failed tests
30 | - Alt + Up/Down - navigate between failed tests (skipping passed)
31 | - q - close window, quit (in main window)
32 |
33 | ## filter input
34 | By default, the filter input is in fuzzy mode. Exact matching can be requested with hash signs (`#`):
35 | the text between a pair of them is matched exactly. For example,
36 |
37 | `abc#match#def` will match fuzzy "abc", then exactly "match", and then again fuzzy "def".
38 |
39 | # main goals
40 | The goal of this project is to ease the testing process by
41 | - [x] selecting tests to run using fuzzy filter
42 | - [x] viewing failed tests stacktrace/output/log while the test suite is still running
43 | - [x] rerunning failed tests
44 | - [ ] running a test with debugger
45 | - [ ] usage as pytest plugin (for custom pytest scripts)
46 |
--------------------------------------------------------------------------------
/draft/test_importerror.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | from builtins import object
3 | import sys
4 | import pytest
5 | import unittest
6 | import logging
7 | import logging_tools
8 |
9 |
10 | logger = logging.getLogger(__name__)
11 |
12 |
13 | class PutrPytestPlugin(object):
14 | def pytest_runtest_protocol(self, item, nextitem):
15 | print('pytest_runtest_protocol %s %s' % (item, nextitem))
16 |
17 | def pytest_runtest_makereport(self, item, call):
18 | print('pytest_runtest_makereport %s %s' % (item, call))
19 |
20 | # @pytest.hookimpl(hookwrapper=True)
21 | # def pytest_runtest_makereport(self, item, call):
22 | # # logger.debug('pytest_runtest_makereport %s %s', item, call)
23 | # outcome = yield
24 | # # logger.debug('outcome %s', outcome)
25 | # result = outcome.get_result()
26 | # logger.debug('result %s', result)
27 | # logger.debug('result.capstdout %s', result.capstdout)
28 | # logger.debug('result.capstderr %s', result.capstderr)
29 |
30 | # if call.when == 'call':
31 | # self.runner.set_test_result(self.runner.get_test_id(item), call)
32 |
33 | # logger.debug('pytest_runtest_makereport %s %s', item, call)
34 |
35 | def pytest_itemcollected(self, item):
36 | print('pytest_itemcollected %s' % item)
37 |
38 | def pytest_collectstart(self, collector):
39 |         print('pytest_collectstart %s' % collector)
40 |
41 | def pytest_collectreport(self, report):
42 | pass
43 |
44 | def pytest_runtest_logreport(self, report):
45 | print('pytest_runtest_logreport')
46 | logger.debug('report %s', report)
47 | logger.debug('report.capstdout %s', report.capstdout)
48 | logger.debug('report.capstderr %s', report.capstderr)
49 |
50 |
51 | if __name__ == '__main__':
52 | logging_tools.configure()
53 |
54 | print(sys.path)
55 | pytest.main(['-p', 'no:terminal', 'test_projects/test_module_a/test_feat_1.py'], plugins=[PutrPytestPlugin()])
56 |
--------------------------------------------------------------------------------
/draft/test_overlay.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 |
4 | from __future__ import print_function
5 | import urwid
6 | import sys
7 |
8 | class TestResultWindow(urwid.WidgetWrap):
9 | _sizing = frozenset(['box'])
10 | # _selectable = True
11 |
12 | def __init__(self, text, escape_method):
13 | self.escape_method = escape_method
14 | super(TestResultWindow, self).__init__(urwid.LineBox(urwid.Filler(urwid.Text(text))))
15 |
16 | def keypress(self, size, key):
17 |         raise urwid.ExitMainLoop()  # draft: exit on any key; the 'esc' handling below is unreachable
18 | if key == 'esc':
19 | self.escape_method()
20 |
21 | return None
22 |
23 | class TestResultWindow2(urwid.LineBox):
24 | _sizing = frozenset(['box'])
25 | # _selectable = True
26 |
27 | def __init__(self, text, escape_method):
28 | self.escape_method = escape_method
29 | super(TestResultWindow2, self).__init__(urwid.Filler(urwid.Text(text)))
30 |
31 | def keypress(self, size, key):
32 | print('here')
33 |         raise urwid.ExitMainLoop()  # draft: exit on any key; the 'esc' handling below is unreachable
34 | if key == 'esc':
35 | self.escape_method()
36 |
37 | return key
38 |
39 |
40 | class FixedLineBox(urwid.LineBox):
41 | _sizing = frozenset(['fixed'])
42 |
43 | def pack(self, size=None, focus=False):
44 | return (20, 2)
45 |
46 |
47 | w_main = urwid.Overlay(
48 | TestResultWindow2('The\ntest\nresult', None),
49 | urwid.SolidFill(),
50 | 'center', ('relative', 80), 'middle', ('relative', 80))
51 |
52 |
53 | def handle_input(key):
54 | if key in ('q', 'Q'):
55 | print('exiting on q')
56 | raise urwid.ExitMainLoop()
57 |     elif key == '1':
58 | main_loop.widget = urwid.LineBox(urwid.Filler(urwid.Text('The second top window', align='right')))
59 |
60 |
61 | if __name__ == '__main__':
62 | main_loop = urwid.MainLoop(w_main, palette=[('reversed', 'standout', '')],
63 | unhandled_input=handle_input)
64 | main_loop.run()
65 |
--------------------------------------------------------------------------------
/draft/test_stdout_capture.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 |
4 | from __future__ import print_function
5 | from future import standard_library
6 | standard_library.install_aliases()
7 | import sys
8 | import unittest
9 | from io import StringIO
10 |
11 |
12 | if __name__ == '__main__':
13 | _orig_stdout = sys.stdout
14 | _orig_stderr = sys.stderr
15 | sys.stdout = StringIO()
16 | sys.stderr = StringIO()
17 |
18 | loader = unittest.TestLoader()
19 | top_suite = loader.loadTestsFromName('test_module_b.test_feat_3')
20 | result = unittest.TextTestRunner(stream=sys.stdout, verbosity=2).run(top_suite)
21 |
22 | test_output = sys.stdout.getvalue()
23 | test_output_err = sys.stderr.getvalue()
24 |
25 | sys.stdout.close()
26 | sys.stderr.close()
27 |
28 | sys.stdout = _orig_stdout
29 | sys.stderr = _orig_stderr
30 |
31 | print('And here is the output')
32 | print(test_output)
33 | print('And here is the error output')
34 | print(test_output_err)
--------------------------------------------------------------------------------
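The draft above swaps `sys.stdout`/`sys.stderr` by hand. On Python 3 the same capture can be written with `contextlib.redirect_stdout`, which restores the original stream automatically; a minimal sketch with a throwaway test case (`Demo` is made up for illustration):

```python
import io
import unittest
from contextlib import redirect_stdout


class Demo(unittest.TestCase):
    def test_ok(self):
        # output produced inside the test, normally swallowed by the runner
        print('hello from the test')


buf = io.StringIO()
with redirect_stdout(buf):
    # send both the runner's report and the test's prints to the buffer
    suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
    unittest.TextTestRunner(stream=buf, verbosity=2).run(suite)

captured = buf.getvalue()
assert 'hello from the test' in captured
assert 'test_ok' in captured
```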
/make:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | import sys
3 | import shutil
4 | import doit
5 | import subprocess
6 | from git import Repo
7 |
8 | def get_release_tag():
9 | """ Ask user for release tag and return it """
10 | repo = Repo('.')
11 |     # find a tag pointing at the current HEAD, if any
12 | current_tag = next((tag for tag in repo.tags if tag.commit == repo.head.commit), None)
13 | if current_tag:
14 | use_current = input(f'Use current tag "{current_tag.name}" for release ? [y/n] : ')
15 | if use_current == 'y':
16 | return current_tag.name
17 |
18 | # print all/previous release tags
19 | output = subprocess.check_output(['git', 'tag', '--list', 'v*']).decode()
20 | print(f""" -- Existing release tags : \n{output}""")
21 | release_tag = input('Enter release tag : ')
22 | confirmed = input(f'Confirm release tag {release_tag} [y/n] : ')
23 |     if confirmed != 'y':
24 |         sys.exit(1)  # abort: build() does not handle a missing tag
25 |
26 |     return release_tag
27 |
28 |
29 | def run_setup_build():
30 | subprocess.run(["python", "setup.py", "sdist", "bdist_wheel"])
31 |
32 |
33 | def prepare():
34 | # check current branch
35 | repo = Repo('.')
36 | if repo.active_branch.name != 'master':
37 | print(f'You are in "{repo.active_branch.name}" branch. Switch to master for release.')
38 | sys.exit(1)
39 |
40 | def build():
41 | # get release tag
42 | release_tag = get_release_tag()
43 | changelog_tag = release_tag[1:]
44 | # check changelog
45 | cmd = subprocess.run(["grep", f'{changelog_tag}', "CHANGELOG.md"])
46 | if cmd.returncode != 0:
47 | print(f'Update CHANGELOG.md. Current release tag "{release_tag}" not found.')
48 | sys.exit(2)
49 |
50 | # tag release
51 | subprocess.run(["git", "tag", release_tag])
52 |
53 | # clean build dir
54 |     shutil.rmtree('dist', ignore_errors=True)
55 |
56 | # build
57 | run_setup_build()
58 | return True
59 |
60 |
61 | def publish(pypi_repo_name):
62 | # push tags
63 | subprocess.run(["git", "push", "--tags"])
64 |
65 | print(pypi_repo_name)
66 | # upload to pypi
67 | subprocess.run(["twine", "upload", "-r", pypi_repo_name, "dist/*"])
68 |
69 | return True
70 |
71 |
72 | def info(live):
73 | _type = 'live' if live else 'test'
74 | print(f'Doing a {_type} release.')
75 |
76 |
77 | def task_release(live=False):
78 | """ Build a package. Prepare files for upload. """
79 | pypi_repo_name = 'pypi' if live else 'pypitest'
80 |
81 | return {
82 | 'actions': [
83 | info,
84 | prepare,
85 | build,
86 | (publish, [pypi_repo_name]),
87 | ],
88 | 'params': [
89 | {
90 | 'name': 'live',
91 | 'long': 'live',
92 | 'help': 'make a real release, not a test',
93 | 'type': bool,
94 | 'default': False
95 | }
96 | ],
97 | 'verbosity': 2,
98 | }
99 |
100 | if __name__ == '__main__':
101 | doit.run(globals())
102 |
--------------------------------------------------------------------------------
/pytest.ini:
--------------------------------------------------------------------------------
1 | [pytest]
2 | testpaths = tests
3 | addopts = -v
4 |
--------------------------------------------------------------------------------
/pytui/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/martinsmid/pytest-ui/15fcbe04a6467cc6f7a373ef6156acc44f0ba5ec/pytui/__init__.py
--------------------------------------------------------------------------------
/pytui/common.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 | from builtins import object
3 |
4 | import re
5 | from logging_tools import get_logger
6 |
7 | logger = get_logger('ui')
8 |
9 |
10 | def get_fuzzy_regex(fuzzy_str):
11 |     # escape each character, then allow a lazy gap between them
12 |     return '.*?'.join(re.escape(char) for char in fuzzy_str)
13 |
14 |
15 |
16 | def get_filter_regex_str(filter_value):
17 | pieces = filter_value.split('#')
18 | return '.*'.join(
19 | (get_fuzzy_regex(value) if i % 2 == 0 else value
20 | for i, value in enumerate(pieces)
21 | )
22 | )
23 |
24 |
25 | def get_filter_regex(filter_value):
26 | if not filter_value:
27 | return None
28 |
29 | regexp_str = get_filter_regex_str(filter_value)
30 | logger.debug('filter_regex %s', regexp_str)
31 |     return re.compile(regexp_str, re.UNICODE | re.IGNORECASE)
32 |
33 |
34 | class PytestExitcodes(object):
35 | ALL_COLLECTED = 0
36 | ALL_COLLECTED_SOME_FAILED = 1
37 | INTERRUPTED_BY_USER = 2
38 | INTERNAL_ERROR = 3
39 | USAGE_ERROR = 4
40 | NO_TESTS_COLLECTED = 5
41 |
42 | # Own exitcodes
43 | CRASHED = 100
44 |
45 | text = {
46 | ALL_COLLECTED: "All tests were collected and passed successfully",
47 | ALL_COLLECTED_SOME_FAILED: "Tests were collected and run but some of the tests failed",
48 | INTERRUPTED_BY_USER: "Test execution was interrupted by the user",
49 | INTERNAL_ERROR: "Internal error happened while executing tests",
50 | USAGE_ERROR: "pytest command line usage error",
51 | NO_TESTS_COLLECTED: "No tests were collected",
52 | CRASHED: "Pytest crashed",
53 | }
54 |
--------------------------------------------------------------------------------
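The filter syntax implemented above can be exercised standalone. This sketch re-implements the two helpers (using `re.escape` for per-character escaping, a slight variation on the module's hand-rolled escaping) to show how a filter string maps to a regex:

```python
import re


def get_fuzzy_regex(fuzzy_str):
    # a lazy wildcard between every (escaped) character
    return '.*?'.join(re.escape(char) for char in fuzzy_str)


def get_filter_regex_str(filter_value):
    # pieces at odd indices sit between a '#' pair and are kept verbatim
    pieces = filter_value.split('#')
    return '.*'.join(
        get_fuzzy_regex(value) if i % 2 == 0 else value
        for i, value in enumerate(pieces)
    )


pattern = re.compile(get_filter_regex_str('abc#match#def'), re.IGNORECASE)

# fuzzy parts tolerate gaps; the '#'-delimited part must appear verbatim
assert pattern.search('a1b2c match d3e4f')
assert not pattern.search('a1b2c mtch d3e4f')
```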
/pytui/logging_tools.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import unicode_literals
3 | from builtins import object # noqa: F401
4 |
5 | import logging
6 | import logging.config
7 | from io import IOBase
8 |
9 | import settings
10 |
11 |
12 | DEBUG_A = 9
13 | DEBUG_B = 8
14 | DEBUG_C = 7
15 | logging.addLevelName(DEBUG_A, "DEBUG_A")
16 | logging.addLevelName(DEBUG_B, "DEBUG_B")
17 | logging.addLevelName(DEBUG_C, "DEBUG_C")
18 |
19 |
20 | def get_logger(name, *args):
21 | return logging.getLogger('.'.join(['pytui', name] + list(args)))
22 |
23 |
24 | class LogWriter(IOBase):
25 | def __init__(self, logger):
26 | self.logger = logger
27 | self._data = []
28 |
29 | def write(self, message):
30 | # self.logger.debug('STDOUT: (%s)\n', message.strip('\n'))
31 | self._data.append(message)
32 |
33 |
34 | def configure(filename, debug):
35 | logging_dict = {
36 | 'version': 1,
37 | 'disable_existing_loggers': True,
38 | 'formatters': {
39 | 'process': {
40 | 'format':
41 | '%(created)f %(msecs)25.19f %(name)-25s %(levelname)-7s %(message)s',
42 | }
43 | },
44 | 'handlers': {
45 | 'default': {
46 | 'class': 'logging.NullHandler'
47 | },
48 | 'logfile': {
49 | 'class': 'logging.FileHandler',
50 | 'formatter': 'process',
51 | 'filename': filename,
52 | 'mode': 'w+',
53 | }
54 | },
55 | 'loggers': {
56 | 'pytui': {
57 | 'handlers': ['logfile'],
58 | 'level': 'DEBUG' if debug else 'INFO',
59 | },
60 | 'pytui.runner.pipe': {
61 | 'level': 'INFO',
62 | },
63 | 'pytui.runner.stdout': {
64 | 'level': 'INFO',
65 | },
66 | 'pytui.runner.stderr': {
67 | 'level': 'INFO',
68 | },
69 | },
70 | 'root': {
71 | 'handlers': ['default'],
72 | 'level': 'CRITICAL',
73 | }
74 | }
75 |
76 | for module in settings.DEBUG_MODULES:
77 | logging_dict['loggers'].setdefault(module, {})
78 | logging_dict['loggers'][module]['level'] = 'DEBUG'
79 | # logging_dict['loggers'][module]['handlers'] = ['logfile']
80 | # logging_dict['loggers'][module]['propagate'] = False
81 |
82 | logging.config.dictConfig(logging_dict)
83 |
--------------------------------------------------------------------------------
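The per-stream log levels in the config above work because `get_logger` builds dotted child names under the `pytui` root, so `dictConfig` entries like `pytui.runner.pipe` apply hierarchically; a minimal sketch of the naming:

```python
import logging


def get_logger(name, *args):
    # children of the 'pytui' logger; dictConfig levels apply per dotted name
    return logging.getLogger('.'.join(['pytui', name] + list(args)))


pipe_logger = get_logger('runner', 'pipe')
assert pipe_logger.name == 'pytui.runner.pipe'
```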
/pytui/plugin.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import unicode_literals
3 | from builtins import str
4 | from builtins import filter
5 | from builtins import object
6 |
7 | import logging_tools
8 | from common import get_filter_regex
9 |
10 |
11 | logger = logging_tools.get_logger('runner.plugin')
12 |
13 |
14 | class PytestPlugin(object):
15 | def __init__(self, runner, filter_value=None, config=None, select_tests=None):
16 | self.runner = runner
17 | self.filter_regex = get_filter_regex(filter_value)
18 | self.select_tests = select_tests
19 | logger.debug('plugin init %s %s', runner, filter_value)
20 |
21 | def pytest_runtest_protocol(self, item, nextitem):
22 | logger.debug('pytest_runtest_protocol %s %s', item.nodeid, nextitem)
23 |
24 | def pytest_collectreport(self, report):
25 | logger.debug('pytest_collectreport %s', report)
26 |
27 | def pytest_report_teststatus(self, report):
28 | logger.debug('pytest_report_teststatus %s', report)
29 |
30 | def pytest_runtest_setup(self, item):
31 | logger.debug('pytest_runtest_setup %s', item.nodeid)
32 | self.runner.set_test_state(
33 | item.nodeid,
34 | 'setup'
35 | )
36 |
37 | def pytest_runtest_call(self, item):
38 | logger.debug('pytest_runtest_call %s', item.nodeid)
39 | self.runner.set_test_state(
40 | item.nodeid,
41 | 'call'
42 | )
43 |
44 | def pytest_runtest_teardown(self, item):
45 | logger.debug('pytest_runtest_teardown %s', item.nodeid)
46 | self.runner.set_test_state(
47 | item.nodeid,
48 | 'teardown'
49 | )
50 |
51 | def pytest_itemcollected(self, item):
52 | logger.debug('pytest_itemcollected %s', item.nodeid)
53 | self.runner.item_collected(item)
54 |
55 | def pytest_runtest_makereport(self, item, call):
56 | logger.debug('pytest_runtest_makereport %s %s %s',
57 | item.nodeid, call.when, str(call.excinfo))
58 | evalxfail = getattr(item, '_evalxfail', None)
59 | wasxfail = evalxfail and evalxfail.wasvalid() and evalxfail.istrue()
60 | if evalxfail:
61 | xfail_strict = evalxfail.get('strict', False)
62 | logger.debug('wasxfail: %s wasvalid: %s, istrue: %s, strict: %s',
63 | wasxfail, evalxfail.wasvalid(), evalxfail.istrue(), xfail_strict)
64 |
65 | if call.excinfo:
66 | logger.debug('excinfo: %s reason: %s',
67 | call.excinfo, getattr(call.excinfo.value, 'msg', '-'))
68 | self.runner.set_exception_info(item.nodeid, call.excinfo, call.when, wasxfail, None)
69 | elif wasxfail and call.when == 'call':
70 | self.runner.set_exception_info(item.nodeid, None, call.when, wasxfail, xfail_strict)
71 |
72 | def pytest_runtest_logreport(self, report):
73 | logger.debug('pytest_runtest_logreport %s', report)
74 | self.runner.set_test_result(
75 | report.nodeid,
76 | report
77 | )
78 |
79 | def pytest_collection_modifyitems(self, session, config, items):
80 | logger.debug('pytest_collection_modifyitems %s %s %s', session, config, items)
81 | logger.debug('pytest_collection_modifyitems select_tests %s', self.select_tests)
82 |
83 | def is_filtered(item):
84 | """Return True if item meets the filtering conditions.
85 | Filtering conditions are determined by the filter_regex and select_tests members.
86 | """
87 | test_id = self.runner.get_test_id(item)
88 | return (
89 | (
90 | self.filter_regex is None or self.filter_regex.findall(test_id)
91 | ) and (
92 | self.select_tests is None or test_id in self.select_tests
93 | )
94 | )
95 |
96 | if self.filter_regex or self.select_tests:
97 | items[:] = list(filter(is_filtered, items))
98 |
99 | logger.debug('pytest_collection_modifyitems filtered %s', [i.nodeid for i in items])
100 |
101 | def pytest_exception_interact(self, node, call, report):
102 | logger.debug('pytest_exception_interact %s %s %s', node.nodeid, call, report)
103 | self.runner.set_exception_info(node.nodeid, call.excinfo, call.when, False, None)
104 |
--------------------------------------------------------------------------------
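The collection filter in `pytest_collection_modifyitems` keeps an item only when it passes both the fuzzy regex and the explicit selection. The predicate can be sketched standalone (the test ids here are made up for illustration):

```python
import re


def is_filtered(test_id, filter_regex=None, select_tests=None):
    # keep the item when it passes both the regex and the explicit selection;
    # either condition is skipped when its parameter is None
    return (
        (filter_regex is None or bool(filter_regex.findall(test_id)))
        and (select_tests is None or test_id in select_tests)
    )


items = ['tests/test_a.py::test_one', 'tests/test_a.py::test_two']
regex = re.compile('one')
kept = [i for i in items if is_filtered(i, filter_regex=regex)]
assert kept == ['tests/test_a.py::test_one']
```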
/pytui/runner.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 |
4 | from __future__ import absolute_import
5 | from __future__ import unicode_literals
6 | from builtins import bytes
7 | from builtins import str
8 | from future import standard_library
9 | standard_library.install_aliases()
10 | from builtins import range
11 | from builtins import object
12 |
13 | import os
14 | import sys
15 | import json
16 | import traceback
17 | from collections import OrderedDict
18 |
19 |
20 | from tblib import Traceback
21 | import pytest
22 | from _pytest.runner import Skipped
23 |
24 | import logging_tools
25 | from logging_tools import get_logger, LogWriter
26 | from plugin import PytestPlugin
27 | from common import PytestExitcodes
28 |
29 | log_name = 'runner'
30 | logger = get_logger(log_name)
31 | pipe_logger = get_logger(log_name, 'pipe')
32 | stdout_logger = get_logger(log_name, 'stdout')
33 | stdout_logger_writer = LogWriter(stdout_logger)
34 | stderr_logger = get_logger(log_name, 'stderr')
35 | stderr_logger_writer = LogWriter(stderr_logger)
36 | PIPE_LIMIT = 4096
37 |
38 |
39 | def get_chunks(string):
40 | for offset in range(0, len(string), PIPE_LIMIT):
41 | yield string[offset:offset + PIPE_LIMIT]
42 |
43 |
44 | class Runner(object):
45 | def __init__(self, write_pipe=None, pipe_size=None, pipe_semaphore=None):
46 | self.tests = OrderedDict()
47 | logger.debug('%s Init', self.__class__.__name__)
48 | self.write_pipe = os.fdopen(write_pipe, 'wb', 0)
49 | self.pipe_size = pipe_size
50 | self.pipe_semaphore = pipe_semaphore
51 |
52 | def pipe_send(self, method, **kwargs):
53 | data = bytes(b'%s\n' % json.dumps({
54 | 'method': method,
55 | 'params': kwargs
56 | }).encode('utf-8'))
57 |
58 | data_size = len(data)
59 | pipe_logger.debug('pipe write, data size: %s, pipe size: %s',
60 | data_size, self.pipe_size.value)
61 | pipe_logger.debug('data: %s', data)
62 |
63 | for chunk in get_chunks(data):
64 | self.pipe_send_chunk(chunk)
65 |
66 | def pipe_send_chunk(self, chunk):
67 | chunk_size = len(chunk)
68 | # wait for pipe to empty
69 | # pipe_logger.debug('pipe_send_chunk')
70 | while True:
71 | # pipe_logger.debug('pipe check cycle')
72 | with self.pipe_size.get_lock():
73 | pipe_writable = self.pipe_size.value + chunk_size <= PIPE_LIMIT
74 | if pipe_writable:
75 | pipe_logger.debug('pipe writable')
76 | break
77 |
78 | pipe_logger.debug('no space in pipe: %d', self.pipe_size.value)
79 | pipe_logger.debug(' waiting for reader')
80 | self.pipe_semaphore.clear()
81 | self.pipe_semaphore.wait()
82 | pipe_logger.debug(' reader finished')
83 |
84 | with self.pipe_size.get_lock():
85 | self.pipe_size.value += chunk_size
86 | self.write_pipe.write(chunk)
87 | pipe_logger.debug('writing to pipe: %s', chunk)
88 |
89 | def set_test_result(self, test_id, report):
90 | output = \
91 | getattr(report, 'capstdout', '') + \
92 | getattr(report, 'capstderr', '')
93 |
94 | self.pipe_send(
95 | 'set_test_result',
96 | test_id=test_id,
97 | output=output,
98 | result_state=self.result_state(report),
99 | when=report.when,
100 | outcome=report.outcome
101 | )
102 |
103 | def set_test_state(self, test_id, state):
104 | self.pipe_send(
105 | 'set_test_state',
106 | test_id=test_id,
107 | state=state
108 | )
109 |
110 | def set_exception_info(self, test_id, excinfo, when, wasxfail, xfail_strict):
111 | if excinfo:
112 | logger.debug('exc info repr %s', excinfo._getreprcrash())
113 | elif wasxfail:
114 | if when == 'call':
115 | self.pipe_send(
116 | 'set_test_result',
117 | test_id=test_id,
118 | output='',
119 | result_state='failed' if xfail_strict else 'xpass',
120 | when=when,
121 | outcome='passed',
122 | last_failed_exempt=xfail_strict,
123 | )
124 | if xfail_strict:
125 | logger.debug('LF EXEMPT %s', test_id)
126 | return
127 | if wasxfail:
128 | result = 'xfail'
129 | extracted_traceback = Traceback(excinfo.tb).to_dict()
130 | elif excinfo.type is Skipped:
131 | result = 'skipped'
132 | extracted_traceback = None
133 | else:
134 | result = 'failed'
135 | extracted_traceback = Traceback(excinfo.tb).to_dict()
136 |
137 | self.pipe_send(
138 | 'set_exception_info',
139 | test_id=test_id,
140 | exc_type=repr(excinfo.type),
141 | exc_value=traceback.format_exception_only(excinfo.type, excinfo.value)[-1],
142 | extracted_traceback=extracted_traceback,
143 | result_state=result,
144 | when=when
145 | )
146 |
147 | def set_pytest_error(self, exitcode, description=None):
148 | self.pipe_send(
149 | 'set_pytest_error',
150 | exitcode=exitcode,
151 | description=description
152 | )
153 |
154 | def get_test_id(self, test):
155 | raise NotImplementedError()
156 |
157 |
158 | class PytestRunner(Runner):
159 | _test_fail_states = ['failed', 'error', None, '']
160 |
161 | def get_test_id(self, test):
162 | return test.nodeid # .replace('/', '.')
163 |
164 | @classmethod
165 | def process_init_tests(cls, write_pipe, pipe_size, pipe_semaphore, debug, pytest_args):
166 | """ Class method as separate process entrypoint. """
167 | logging_tools.configure('pytui-runner.log', debug)
168 |
169 | sys.stdout = stdout_logger_writer
170 | sys.stderr = stderr_logger_writer
171 |
172 | runner = cls(write_pipe=write_pipe, pipe_size=pipe_size, pipe_semaphore=pipe_semaphore)
173 | exitcode, description = runner.init_tests(pytest_args)
174 |
175 | if exitcode != PytestExitcodes.ALL_COLLECTED:
176 | logger.warning('pytest failed with exitcode %d', exitcode)
177 | runner.set_pytest_error(exitcode, description)
178 |
179 | logger.info('Init finished')
180 | runner.pipe_send('init_finished')
181 | return exitcode
182 |
183 | def init_tests(self, pytest_args):
184 | logger.debug('Running pytest --collect-only')
185 |
186 | args = [
187 | '-vv',
188 | '--collect-only',
189 | ] + pytest_args
190 |
191 | try:
192 | exitcode = pytest.main(args,
193 | plugins=[PytestPlugin(runner=self)])
194 |         except Exception:
195 |             return PytestExitcodes.CRASHED, traceback.format_exc()
196 |
197 | return exitcode, None
198 |
199 | @classmethod
200 | def process_run_tests(cls, failed_only, filtered, write_pipe,
201 | pipe_size, pipe_semaphore, filter_value, debug,
202 | pytest_args, select_tests):
203 | """ Class method as a separate process entrypoint """
204 | logging_tools.configure('pytui-runner.log', debug)
205 | logger = get_logger(log_name)
206 | logger.info(
207 | 'Test run started (failed_only: %s, filtered: %s, pytest args: %s, select_tests: %s)',
208 | failed_only, filtered, ' '.join(pytest_args), select_tests
209 | )
210 |
211 | sys.stdout = stdout_logger_writer
212 | sys.stderr = stderr_logger_writer
213 |
214 | runner = cls(write_pipe=write_pipe, pipe_size=pipe_size,
215 | pipe_semaphore=pipe_semaphore)
216 | try:
217 | exitcode, description = runner.run_tests(failed_only,
218 | filter_value,
219 | pytest_args,
220 | select_tests)
221 | except Exception as exc:
222 | exitcode = PytestExitcodes.CRASHED
223 | description = str(exc)
224 | logger.exception('Failed to run tests')
225 |
226 | if exitcode in (PytestExitcodes.INTERNAL_ERROR,
227 | PytestExitcodes.USAGE_ERROR,
228 | PytestExitcodes.NO_TESTS_COLLECTED,
229 | PytestExitcodes.CRASHED):
230 | logger.warning('pytest failed with exitcode %d', exitcode)
231 | runner.set_pytest_error(exitcode, description)
232 |
233 | logger.info('Test run finished')
234 | runner.pipe_send('run_finished')
235 |
236 | return exitcode
237 |
238 | def item_collected(self, item):
239 | # self.tests[self.get_test_id(item)] = item
240 | self.pipe_send('item_collected', item_id=self.get_test_id(item))
241 |
242 | def run_tests(self, failed_only, filter_value, pytest_args, select_tests=None):
243 | args = [
244 | '-vv',
245 | ]
246 | if failed_only:
247 | args.append('--lf')
248 | args += pytest_args
249 |
250 | try:
251 | exitcode = pytest.main(
252 | args,
253 | plugins=[
254 | PytestPlugin(
255 | runner=self,
256 | filter_value=filter_value,
257 | select_tests=select_tests
258 | )
259 | ]
260 | )
261 |         except Exception:
262 |             return PytestExitcodes.CRASHED, traceback.format_exc()
263 |
264 | return exitcode, None
265 |
266 | def result_state(self, report):
267 | if not report:
268 | return ''
269 | elif report.outcome == 'passed':
270 | return 'ok'
271 | elif report.outcome == 'failed':
272 | return 'failed'
273 | elif report.outcome == 'skipped':
274 | return 'skipped'
275 |
276 | logger.warning('Unknown report outcome %s', report.outcome)
277 | return 'N/A'
278 |
--------------------------------------------------------------------------------
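The pipe protocol above sends newline-delimited JSON messages in chunks of at most `PIPE_LIMIT` bytes. The framing and chunking are easy to verify in isolation (the locking/semaphore handshake is left out of this sketch):

```python
import json

PIPE_LIMIT = 4096


def get_chunks(string):
    # split a message into pipe-sized pieces
    for offset in range(0, len(string), PIPE_LIMIT):
        yield string[offset:offset + PIPE_LIMIT]


# one protocol message: a JSON object terminated by a newline
data = b'%s\n' % json.dumps({'method': 'set_test_state',
                             'params': {'test_id': 't1', 'state': 'call'}}).encode('utf-8')

chunks = list(get_chunks(data * 3000))
assert all(len(chunk) <= PIPE_LIMIT for chunk in chunks)
assert b''.join(chunks) == data * 3000  # reassembles losslessly
```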
/pytui/settings.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 |
3 | DEBUG_MODULES = [
4 | # 'pytui',
5 | ]
6 |
7 | VERSION = '0.5'
8 |
--------------------------------------------------------------------------------
/pytui/ui.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # encoding: utf-8
3 |
4 | from __future__ import absolute_import
5 | from __future__ import unicode_literals
6 | from future import standard_library
7 | standard_library.install_aliases()
8 | from builtins import object
9 |
10 | import warnings
11 | warnings.filterwarnings("ignore")
12 | import os
13 | import sys
14 | import json
15 | import urwid
16 | import click
17 | import traceback
18 | import multiprocessing
19 | from collections import OrderedDict
20 |
21 | from tblib import Traceback
22 |
23 | sys.path.insert(0, os.path.dirname(__file__))
24 |
25 | import logging_tools
26 | from logging_tools import get_logger, DEBUG_B
27 | from common import get_filter_regex, PytestExitcodes
28 | from runner import PytestRunner
29 |
30 | logger = get_logger('ui')
31 |
32 |
33 | class TestStatus(object):
34 | XFAIL = 'xfail'
35 | XPASS = 'xpass'
36 | FAILED = 'failed'
37 | ERROR = 'error'
38 | SKIPPED = 'skipped'
39 | OK = 'ok'
40 | NOOP = '..'
41 |
42 |
43 | class TestLine(urwid.Widget):
44 | _sizing = frozenset(['flow'])
45 | _selectable = True
46 |
47 | signals = ["click"]
48 |
49 | def __init__(self, test_data, *args, **kwargs):
50 | self.test_data = test_data
51 | super(TestLine, self).__init__(*args, **kwargs)
52 |
53 | def rows(self, size, focus=False):
54 | return 1
55 |
56 | def render(self, size, focus=False):
57 | result_state_str = self.test_data.get('result_state', '..')
58 | (maxcol,) = size
59 | title_width = maxcol - 11
60 | main_attr = (self.test_data.get('runstate'), title_width)
61 | state_attr = (result_state_str, 10)
62 | return urwid.TextCanvas(
63 | [('{} {:^10}'.format(
64 | self.test_data['id'][:title_width].ljust(title_width),
65 | result_state_str[:10].upper()
66 | )).encode('utf-8')],
67 | maxcol=maxcol,
68 | attr=[[main_attr, (None, 1), state_attr]]
69 | )
70 |
71 | def keypress(self, size, key):
72 | if key == 'enter':
73 | self._emit('click')
74 |
75 | return key
76 |
77 |
78 | class StatusLine(urwid.Widget):
79 | _sizing = frozenset(['flow'])
80 |
81 | def __init__(self, stats_callback, *args, **kwargs):
82 | super(StatusLine, self).__init__(*args, **kwargs)
83 | self.stats_callback = stats_callback
84 |
85 | def rows(self, size, focus=False):
86 | return 1
87 |
88 | def render(self, size, focus=False):
89 | (maxcol,) = size
90 | stats = self.stats_callback()
91 | return urwid.TextCanvas(
92 | ['Total: {} Filtered: {} Failed: {}'
93 | .format(stats['total'], stats['filtered'], stats['failed'])
94 | .encode('utf-8')],
95 | maxcol=maxcol
96 | )
97 |
98 |
99 | class TestResultWindow(urwid.LineBox):
100 | _sizing = frozenset(['box'])
101 |
102 | def __init__(self, test_id, text, escape_method):
103 | self.escape_method = escape_method
104 |
105 | lines = text.split('\n')
106 | list_items = [
107 | urwid.AttrMap(urwid.Text(line), None, focus_map='reversed') for line in lines
108 | ]
109 |
110 | super(TestResultWindow, self).__init__(
111 | urwid.ListBox(
112 | urwid.SimpleFocusListWalker(list_items)
113 | ),
114 | title=test_id
115 | )
116 |
117 | def keypress(self, size, key):
118 | if key == 'q':
119 | self.escape_method()
120 |
121 | self._original_widget.keypress(size, key)
122 |
123 | return None
124 |
125 | def selectable(self):
126 | return True
127 |
128 | def set_focus(self, item):
129 | self._original_widget.set_focus(item)
130 |
131 |
132 | class ErrorPopupWindow(TestResultWindow):
133 | pass
134 |
135 |
136 | class Store(object):
137 | def __init__(self, ui):
138 | self.test_data = OrderedDict()
139 | self.ui = ui
140 | self.filter_regex = None
141 | self.filter_value = None
142 | self._show_failed_only = False
143 | self._show_collected = True
144 |
145 | @property
146 | def current_test_list(self):
147 | if not self.filter_regex and not self._show_failed_only and self._show_collected:
148 | return self.test_data
149 |
150 | return self._get_tests(
151 | self._show_failed_only,
152 | bool(self.filter_regex),
153 | collected=self._show_collected
154 | )
155 |
156 | def get_test_stats(self):
157 | return {
158 | 'total': len(self.test_data),
159 | 'filtered': len(self.current_test_list),
160 | 'failed': self.get_failed_test_count()
161 | }
162 |
163 | def item_collected(self, item_id):
164 | if item_id in self.test_data:
165 | logger.debug('Ignoring collect for %s', item_id)
166 | return
167 |
168 | self.test_data[item_id] = {
169 | 'id': item_id
170 | }
171 | self.ui.init_test_listbox()
172 |
173 | def get_test_position(self, test_id):
174 | return self.test_data[test_id]['position']
175 |
176 | def get_failed_test_count(self):
177 | return len(
178 | [
179 | test_id
180 | for test_id, test in list(self.current_test_list.items())
181 | if self.is_test_failed(test)
182 | ]
183 | )
184 |
185 | def _get_tests(
186 | self,
187 | failed_only=True,
188 | filtered=True,
189 | include_lf_exempt=True,
190 | collected=True,
191 | select_tests=None
192 | ):
193 | """
194 |         Return an OrderedDict of tests matching the given argument filters.
195 | """
196 | logger.info('_get_tests failed_only: %s filtered: %s include_lf_exempt %s collected %s',
197 | failed_only, filtered, include_lf_exempt, collected)
198 | return OrderedDict([
199 | (test_id, test)
200 | for test_id, test in list(self.test_data.items())
201 | if (
202 | not failed_only or self.is_test_failed(test)
203 | ) and (
204 | not filtered or self.is_test_filtered(test_id)
205 | ) and (
206 | not test.get('last_failed_exempt') or include_lf_exempt
207 | ) and (
208 | collected or not test.get('result_state', '') == ''
209 | ) and (
210 | select_tests is None or test_id in select_tests
211 | )
212 | ])
213 |
214 | def set_test_result(
215 | self,
216 | test_id,
217 | result_state,
218 | output,
219 | when,
220 | outcome,
221 | exc_type=None,
222 | exc_value=None,
223 | extracted_traceback=None,
224 | last_failed_exempt=None
225 | ):
226 | """
227 | Set test result in internal dictionary. Updates UI.
228 |
229 | Args:
230 |             test_id: A unique string test identifier.
231 | """
232 | update_listbox = False
233 |
234 | if test_id not in self.test_data:
235 | self.test_data[test_id] = {
236 | 'id': test_id
237 | }
238 | update_listbox = True
239 |
240 | if extracted_traceback:
241 | py_traceback = Traceback.from_dict(extracted_traceback).as_traceback()
242 | extracted_traceback = traceback.extract_tb(py_traceback)
243 | output += ''.join(
244 | traceback.format_list(extracted_traceback) + [exc_value]
245 | )
246 |
247 | test_data = self.test_data[test_id]
248 | test_data['exc_type'] = exc_type
249 | test_data['exc_value'] = exc_value
250 | test_data['exc_tb'] = extracted_traceback
251 | if when == 'call' and last_failed_exempt is not None:
252 | test_data['last_failed_exempt'] = last_failed_exempt
253 |
254 | # Ignore success, except for the 'call' step
255 | # ignore successive failure, take only the first
256 | if (
257 | (outcome != 'passed' or when == 'call') and not test_data.get('result_state')
258 | ):
259 | test_data['result_state'] = result_state
260 | test_data['output'] = output
261 | if update_listbox:
262 | self.ui.init_test_listbox()
263 | else:
264 | self.ui.update_test_result(test_data)
265 |
266 | if when == 'teardown':
267 | test_data['runstate'] = None
268 | self.ui.update_test_line(test_data)
269 |
270 | def set_test_state(self, test_id, state):
271 | test_data = self.test_data[test_id]
272 | test_data['runstate'] = state
273 |
274 | self.ui.update_test_line(test_data)
275 | self.ui.set_listbox_focus(test_data)
276 |
277 | def set_exception_info(
278 | self,
279 | test_id,
280 | exc_type,
281 | exc_value,
282 | extracted_traceback,
283 | result_state,
284 | when
285 | ):
286 | self.set_test_result(
287 | test_id, result_state, exc_value, when, result_state,
288 | exc_type, exc_value, extracted_traceback
289 | )
290 |
291 | def set_filter(self, filter_value):
292 | self.filter_value = filter_value
293 | self.filter_regex = get_filter_regex(filter_value)
294 |
295 | def invalidate_test_results(self, tests):
296 | for test_id, test in list(tests.items()):
297 | self.clear_test_result(test_id)
298 |
299 | def clear_test_result(self, test_id):
300 | test_data = self.test_data[test_id]
301 | test_data.update({
302 | 'result': None,
303 | 'output': '',
304 | 'result_state': ''
305 | })
306 |         if test_data.get('widget'):
307 |             test_data['widget']._invalidate()
308 |
309 | def is_test_failed(self, test_data):
310 | failed = (
311 | not test_data or
312 | test_data.get('result_state') in self.ui.runner_class._test_fail_states
313 | )
314 | return failed
315 |
316 | def is_test_filtered(self, test_id):
317 | return not self.filter_regex or self.filter_regex.findall(test_id)
318 |
319 | def get_failed_sibling(self, position, direction):
320 | """
321 |         `position` is the index in the UI listbox and must equal
322 |         the test's index in the list of filtered tests.
323 | """
324 | tests = self._get_tests(self.show_failed_only, True)
325 | keys = list(tests.keys())
326 | next_pos = position
327 |
328 | while True:
329 | next_pos = next_pos + direction
330 | if not (next_pos >= 0 and next_pos < len(keys)):
331 | return None
332 |
333 | if self.is_test_failed(tests[keys[next_pos]]):
334 | return keys[next_pos]
335 |
336 |     def get_next_failed(self, position):
337 |         return self.get_failed_sibling(position, 1)
338 |
339 |     def get_previous_failed(self, position):
340 |         return self.get_failed_sibling(position, -1)
341 |
342 | @property
343 | def show_failed_only(self):
344 | return self._show_failed_only
345 |
346 | @show_failed_only.setter
347 | def show_failed_only(self, value):
348 | self._show_failed_only = value
349 | self.ui.init_test_listbox()
350 |
351 | @property
352 | def show_collected(self):
353 | return self._show_collected
354 |
355 | @show_collected.setter
356 | def show_collected(self, value):
357 | self._show_collected = value
358 | self.ui.init_test_listbox()
359 |
360 | def set_pytest_error(self, exitcode, description=None):
361 | self.show_collected = False
362 | output = PytestExitcodes.text.get(exitcode, 'Unknown error')
363 | if description is not None:
364 | output += (
365 | '\n' +
366 | '---------- description ----------' +
367 | '\n' +
368 | description
369 | )
370 | self.ui.show_startup_error(
371 | 'Pytest init/collect failed',
372 | '{1:s} (pytest exitcode {0:d})'.format(exitcode, output),
373 | )
374 |
375 |
376 | class TestRunnerUI(object):
377 | palette = [
378 | ('reversed', '', 'dark green'), # noqa: E241
379 | ('edit', '', 'black', '', '', '#008'), # noqa: E241
380 | ('edit_focus', '', 'dark gray', '', '', '#00b'), # noqa: E241
381 | ('statusline', 'white', 'dark blue', '', '', ''), # noqa: E241
382 |
383 | # result states
384 | (TestStatus.XFAIL, 'brown', '', '', '', '#b00'), # noqa: E241
385 | (TestStatus.XPASS, 'brown', '', '', '', '#b00'), # noqa: E241
386 | (TestStatus.FAILED, 'light red', '', '', '', '#b00'), # noqa: E241
387 | (TestStatus.ERROR, 'brown', '', '', '#f88', '#b00'), # noqa: E241
388 | (TestStatus.SKIPPED, 'brown', '', '', '#f88', '#b00'), # noqa: E241
389 | (TestStatus.OK, 'dark green', '', '', '', ''), # noqa: E241
390 | (TestStatus.NOOP, 'dark gray', '', '', '', ''), # noqa: E241
391 |
392 |
393 | # run states
394 | ('setup', 'white', 'dark blue', '', '', ''), # noqa: E241
395 | ('call', 'white', 'dark blue', '', '', ''), # noqa: E241
396 | ('teardown', 'white', 'dark blue', '', '', ''), # noqa: E241
397 | ]
398 |
399 | def __init__(self, runner_class, debug, pytest_args):
400 | logger.info('Runner UI init')
401 | urwid.set_encoding("UTF-8")
402 |
403 | self.runner_class = runner_class
404 | self.debug = debug
405 | self.store = Store(self)
406 | self.pytest_args = pytest_args
407 |
408 | self.main_loop = None
409 | self.w_main = None
410 | self._first_failed_focused = False
411 |
412 | # process comm
413 | self.child_pipe = None
414 | self.pipe_size = multiprocessing.Value('i', 0)
415 | self.pipe_semaphore = multiprocessing.Event()
416 | self.receive_buffer = b''
417 | self.runner_process = None
418 |
419 | self.init_main_screen()
420 |
421 | def init_main_screen(self):
422 | self.w_filter_edit = urwid.Edit('Filter ')
423 | aw_filter_edit = urwid.AttrMap(self.w_filter_edit, 'edit', 'edit_focus')
424 | self.w_status_line = urwid.AttrMap(StatusLine(self.store.get_test_stats), 'statusline', '')
425 | urwid.connect_signal(self.w_filter_edit, 'change', self.on_filter_change)
426 | self.init_test_listbox()
427 | self.w_main = urwid.Padding(
428 | urwid.Pile([
429 | ('pack', urwid.Text(u'Python Urwid Test Runner', align='center')),
430 | ('pack', urwid.Divider()),
431 | ('pack', aw_filter_edit),
432 | ('pack', urwid.Divider()),
433 | self.w_test_listbox,
434 | ('pack', urwid.Divider()),
435 | ('pack', self.w_status_line),
436 | ]),
437 | left=2, right=2
438 | )
439 |
440 | def init_test_listbox(self):
441 | self.w_test_listbox = self.test_listbox(list(self.store.current_test_list.keys()))
442 | if self.w_main:
443 | self.w_status_line.original_widget._invalidate()
444 | self.w_main.original_widget.widget_list[4] = self.w_test_listbox
445 | self.w_main.original_widget._invalidate()
446 |
447 | def init_test_data(self):
448 | if self.runner_process and self.runner_process.is_alive():
449 | logger.info('Tests are already running')
450 | return
451 |
452 | self.runner_process = multiprocessing.Process(
453 | target=self.runner_class.process_init_tests,
454 | name='pytui-runner',
455 | args=(self.child_pipe, self.pipe_size, self.pipe_semaphore, self.debug),
456 | kwargs={
457 | 'pytest_args': self.pytest_args
458 | }
459 | )
460 | self.runner_process.start()
461 |
462 | def on_filter_change(self, filter_widget, filter_value):
463 | self.store.set_filter(filter_value)
464 | self.init_test_listbox()
465 | # self.w_main.original_widget._invalidate()
466 | # self.w_status_line.original_widget._invalidate()
467 | # self.main_loop.widget._invalidate()
468 | # self.main_loop.draw_screen()
469 |
470 | def received_output(self, data):
471 | """
472 |         Parse data received from the runner process and dispatch each encoded action.
473 | """
474 | logger.log(DEBUG_B, 'new data on pipe, data size: %s, pipe_size: %s',
475 | len(data), self.pipe_size.value)
476 | self.receive_buffer += data
477 | for chunk in self.receive_buffer.split(b'\n'):
478 | if not chunk:
479 | continue
480 | try:
481 | payload = json.loads(chunk.decode('utf-8'))
482 | assert 'method' in payload
483 | assert 'params' in payload
484 | except Exception:
485 | logger.debug('Failed to parse runner input: "%s"', chunk)
486 | # release the write end if waiting for read
487 | logger.log(DEBUG_B, 'pipe_size decrease to correct value')
488 | with self.pipe_size.get_lock():
489 | self.pipe_size.value -= len(data)
490 | self.pipe_semaphore.set()
491 | logger.log(DEBUG_B, 'released semaphore')
492 | return
493 |
494 | # correct buffer
495 | self.receive_buffer = self.receive_buffer[len(chunk) + 1:]
496 | logger.debug('handling method %s', payload['method'])
497 | try:
498 | if payload['method'] == 'item_collected':
499 | self.store.item_collected(**payload['params'])
500 | elif payload['method'] == 'set_test_result':
501 | self.store.set_test_result(**payload['params'])
502 | elif payload['method'] == 'set_exception_info':
503 | self.store.set_exception_info(**payload['params'])
504 | elif payload['method'] == 'set_test_state':
505 | self.store.set_test_state(**payload['params'])
506 | elif payload['method'] == 'set_pytest_error':
507 | self.store.set_pytest_error(**payload['params'])
508 | elif payload['method'] in ['init_finished', 'run_finished']:
509 | self.main_loop.screen.clear()
510 |
511 |             except Exception:
512 | logger.exception('Error in handler "%s"', payload['method'])
513 |
514 | # self.w_main._invalidate()
515 | # release the write end if waiting for read
516 | logger.log(DEBUG_B, 'pipe_size decrease to correct value')
517 | with self.pipe_size.get_lock():
518 | self.pipe_size.value -= len(data)
519 | self.pipe_semaphore.set()
520 | logger.log(DEBUG_B, 'released semaphore')
521 |
522 | def run(self):
523 | self.main_loop = urwid.MainLoop(
524 | self.w_main,
525 | palette=self.palette,
526 | unhandled_input=self.unhandled_keypress
527 | )
528 | self.child_pipe = self.main_loop.watch_pipe(self.received_output)
529 |
530 | self.init_test_data()
531 | logger.debug('Running main urwid loop')
532 | self.main_loop.run()
533 |
534 | def popup(self, widget):
535 | self._popup_original = self.main_loop.widget
536 | self.main_loop.widget = urwid.Overlay(
537 | widget,
538 | self._popup_original,
539 | 'center', ('relative', 90), 'middle', ('relative', 85)
540 | )
541 |
542 | def run_tests(self, failed_only=True, filtered=None, select_tests=None):
543 |         """
544 |         failed_only: rerun only the previously failed tests
545 |         filtered: restrict the run to tests matching the current filter
546 |         select_tests: optional list of test ids to run exclusively
547 |         """
548 | if self.runner_process and self.runner_process.is_alive():
549 | logger.info('Tests are already running')
550 | return
551 |
552 | self.w_main.original_widget.focus_position = 4
553 |
554 | if filtered is None:
555 | filtered = self.store.filter_value
556 | self.store.show_collected = True
557 |
558 | logger.info('Running tests (failed_only: %r, filtered: %r)', failed_only, filtered)
559 | self._first_failed_focused = False
560 |
561 | tests = self.store._get_tests(
562 | failed_only,
563 | filtered,
564 | include_lf_exempt=False,
565 | select_tests=select_tests
566 | )
567 | self.store.invalidate_test_results(tests)
568 |
569 | self.runner_process = multiprocessing.Process(
570 | target=self.runner_class.process_run_tests,
571 | name='pytui-runner',
572 | args=(failed_only, filtered, self.child_pipe, self.pipe_size,
573 | self.pipe_semaphore, self.store.filter_value, self.debug),
574 | kwargs={
575 | 'pytest_args': self.pytest_args,
576 | 'select_tests': select_tests
577 | }
578 | )
579 | self.runner_process.start()
580 |
581 | # self.w_test_listbox._invalidate()
582 | # self.w_main._invalidate()
583 | # self.main_loop.draw_screen()
584 |
585 | def run_selected_tests(self, failed_only, filtered, select_tests=None):
586 | self.run_tests(failed_only, filtered, select_tests)
587 |
588 | def update_test_result(self, test_data):
589 | display_result_state = test_data.get('result_state', '')
590 | if display_result_state in ['failed', 'error'] and not self._first_failed_focused:
591 | try:
592 | self.w_test_listbox.set_focus(test_data.get('position', 0))
593 | self._first_failed_focused = True
594 | except IndexError:
595 | pass
596 |
597 | if test_data.get('widget'):
598 | test_data['widget']._invalidate()
599 | test_data['lw_widget']._invalidate()
600 | # self.w_test_listbox._invalidate()
601 | self.w_status_line.original_widget._invalidate()
602 | else:
603 | logger.warning('Test "%s" has no ui widget', test_data['id'])
604 |
605 | self.main_loop.draw_screen()
606 |
607 | def update_test_line(self, test_data):
608 | if test_data.get('widget'):
609 | test_data['widget']._invalidate()
610 | test_data['lw_widget']._invalidate()
611 | self.main_loop.draw_screen()
612 |
613 | def show_test_detail(self, widget, test_id):
614 | test_data = self.store.test_data[test_id]
615 | output = test_data.get('output', '')
616 | # if 'exc_info' in test_data:
617 | # output += '\n' + '-'*20 + '\n'
618 | # output += '\n'.join(traceback.format_tb(test_data['exc_info'].tb))
619 |
620 | result_window = TestResultWindow(
621 | test_id,
622 | output,
623 | self.popup_close)
624 | self.popup(result_window)
625 | result_window.set_focus(0)
626 |
627 | def show_startup_error(self, title, content):
628 | popup_widget = ErrorPopupWindow(
629 | title,
630 | content,
631 | self.popup_close
632 | )
633 |
634 | self.popup(popup_widget)
635 | popup_widget.set_focus(0)
636 |
637 | def popup_close(self):
638 | self.main_loop.widget = self._popup_original
639 |
640 | def get_list_item(self, test_id, position):
641 | test_data = self.store.test_data[test_id]
642 | test_data.update({
643 | 'widget': None,
644 | 'lw_widget': None,
645 | 'position': position,
646 | 'id': test_id,
647 | })
648 | test_line = TestLine(test_data)
649 | test_data['widget'] = test_line
650 | # logger.debug('widget set for %s: %s', test_id, test_line)
651 | urwid.connect_signal(test_line, 'click', self.show_test_detail, test_id)
652 | test_line_attr = urwid.AttrMap(test_line, None, focus_map='reversed')
653 | test_data['lw_widget'] = test_line_attr
654 | return test_line_attr
655 |
656 | def test_listbox(self, test_list):
657 | list_items = []
658 | for position, test_id in enumerate(test_list):
659 | test_line_attr = self.get_list_item(test_id, position)
660 | list_items.append(test_line_attr)
661 | return urwid.ListBox(urwid.SimpleFocusListWalker(list_items))
662 |
663 | def focus_failed_sibling(self, direction):
664 | next_id = self.store.get_failed_sibling(self.w_test_listbox.focus_position, direction)
665 | if next_id is not None:
666 | next_pos = self.store.get_test_position(next_id)
667 | self.w_test_listbox.set_focus(next_pos, 'above' if direction == 1 else 'below')
668 | self.w_test_listbox._invalidate()
669 |
670 | def set_listbox_focus(self, test_data):
671 | # set listbox focus if not already focused on first failed
672 | if not self._first_failed_focused:
673 | try:
674 | self.w_test_listbox.set_focus(test_data['position'], 'above')
675 | self.w_test_listbox._invalidate()
676 | except IndexError:
677 | pass
678 |
679 | def get_selected_testline(self):
680 | focus_widget, idx = self.w_test_listbox.get_focus()
681 | test_line = focus_widget.original_widget
682 | return test_line
683 |
684 | def quit(self):
685 | self.pipe_semaphore.set()
686 | if self.runner_process and self.runner_process.is_alive():
687 | self.runner_process.terminate()
688 | logger.log(DEBUG_B, 'releasing semaphore')
689 | raise urwid.ExitMainLoop()
690 |
691 | def unhandled_keypress(self, key):
692 | # quit program
693 | if key in ('q', 'Q'):
694 | self.quit()
695 |
696 | # focus search/filter bar
697 | elif key == '/':
698 | self.w_main.original_widget.set_focus(2)
699 |
700 | # start new search/filtering
701 | elif key == 'ctrl f':
702 | self.w_filter_edit.set_edit_text('')
703 | self.w_main.original_widget.set_focus(2)
704 |
705 | # run all tests
706 | elif key == 'R' or key == 'ctrl f5':
707 | self.run_tests(False)
708 |
709 | # rerun failed tests
710 | elif key == 'r' or key == 'f5':
711 | self.run_tests(True)
712 |
713 | # move cursor down
714 | elif key == 'meta down':
715 | self.focus_failed_sibling(1)
716 |
717 | # move cursor up
718 | elif key == 'meta up':
719 | self.focus_failed_sibling(-1)
720 |
721 | # run single test on cursor
722 | elif key == 's':
723 | test_line = self.get_selected_testline()
724 | test_id = test_line.test_data['id']
725 | self.run_selected_tests(
726 | failed_only=False,
727 | filtered=True,
728 | select_tests=[test_id]
729 | )
730 |
731 | # toggle to show only failed test
732 | elif key == 'f4':
733 | self.store.show_failed_only = not self.store.show_failed_only
734 |
735 |
736 | @click.command(context_settings={
737 | 'ignore_unknown_options': True,
738 | 'allow_extra_args': True,
739 | })
740 | @click.option('--debug/--no-debug', default=False, show_default=True, help='Enable debug logging')
741 | @click.pass_context
742 | def main(ctx, debug):
743 | logging_tools.configure('pytui-ui.log', debug)
744 | logger = get_logger('ui')
745 | logger.info('Configured logging')
746 |
747 | ui = TestRunnerUI(PytestRunner, debug, ctx.args)
748 | ui.run()
749 |
750 |
751 | if __name__ == '__main__':
752 | main()
753 |
--------------------------------------------------------------------------------
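Editor's note: in `Store.set_test_result` above, the traceback arrives serialized (rebuilt via tblib's `Traceback.from_dict(...).as_traceback()`) and is then rendered with the stdlib `traceback` module. A stdlib-only sketch of that formatting step, using a live exception instead of a tblib round-trip (`format_failure` is an illustrative helper, not part of pytui):

```python
import traceback

def format_failure(exc):
    # Mirror the formatting in Store.set_test_result: an extracted traceback
    # rendered with traceback.format_list, followed by the exception value.
    extracted = traceback.extract_tb(exc.__traceback__)
    return ''.join(traceback.format_list(extracted) + [str(exc)])

try:
    raise ValueError('boom')
except ValueError as exc_value:
    output = format_failure(exc_value)
```

Serializing the traceback (rather than the exception object) is what lets the runner process ship failure details to the UI process over a pipe; tblib exists precisely because raw traceback objects are not picklable.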
/setup.cfg:
--------------------------------------------------------------------------------
1 | [flake8]
2 | ignore = E402, W504
3 | max-line-length = 99
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup
2 |
3 | from pytui.settings import VERSION
4 |
5 |
6 | setup(
7 | name='pytest-ui',
8 | description='Text User Interface for running python tests',
9 | version=VERSION,
10 | license='MIT',
11 | platforms=['linux', 'osx', 'win32'],
12 | packages=['pytui'],
13 | url='https://github.com/martinsmid/pytest-ui',
14 | author_email='martin.smid@gmail.com',
15 | author='Martin Smid',
16 | entry_points={
17 | 'console_scripts': [
18 | 'pytui = pytui.ui:main',
19 | ]
20 | },
21 | install_requires=[
22 | 'future',
23 | 'pytest',
24 | 'tblib',
25 | 'urwid',
26 | 'click',
27 | ],
28 | tests_require=[
29 | 'mock'
30 | ],
31 | classifiers=[
32 | 'Development Status :: 4 - Beta',
33 | 'Intended Audience :: Developers',
34 | 'Operating System :: POSIX',
35 | 'Operating System :: Microsoft :: Windows',
36 | 'Operating System :: MacOS :: MacOS X',
37 | 'Topic :: Software Development :: Testing',
38 | 'Topic :: Utilities',
39 | 'Programming Language :: Python :: 2',
40 | 'Programming Language :: Python :: 3',
41 | ],
42 | )
43 |
--------------------------------------------------------------------------------
/test_projects/test_module_a/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/martinsmid/pytest-ui/15fcbe04a6467cc6f7a373ef6156acc44f0ba5ec/test_projects/test_module_a/__init__.py
--------------------------------------------------------------------------------
/test_projects/test_module_a/test_feat_1.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import unittest
3 |
4 | import logging
5 |
6 | logging.basicConfig()
7 | logger = logging.getLogger(__name__)
8 |
9 | class TestOutputCapturing(unittest.TestCase):
10 | def test_feat_1_case_1(self):
11 | print('hello')
12 | logger.debug('hello at the debug level')
13 |
14 | def test_feat_1_case_2(self):
15 | self.assertEqual(True, False)
16 |
17 | def test_feat_1_case_3(self):
18 | logger.error('hello at the error level')
19 |
20 | def test_feat_1_case_4(self):
21 | pass
22 |
--------------------------------------------------------------------------------
/test_projects/test_module_a/test_import_error_in_test.py:
--------------------------------------------------------------------------------
1 | import unittest
2 |
3 | class TestImportError(unittest.TestCase):
4 | def test_import_error_inside(self):
5 | import error
6 |
--------------------------------------------------------------------------------
/test_projects/test_module_a/test_pytest_options.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import unittest
3 |
4 | class TestPytestOptions(unittest.TestCase):
5 | @unittest.skip
6 | def test_unittest_skip(self):
7 | raise Exception('This shouldn\'t run')
8 |
9 | @pytest.mark.skip
10 | def test_pytest_skip(self):
11 | raise Exception('This shouldn\'t run')
12 |
13 | @pytest.mark.xfail(reason='This fails as expected (no strict)')
14 | def test_xfail_correct_easy(self):
15 | raise Exception('This is an expected failure')
16 |
17 | @pytest.mark.xfail(reason='''This should fail, but it doesn't, but nobody cares''')
18 | def test_xfail_wrong_easy(self):
19 | pass
20 |
21 | @pytest.mark.xfail(reason='This fails as expected (strict)', strict=True)
22 | def test_xfail_correct_strict(self):
23 | raise Exception('This is an expected failure')
24 |
25 | @pytest.mark.xfail(reason='''This should fail, but it doesn't. Pytest cares''', strict=True)
26 | def test_xfail_wrong_strict(self):
27 | pass
28 |
--------------------------------------------------------------------------------
/test_projects/test_module_b/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/martinsmid/pytest-ui/15fcbe04a6467cc6f7a373ef6156acc44f0ba5ec/test_projects/test_module_b/__init__.py
--------------------------------------------------------------------------------
/test_projects/test_module_b/test_feat_3.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | from __future__ import print_function
3 | import unittest
4 | import time
5 |
6 | class TestE(unittest.TestCase):
7 |     def test_many_lines(self):
8 |         for i in range(100):
9 |             print('Many lines', i)
10 |
11 |     def test_feat_1_case_1(self):
12 |         time.sleep(0.5)
13 |         self.fail('Artificial fail one')
14 |
15 |     def test_feat_1_case_2(self):
16 |         time.sleep(0.5)
17 |         self.fail('Artificial fail two')
18 |
19 |     def test_feat_1_case_3(self):
20 |         time.sleep(0.5)
21 |
22 |     def test_feat_1_case_4(self):
23 |         for i in range(10):
24 |             time.sleep(0.1)
25 |             print('Few lines %d' % i)
26 |
27 |     def test_long_traceback(self):
28 |         def recursive(n):
29 |             if n == 0:
30 |                 raise Exception(u'\u1155\u1166'.encode('utf-8'))
31 |             recursive(n-1)
32 |
33 |         recursive(100)
34 |
--------------------------------------------------------------------------------
/test_projects/test_module_b/test_feat_4.py:
--------------------------------------------------------------------------------
1 | import time
2 | import unittest
3 |
4 |
5 | class TestG(unittest.TestCase):
6 | def test_feat_1_case_1(self):
7 | pass
8 |
9 | def test_feat_1_case_2(self):
10 | pass
11 |
12 | def test_feat_1_case_3(self):
13 | pass
14 |
15 | def test_feat_1_case_4(self):
16 | pass
17 |
18 |
19 | class TestH(unittest.TestCase):
20 | def test_feat_1_case_1(self):
21 | time.sleep(0.1)
22 |
23 | def test_feat_1_case_2(self):
24 | time.sleep(0.1)
25 |
26 | def test_feat_1_case_3(self):
27 | time.sleep(0.1)
28 |
29 | def test_feat_1_case_4(self):
30 | time.sleep(0.1)
31 |
32 | def test_feat_1_case_5(self):
33 | time.sleep(0.1)
34 |
35 | def test_feat_1_case_6(self):
36 | time.sleep(0.1)
37 | raise Exception('Wrong')
38 |
39 | def test_feat_1_case_7(self):
40 | time.sleep(0.1)
41 | raise Exception('Wrong')
42 |
43 | def test_feat_1_case_8(self):
44 | time.sleep(0.1)
45 |
46 | def test_feat_1_case_9(self):
47 | time.sleep(0.1)
48 |
49 | def test_feat_1_case_10(self):
50 | time.sleep(0.1)
51 | raise Exception('Wrong')
52 |
53 | def test_feat_1_case_11(self):
54 | time.sleep(0.1)
55 |
56 | def test_feat_1_case_12(self):
57 | time.sleep(0.1)
58 |
--------------------------------------------------------------------------------
/test_projects/test_module_c/test_import_error_in_module.py:
--------------------------------------------------------------------------------
1 | import error
2 |
--------------------------------------------------------------------------------
/test_projects/test_module_c/test_syntax_error.py:
--------------------------------------------------------------------------------
1 | def x:
2 | pass
3 |
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/martinsmid/pytest-ui/15fcbe04a6467cc6f7a373ef6156acc44f0ba5ec/tests/__init__.py
--------------------------------------------------------------------------------
/tests/test_common.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | from __future__ import unicode_literals
3 |
4 | import pytest
5 | from unittest import TestCase
6 | try:
7 | from unittest import mock
8 | except ImportError:
9 | import mock
10 | from pytui.common import get_filter_regex_str
11 | from pytui.ui import Store
12 |
13 | @pytest.mark.parametrize(
14 | 'input,expected',
15 | [
16 | ('abc#efg', 'a.*?b.*?c.*efg'),
17 | ('abc#efg#ijkl', 'a.*?b.*?c.*efg.*i.*?j.*?k.*?l'),
18 | ('#efg#', '.*efg.*'),
19 | ]
20 | )
21 | def test_filter_regex_str(input, expected):
22 | regex = get_filter_regex_str(input)
23 | assert regex == expected
24 |
25 |
26 | def test_filter_match():
27 | store = Store(mock.Mock())
28 | store.is_test_failed = lambda x: True
29 | store.item_collected('test_1_abdefghij')
30 | store.item_collected('test_2_bheifhefe')
31 | store.item_collected('test_3_abefg')
32 | store.item_collected('test_4_axxbxxcefg')
33 | store.item_collected('test_5_axxbxxcxxefg')
34 | store.item_collected('test_6_axxbxxcxxexfg')
35 |
36 | store.set_filter('abc#efg')
37 | result = store._get_tests()
38 | assert dict(result) == {
39 | 'test_4_axxbxxcefg': {
40 | 'id': 'test_4_axxbxxcefg'
41 | },
42 | 'test_5_axxbxxcxxefg': {
43 | 'id': 'test_5_axxbxxcxxefg'
44 | },
45 | }
46 |
47 | store.set_filter('#xcx#')
48 | result = store._get_tests()
49 | assert dict(result) == {
50 | 'test_5_axxbxxcxxefg': {
51 | 'id': 'test_5_axxbxxcxxefg'
52 | },
53 | 'test_6_axxbxxcxxexfg': {
54 | 'id': 'test_6_axxbxxcxxexfg'
55 | }
56 | }
57 |
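The parametrized cases above pin down the fuzzy-filter grammar that `get_filter_regex_str` implements: `#` toggles between fuzzy segments (characters joined with the lazy wildcard `.*?`) and verbatim segments, and adjacent segments are joined with a greedy `.*`. A minimal sketch of that expansion, inferred purely from the test cases (a hypothetical reimplementation, not the actual `pytui.common` source):

```python
def fuzzy_filter_regex(filter_str):
    # Hypothetical reimplementation inferred from the parametrized
    # cases in test_common.py, not the pytui source.
    # Segments between '#' alternate: even-indexed segments are fuzzy
    # (each character separated by the lazy wildcard '.*?'), odd-indexed
    # segments are kept verbatim; segments are joined with a greedy '.*'.
    parts = filter_str.split('#')
    expanded = []
    for i, part in enumerate(parts):
        if i % 2 == 0:
            expanded.append('.*?'.join(part))  # fuzzy segment
        else:
            expanded.append(part)              # verbatim segment
    return '.*'.join(expanded)

print(fuzzy_filter_regex('abc#efg'))  # a.*?b.*?c.*efg
print(fuzzy_filter_regex('#efg#'))    # .*efg.*
```

Note how `'#efg#'` splits into two empty fuzzy segments around the verbatim `efg`, which is why the result is simply `.*efg.*`.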
--------------------------------------------------------------------------------
/tests/test_runner.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 | from __future__ import absolute_import
3 |
4 | try:
5 | from unittest import mock
6 | except ImportError:
7 | import mock
8 | import logging
9 | import tempfile
10 |
11 | from unittest import TestCase
12 |
13 | from pytui.runner import PytestRunner, Runner
14 |
15 |
16 | logging.basicConfig()
17 | logger = logging.getLogger(__name__)
18 | logger.setLevel('DEBUG')
19 |
20 |
21 | class PytestRunnerTests(TestCase):
22 | def setUp(self):
23 | self.pipe_mock = tempfile.TemporaryFile()
24 | self.pipe_size_mock = mock.Mock()
25 | self.pipe_semaphore_mock = mock.Mock()
26 |
27 | def test_skipping(self):
28 | runner = PytestRunner(
29 | self.pipe_mock.fileno(),
30 | self.pipe_size_mock,
31 | self.pipe_semaphore_mock
32 | )
33 | with mock.patch.object(PytestRunner, 'pipe_send') as pipe_send_mock:
34 | logger.debug('------ runner init ------')
35 | exitcode, _description = runner.init_tests(['test_projects/test_module_a/'])
36 | assert exitcode == 0
37 | # logger.debug(pipe_send_mock.call_args_list)
38 |
39 | logger.debug('------ runner run_tests ------')
40 | exitcode, _description = runner.run_tests(False, 'xfail', ['test_projects/test_module_a/'])
41 | assert exitcode == 1
42 | logger.debug(pipe_send_mock.call_args_list)
43 |
44 | @mock.patch.object(PytestRunner, 'init_tests', return_value=(1, None))
45 | @mock.patch.object(Runner, 'pipe_send')
46 | def test_pytest_exitcode(self, pipe_send_mock, init_tests_mock):
47 | """
48 |         Test whether set_pytest_error(exitcode=1) is sent from the runner to the UI through the pipe.
49 | """
50 | PytestRunner.process_init_tests(
51 | self.pipe_mock.fileno(),
52 | self.pipe_size_mock,
53 | self.pipe_semaphore_mock,
54 | True,
55 | ['test_projects/test_module_a/'],
56 | )
57 |
58 | assert pipe_send_mock.call_args_list == [
59 | mock.call('set_pytest_error', exitcode=1, description=None),
60 | mock.call('init_finished')
61 | ]
62 |
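The assertion on `pipe_send_mock.call_args_list` above implies the shape of the runner-to-UI channel: each `pipe_send(method, **kwargs)` call serializes a method name plus keyword arguments, which the UI process reads back and dispatches. A minimal sketch of such a pipe protocol (a hypothetical illustration using JSON lines over `os.pipe`; the names `pipe_send`/`pipe_receive` here are assumptions and the actual pytui wire format may differ):

```python
import json
import os

def pipe_send(write_fd, method, **kwargs):
    # Serialize one UI call as a single JSON line and write it to the pipe.
    payload = json.dumps({'method': method, 'params': kwargs}) + '\n'
    os.write(write_fd, payload.encode('utf-8'))

def pipe_receive(read_file):
    # Read one serialized call back and return (method, params).
    message = json.loads(read_file.readline())
    return message['method'], message['params']
```

With this scheme, `pipe_send(w, 'set_pytest_error', exitcode=1, description=None)` arrives on the other end as `('set_pytest_error', {'exitcode': 1, 'description': None})`, matching the call shape the test asserts.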
--------------------------------------------------------------------------------
/tox.ini:
--------------------------------------------------------------------------------
1 | [tox]
2 | envlist = py{27,36,37,38}-pytest{3,4,5,6}
3 |
4 | [testenv]
5 | deps =
6 |     py27: mock
7 |     pytest3: pytest<4
8 |     pytest4: pytest<5
9 |     pytest5: pytest<6
10 |     pytest6: pytest<7
11 | commands =
12 |     pytest
13 |
14 | setenv =
15 |     PYTHONPATH = {toxinidir}/pytui
16 |
--------------------------------------------------------------------------------