├── .github └── ISSUE_TEMPLATE.md ├── .gitignore ├── .travis.yml ├── AUTHORS.md ├── CONTRIBUTING.md ├── HISTORY.md ├── HtmlTestRunner ├── __init__.py ├── result.py ├── runner.py └── template │ └── report_template.html ├── LICENSE ├── MANIFEST.in ├── Makefile ├── README.md ├── docs ├── console_output.png ├── example_template.html ├── installation.md └── test_results.gif ├── requirements_dev.txt ├── setup.cfg ├── setup.py ├── tests ├── __init__.py ├── test.py ├── test2.py └── test_HtmlTestRunner.py ├── tox.ini └── travis_pypi_setup.py /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | * HtmlTestRunner version: 2 | * Python version: 3 | * Operating System: 4 | 5 | ### Description 6 | 7 | Describe what you were trying to get done. 8 | Tell us what happened, what went wrong, and what you expected to happen. 9 | 10 | ### What I Did 11 | 12 | ``` 13 | Paste the command(s) you ran and the output. 14 | If there was a crash, please include the traceback here. 15 | ``` 16 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | *.egg-info/ 24 | .installed.cfg 25 | *.egg 26 | 27 | # PyInstaller 28 | # Usually these files are written by a python script from a template 29 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
30 | *.manifest 31 | *.spec 32 | 33 | # Installer logs 34 | pip-log.txt 35 | pip-delete-this-directory.txt 36 | 37 | # Unit test / coverage reports 38 | htmlcov/ 39 | .tox/ 40 | .coverage 41 | .coverage.* 42 | .cache 43 | nosetests.xml 44 | coverage.xml 45 | *,cover 46 | .hypothesis/ 47 | 48 | # Translations 49 | *.mo 50 | *.pot 51 | 52 | # Django stuff: 53 | *.log 54 | 55 | # Sphinx documentation 56 | docs/_build/ 57 | 58 | # PyBuilder 59 | target/ 60 | 61 | # pyenv python configuration file 62 | .python-version 63 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | # This file was autogenerated and will overwrite each time you run travis_pypi_setup.py 2 | deploy: 3 | true: 4 | condition: $TOXENV == py35 5 | repo: oldani/HtmlTestRunner 6 | tags: true 7 | distributions: sdist bdist_wheel 8 | provider: pypi 9 | user: ordanis 10 | password: 11 | secure: !!binary | 12 | ai9oZFpXTGpXYStwSXo4VzhPOXgweU9CUlA3U0lTdUx5MVJqVFI4RjRiSCtsNlpadlZlZXFvY3dL 13 | REg4THl5dDVhb2dQWHVESnZ0NFczSm5zK2JwMW1nMWxnUE9NbjJHS3pvMFFVZEZnZGxOREQyamNK 14 | NjhBY2JIaklEYlJ3RzhEQWpUSDFlT3VWVkJCeXUxYXQyQnczbTArajNkOVhjRWUxSE5aVlphaFhG 15 | SExDSkFXdXBUbTMwOHJDMlFHTW1sT1ZxUTlGSFFmZ0pTdlpkY0I1RHprMDFBSVIxaDNicitjYmhl 16 | U0pOMkVXK3NiUkwvTmVTUENmRGJIVjdIWFpzSVVwYzZVR3Y5aFJwSlduUVNwVnJIOU9XdW56Q1Uv 17 | dzkzOURmQWM1c0thc01mR3BzZTN1ZE9DL21NNmVLRmhqTVdOQUtOUE1QYU02ZzJGaWNGODQ5MThV 18 | YnZpMkc4Ym5lM0EwSUNFQjZEVlpzR1Bib0tHVC9XUkJHeHRKNG5xYVp0ajV4SmZoQkk3SXErKy9h 19 | MVI5dW9CdERGSVJ0VTNPT0pDamNLN2dwVFVWTHFVMDRyODVqMS9uZkVvcUpGOXBmdnorTzhxaGJx 20 | Yjdhbjd1eWUzVFlhaXRsWGRwdi83WWdlZDd6RzA5d3JYWDZKemJobzZuTllTRWFRa2s3L0VoM3hG 21 | bytzQ3Rza1dqTXpKVVhTN21vMXg1M2k4WUZndCtnNFpKWnhSS2dXWWtPc0tFNEpJdjF5MGhOcU1X 22 | RVQvTUlYNG1CbnhSSTJCbGd2c1pnRWVaTW8rekt4ZzRyZFc0eHNXL1B0MGszMlFwSHpsSkNNQ3BH 23 | aXcrbXVraFJicWcya3lRc0ZIWUh2eGtjNHFGeGJaSHd2L3dHbWNpcXpuZGFvaXQrY0FaakJsUjA9 24 | env: 
25 | - TOXENV=py35 26 | - TOXENV=py34 27 | 28 | install: pip install -U tox 29 | language: python 30 | python: 3.5 31 | script: tox -e ${TOXENV} 32 | -------------------------------------------------------------------------------- /AUTHORS.md: -------------------------------------------------------------------------------- 1 | # Credits 2 | 3 | 4 | ## Development Lead 5 | 6 | * Ordanis Sanchez Suero 7 | 8 | 9 | ## Contributors 10 | 11 | * James Sloan 12 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | 4 | Contributions are welcome, and they are greatly appreciated! Every 5 | little bit helps, and credit will always be given. 6 | 7 | You can contribute in many ways: 8 | 9 | ## Types of Contributions 10 | 11 | 12 | #### Report Bugs 13 | 14 | Report bugs [here](https://github.com/oldani/HtmlTestRunner/issues). 15 | 16 | If you are reporting a bug, please include: 17 | 18 | * Your operating system name and version. 19 | * Any details about your local setup that might be helpful in troubleshooting. 20 | * Detailed steps to reproduce the bug. 21 | 22 | #### Fix Bugs 23 | 24 | Look through the GitHub issues for bugs. Anything tagged with "bug" 25 | and "help wanted" is open to whoever wants to implement it. 26 | 27 | #### Implement Features 28 | 29 | Look through the GitHub issues for features. Anything tagged with "enhancement" 30 | and "help wanted" is open to whoever wants to implement it. 31 | 32 | #### Write Documentation 33 | 34 | HtmlTestRunner could always use more documentation, whether as part of the 35 | official HtmlTestRunner docs, in docstrings, or even on the web in blog posts, 36 | articles, and such. 37 | 38 | #### Submit Feedback 39 | 40 | The best way to send feedback is to file an [issue](https://github.com/oldani/HtmlTestRunner/issues). 
41 | 42 | If you are proposing a feature: 43 | 44 | * Explain in detail how it would work. 45 | * Keep the scope as narrow as possible, to make it easier to implement. 46 | * Remember that this is a volunteer-driven project, and that contributions 47 | are welcome :) 48 | 49 | ## Get Started! 50 | 51 | 52 | Ready to contribute? Here's how to set up `HtmlTestRunner` for local development. 53 | 54 | 1. Fork the `HtmlTestRunner` repo on GitHub. 55 | 2. Clone your fork locally: 56 | 57 | $ git clone git@github.com:your_name_here/HtmlTestRunner.git 58 | 59 | 3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development: 60 | 61 | $ mkvirtualenv HtmlTestRunner 62 | $ cd HtmlTestRunner/ 63 | $ pip install -r requirements_dev.txt 64 | $ python setup.py develop 65 | 66 | 4. Create a branch for local development: 67 | 68 | $ git checkout -b name-of-your-bugfix-or-feature 69 | 70 | Now you can make your changes locally. 71 | 72 | 5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:: 73 | 74 | $ flake8 HtmlTestRunner tests 75 | $ python setup.py test or py.test 76 | $ tox 77 | 78 | 6. Commit your changes and push your branch to GitHub:: 79 | 80 | $ git add . 81 | $ git commit -m "Your detailed description of your changes." 82 | $ git push origin name-of-your-bugfix-or-feature 83 | 84 | 7. Submit a pull request through the GitHub website. 85 | 86 | 87 | ## Pull Request Guidelines 88 | 89 | 90 | Before you submit a pull request, check that it meets these guidelines: 91 | 92 | 1. The pull request should include tests. 93 | 2. If the pull request adds functionality, the docs should be updated. All 94 | functions should have docstrings. 95 | 3. The pull request should work for Python 2.7 and 3.5. 
Check 96 | [Travis test](https://travis-ci.org/oldani/HtmlTestRunner/pull_requests) 97 | and make sure that the tests pass for all supported Python versions. 98 | 99 | ## Tips 100 | 101 | 102 | To run a subset of tests:: 103 | ```bash 104 | $ python -m unittest tests.test_HtmlTestRunner 105 | ``` -------------------------------------------------------------------------------- /HISTORY.md: -------------------------------------------------------------------------------- 1 | # History 2 | 3 | 4 | ## 1 (2017-01-28) 5 | 6 | * First release on PyPI. 7 | 8 | ## 1.0.1 (2017-01-29) 9 | 10 | * Rename package due to a conflict in PyPI. 11 | 12 | ## 1.0.2 (2017-01-29) 13 | 14 | * Fix broken docs. 15 | 16 | ## 1.0.3 (2017-01-29) 17 | 18 | * Fix bug with the template not being included in the package. 19 | 20 | ## 1.1.0 (2017-06-25) 21 | 22 | * Improvement of documentation 23 | * Custom templates feature 24 | * Minor internal changes 25 | 26 | ## 1.1.1 (2017-07-16) 27 | 28 | * Fix print function bug introduced in 1.1.0 29 | * Fix loading of the default template in py27, broken in 1.1.0 30 | * Fix elapsed time output in console. 31 | * Fix template overwrite when run in the same minute. 32 | 33 | 34 | ## 1.1.2 (2018-01-27) 35 | 36 | * Minor fix 37 | 38 | 39 | ## 1.2 (2019-03-15) 40 | 41 | * Fixed template wording 42 | * Add support for combining reports into a single report. 43 | * Add support for test cases with underscores in their names. 44 | * Add optional timestamp in filenames. 45 | * Add optional automatic opening of generated reports in a browser tab. 46 | * Add support for optional user variables to be passed to the template. 47 | * Add tracebacks to reports. 48 | * Add stdout to reports.
49 | * Print relative paths to generated reports 50 | * Made the default output directory the current working directory so there are now no required args 51 | * Updated and adjusted readme 52 | * Changed use of deprecated _TextTestResult -> TextTestResult 53 | * Changed format of template slightly 54 | * Expanded test case names to include the full path to classes (should avoid clashes from duplicate names) 55 | * Simplified test method names 56 | * Updated docstrings and deleted an unused method 57 | * Add check for template_args to be dict-like 58 | * Add optional report naming 59 | * Add support for subtests 60 | * Add skip reasons for skipped tests 61 | * Change template to support subtests in sub-tables 62 | * Fixed bug where non-combined tests had summaries with details from all tests 63 | * Tweaked format of HTML so that info buttons line up better 64 | -------------------------------------------------------------------------------- /HtmlTestRunner/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | from .runner import HTMLTestRunner 3 | 4 | 5 | __author__ = """Ordanis Sanchez Suero""" 6 | __email__ = 'ordanisanchez@gmail.com' 7 | __version__ = '1.2.1' 8 | -------------------------------------------------------------------------------- /HtmlTestRunner/result.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | 3 | import os 4 | import sys 5 | import time 6 | import copy 7 | import traceback 8 | from unittest import TestResult, TextTestResult 9 | from unittest.result import failfast 10 | 11 | from jinja2 import Template 12 | 13 | 14 | DEFAULT_TEMPLATE = os.path.join(os.path.dirname(__file__), "template", "report_template.html") 15 | 16 | 17 | def load_template(template): 18 | """ Try to read a file from the given path; if the file 19 | does not exist or cannot be read, load the default one.
""" 20 | file = None 21 | try: 22 | if template: 23 | with open(template, "r") as f: 24 | file = f.read() 25 | except Exception as err: 26 | print("Error: Your Template wasn't loaded", err, 27 | "Loading Default Template", sep="\n") 28 | finally: 29 | if not file: 30 | with open(DEFAULT_TEMPLATE, "r") as f: 31 | file = f.read() 32 | return file 33 | 34 | 35 | def render_html(template, **kwargs): 36 | template_file = load_template(template) 37 | if template_file: 38 | template = Template(template_file) 39 | return template.render(**kwargs) 40 | 41 | 42 | def testcase_name(test_method): 43 | testcase = type(test_method) 44 | 45 | module = testcase.__module__ + "." 46 | if module == "__main__.": 47 | module = "" 48 | result = module + testcase.__name__ 49 | return result 50 | 51 | 52 | def strip_module_names(testcase_names): 53 | """Examine all given test case names and strip them the minimal 54 | names needed to distinguish each. This prevents cases where test 55 | cases housed in different files but with the same names cause clashes.""" 56 | result = copy.copy(testcase_names) 57 | for i, testcase in enumerate(testcase_names): 58 | classname = testcase.split(".")[-1] 59 | duplicate_found = False 60 | testcase_names_ = copy.copy(testcase_names) 61 | del testcase_names_[i] 62 | for testcase_ in testcase_names_: 63 | classname_ = testcase_.split(".")[-1] 64 | if classname_ == classname: 65 | duplicate_found = True 66 | if not duplicate_found: 67 | result[i] = classname 68 | return result 69 | 70 | 71 | class _TestInfo(object): 72 | """" Keeps information about the execution of a test method. 
""" 73 | 74 | (SUCCESS, FAILURE, ERROR, SKIP) = range(4) 75 | 76 | def __init__(self, test_result, test_method, outcome=SUCCESS, 77 | err=None, subTest=None): 78 | self.test_result = test_result 79 | self.outcome = outcome 80 | self.elapsed_time = 0 81 | self.err = err 82 | self.stdout = test_result._stdout_data 83 | self.stderr = test_result._stderr_data 84 | 85 | self.is_subtest = subTest is not None 86 | 87 | self.test_description = self.test_result.getDescription(test_method) 88 | self.test_exception_info = ( 89 | '' if outcome in (self.SUCCESS, self.SKIP) 90 | else self.test_result._exc_info_to_string( 91 | self.err, test_method)) 92 | 93 | self.test_name = testcase_name(test_method) 94 | if not self.is_subtest: 95 | self.test_id = test_method.id() 96 | else: 97 | self.test_id = subTest.id() 98 | 99 | def id(self): 100 | return self.test_id 101 | 102 | def test_finished(self): 103 | self.elapsed_time = self.test_result.stop_time - self.test_result.start_time 104 | 105 | def get_description(self): 106 | return self.test_description 107 | 108 | def get_error_info(self): 109 | return self.test_exception_info 110 | 111 | 112 | class _SubTestInfos(object): 113 | # TODO: make better: inherit _TestInfo? 114 | (SUCCESS, FAILURE, ERROR, SKIP) = range(4) 115 | 116 | def __init__(self, test_id, subtests): 117 | self.subtests = subtests 118 | self.test_id = test_id 119 | self.outcome = self.check_outcome() 120 | 121 | def check_outcome(self): 122 | outcome = _TestInfo.SUCCESS 123 | for subtest in self.subtests: 124 | if subtest.outcome != _TestInfo.SUCCESS: 125 | outcome = _TestInfo.FAILURE 126 | break 127 | return outcome 128 | 129 | 130 | class HtmlTestResult(TextTestResult): 131 | """ A test result class that express test results in Html. 
""" 132 | 133 | start_time = None 134 | stop_time = None 135 | default_prefix = "TestResults_" 136 | 137 | def __init__(self, stream, descriptions, verbosity): 138 | TextTestResult.__init__(self, stream, descriptions, verbosity) 139 | self.buffer = True 140 | self._stdout_data = None 141 | self._stderr_data = None 142 | self.successes = [] 143 | self.subtests = {} 144 | self.callback = None 145 | self.infoclass = _TestInfo 146 | self.report_files = [] 147 | 148 | def _prepare_callback(self, test_info, target_list, verbose_str, 149 | short_str): 150 | """ Appends a 'info class' to the given target list and sets a 151 | callback method to be called by stopTest method.""" 152 | target_list.append(test_info) 153 | 154 | def callback(): 155 | """ Print test method outcome to the stream and elapsed time too.""" 156 | test_info.test_finished() 157 | 158 | if self.showAll: 159 | self.stream.writeln( 160 | "{} ({:3f})s".format(verbose_str, test_info.elapsed_time)) 161 | elif self.dots: 162 | self.stream.write(short_str) 163 | 164 | self.callback = callback 165 | 166 | def getDescription(self, test): 167 | """ Return the test description if not have test name. """ 168 | return str(test) 169 | 170 | def startTest(self, test): 171 | """ Called before execute each method. """ 172 | self.start_time = time.time() 173 | TestResult.startTest(self, test) 174 | 175 | if self.showAll: 176 | self.stream.write(" " + self.getDescription(test)) 177 | self.stream.write(" ... ") 178 | 179 | def _save_output_data(self): 180 | try: 181 | self._stdout_data = sys.stdout.getvalue() 182 | self._stderr_data = sys.stderr.getvalue() 183 | except AttributeError: 184 | pass 185 | 186 | def stopTest(self, test): 187 | """ Called after excute each test method. 
""" 188 | self._save_output_data() 189 | TextTestResult.stopTest(self, test) 190 | self.stop_time = time.time() 191 | 192 | if self.callback and callable(self.callback): 193 | self.callback() 194 | self.callback = None 195 | 196 | def addSuccess(self, test): 197 | """ Called when a test executes successfully. """ 198 | self._save_output_data() 199 | self._prepare_callback(self.infoclass(self, test), self.successes, "OK", ".") 200 | 201 | @failfast 202 | def addFailure(self, test, err): 203 | """ Called when a test method fails. """ 204 | self._save_output_data() 205 | testinfo = self.infoclass(self, test, self.infoclass.FAILURE, err) 206 | self._prepare_callback(testinfo, self.failures, "FAIL", "F") 207 | 208 | @failfast 209 | def addError(self, test, err): 210 | """" Called when a test method raises an error. """ 211 | self._save_output_data() 212 | testinfo = self.infoclass(self, test, self.infoclass.ERROR, err) 213 | self._prepare_callback(testinfo, self.errors, 'ERROR', 'E') 214 | 215 | def addSubTest(self, testcase, test, err): 216 | """ Called when a subTest completes. """ 217 | self._save_output_data() 218 | # TODO: should ERROR cases be considered here too? 219 | if err is None: 220 | testinfo = self.infoclass(self, testcase, self.infoclass.SUCCESS, err, subTest=test) 221 | self._prepare_callback(testinfo, self.successes, "OK", ".") 222 | else: 223 | testinfo = self.infoclass(self, testcase, self.infoclass.FAILURE, err, subTest=test) 224 | self._prepare_callback(testinfo, self.failures, "FAIL", "F") 225 | 226 | test_id_components = str(testcase).rstrip(')').split(' (') 227 | test_id = test_id_components[1] + '.' + test_id_components[0] 228 | if test_id not in self.subtests: 229 | self.subtests[test_id] = [] 230 | self.subtests[test_id].append(testinfo) 231 | 232 | def addSkip(self, test, reason): 233 | """" Called when a test method was skipped. 
""" 234 | self._save_output_data() 235 | testinfo = self.infoclass(self, test, self.infoclass.SKIP, reason) 236 | self._prepare_callback(testinfo, self.skipped, "SKIP", "S") 237 | 238 | def printErrorList(self, flavour, errors): 239 | """ 240 | Writes information about the FAIL or ERROR to the stream. 241 | """ 242 | for test_info in errors: 243 | self.stream.writeln(self.separator1) 244 | self.stream.writeln( 245 | '{} [{:3f}s]: {}'.format(flavour, test_info.elapsed_time, 246 | test_info.test_id) 247 | ) 248 | self.stream.writeln(self.separator2) 249 | self.stream.writeln('%s' % test_info.get_error_info()) 250 | 251 | def _get_info_by_testcase(self): 252 | """ Organize test results by TestCase module. """ 253 | 254 | tests_by_testcase = {} 255 | 256 | subtest_names = set(self.subtests.keys()) 257 | for test_name, subtests in self.subtests.items(): 258 | subtest_info = _SubTestInfos(test_name, subtests) 259 | testcase_name = ".".join(test_name.split(".")[:-1]) 260 | if testcase_name not in tests_by_testcase: 261 | tests_by_testcase[testcase_name] = [] 262 | tests_by_testcase[testcase_name].append(subtest_info) 263 | 264 | for tests in (self.successes, self.failures, self.errors, self.skipped): 265 | for test_info in tests: 266 | # subtests will be contained by _SubTestInfos objects but there is also the 267 | # case where all subtests pass and the method is added as a success as well 268 | # which must be filtered out 269 | if test_info.is_subtest or test_info.test_id in subtest_names: 270 | continue 271 | if isinstance(test_info, tuple): # TODO: does this ever occur? 
272 | test_info = test_info[0] 273 | testcase_name = ".".join(test_info.test_id.split(".")[:-1]) 274 | if testcase_name not in tests_by_testcase: 275 | tests_by_testcase[testcase_name] = [] 276 | tests_by_testcase[testcase_name].append(test_info) 277 | 278 | # unittest tests in alphabetical order based on test name so re-assert this 279 | for testcase in tests_by_testcase.values(): 280 | testcase.sort(key=lambda x: x.test_id) 281 | 282 | return tests_by_testcase 283 | 284 | @staticmethod 285 | def _format_duration(elapsed_time): 286 | """Format the elapsed time in seconds, or milliseconds if the duration is less than 1 second.""" 287 | if elapsed_time > 1: 288 | duration = '{:2.2f} s'.format(elapsed_time) 289 | else: 290 | duration = '{:d} ms'.format(int(elapsed_time * 1000)) 291 | return duration 292 | 293 | def get_results_summary(self, tests): 294 | """Create a summary of the outcomes of all given tests.""" 295 | 296 | failures = errors = skips = successes = 0 297 | for test in tests: 298 | outcome = test.outcome 299 | if outcome == test.ERROR: 300 | errors += 1 301 | elif outcome == test.FAILURE: 302 | failures += 1 303 | elif outcome == test.SKIP: 304 | skips += 1 305 | elif outcome == test.SUCCESS: 306 | successes += 1 307 | 308 | elapsed_time = 0 309 | for testinfo in tests: 310 | if not isinstance(testinfo, _SubTestInfos): 311 | elapsed_time += testinfo.elapsed_time 312 | else: 313 | for subtest in testinfo.subtests: 314 | elapsed_time += subtest.elapsed_time 315 | 316 | results_summary = { 317 | "total": len(tests), 318 | "error": errors, 319 | "failure": failures, 320 | "skip": skips, 321 | "success": successes, 322 | "duration": self._format_duration(elapsed_time) 323 | } 324 | 325 | return results_summary 326 | 327 | def _get_header_info(self, tests, start_time): 328 | results_summary = self.get_results_summary(tests) 329 | 330 | header_info = { 331 | "start_time": start_time, 332 | "status": results_summary 333 | } 334 | return header_info 335 | 336 | 
def _get_report_summaries(self, all_results, testRunner): 337 | """ Generate headers and summaries for all given test cases.""" 338 | summaries = {} 339 | for test_case_class_name, test_case_tests in all_results.items(): 340 | summaries[test_case_class_name] = self.get_results_summary(test_case_tests) 341 | 342 | return summaries 343 | 344 | def generate_reports(self, testRunner): 345 | """ Generate report(s) for all given test cases that have been run. """ 346 | status_tags = ('success', 'danger', 'warning', 'info') 347 | all_results = self._get_info_by_testcase() 348 | summaries = self._get_report_summaries(all_results, testRunner) 349 | 350 | if not testRunner.combine_reports: 351 | for test_case_class_name, test_case_tests in all_results.items(): 352 | header_info = self._get_header_info(test_case_tests, testRunner.start_time) 353 | html_file = render_html( 354 | testRunner.template, 355 | title=testRunner.report_title, 356 | header_info=header_info, 357 | all_results={test_case_class_name: test_case_tests}, 358 | status_tags=status_tags, 359 | summaries=summaries, 360 | **testRunner.template_args 361 | ) 362 | # append test case name if multiple reports to be generated 363 | if testRunner.report_name is None: 364 | report_name_body = self.default_prefix + test_case_class_name 365 | else: 366 | report_name_body = "{}_{}".format(testRunner.report_name, test_case_class_name) 367 | self.generate_file(testRunner, report_name_body, html_file) 368 | 369 | else: 370 | header_info = self._get_header_info( 371 | [item for sublist in all_results.values() for item in sublist], 372 | testRunner.start_time 373 | ) 374 | html_file = render_html( 375 | testRunner.template, 376 | title=testRunner.report_title, 377 | header_info=header_info, 378 | all_results=all_results, 379 | status_tags=status_tags, 380 | summaries=summaries, 381 | **testRunner.template_args 382 | ) 383 | # if available, use user report name 384 | if testRunner.report_name is not None: 385 | report_name_body 
= testRunner.report_name 386 | else: 387 | report_name_body = self.default_prefix + "_".join(strip_module_names(list(all_results.keys()))) 388 | self.generate_file(testRunner, report_name_body, html_file) 389 | 390 | def generate_file(self, testRunner, report_name, report): 391 | """ Generate the report file in the given path. """ 392 | dir_to = testRunner.output 393 | if not os.path.exists(dir_to): 394 | os.makedirs(dir_to) 395 | 396 | if testRunner.timestamp: 397 | report_name += "_" + testRunner.timestamp 398 | report_name += ".html" 399 | 400 | path_file = os.path.abspath(os.path.join(dir_to, report_name)) 401 | self.stream.writeln(os.path.relpath(path_file)) 402 | self.report_files.append(path_file) 403 | with open(path_file, 'w', encoding='utf-8') as report_file: 404 | report_file.write(report) 405 | 406 | def _exc_info_to_string(self, err, test): 407 | """ Converts a sys.exc_info()-style tuple of values into a string.""" 408 | # if six.PY3: 409 | # # It works fine in python 3 410 | # try: 411 | # return super(_HTMLTestResult, self)._exc_info_to_string( 412 | # err, test) 413 | # except AttributeError: 414 | # # We keep going using the legacy python <= 2 way 415 | # pass 416 | 417 | # This comes directly from python2 unittest 418 | exctype, value, tb = err 419 | # Skip test runner traceback levels 420 | while tb and self._is_relevant_tb_level(tb): 421 | tb = tb.tb_next 422 | 423 | if exctype is test.failureException: 424 | # Skip assert*() traceback levels 425 | msg_lines = traceback.format_exception(exctype, value, tb) 426 | else: 427 | msg_lines = traceback.format_exception(exctype, value, tb) 428 | 429 | if self.buffer: 430 | # Only try to get sys.stderr as it might not be 431 | # StringIO yet, e.g. 
when test fails during __call__ 432 | try: 433 | error = sys.stderr.getvalue() 434 | except AttributeError: 435 | error = None 436 | if error: 437 | if not error.endswith('\n'): 438 | error += '\n' 439 | msg_lines.append(error) 440 | # This is the extra magic to make sure all lines are str 441 | encoding = getattr(sys.stdout, 'encoding', 'utf-8') 442 | lines = [] 443 | for line in msg_lines: 444 | if not isinstance(line, str): 445 | # utf8 shouldn't be hard-coded, but not sure f 446 | line = line.encode(encoding) 447 | lines.append(line) 448 | 449 | return ''.join(lines) 450 | -------------------------------------------------------------------------------- /HtmlTestRunner/runner.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | from datetime import datetime 4 | 5 | from unittest import TextTestRunner 6 | from .result import HtmlTestResult 7 | 8 | UTF8 = "UTF-8" 9 | 10 | 11 | class HTMLTestRunner(TextTestRunner): 12 | """ A test runner class that outputs the results in HTML.
""" 13 | 14 | time_format = "%Y-%m-%d_%H-%M-%S" 15 | 16 | def __init__(self, output="./reports/", verbosity=2, stream=sys.stderr, 17 | descriptions=True, failfast=False, buffer=False, 18 | report_title=None, report_name=None, template=None, resultclass=None, 19 | add_timestamp=True, open_in_browser=False, 20 | combine_reports=False, template_args=None): 21 | self.verbosity = verbosity 22 | self.output = output 23 | self.encoding = UTF8 24 | 25 | TextTestRunner.__init__(self, stream, descriptions, verbosity, 26 | failfast=failfast, buffer=buffer) 27 | 28 | if add_timestamp: 29 | self.timestamp = time.strftime(self.time_format) 30 | else: 31 | self.timestamp = "" 32 | 33 | if resultclass is None: 34 | self.resultclass = HtmlTestResult 35 | else: 36 | self.resultclass = resultclass 37 | 38 | if template_args is not None and not isinstance(template_args, dict): 39 | raise ValueError("template_args must be a dict-like.") 40 | self.template_args = template_args or {} 41 | 42 | self.report_title = report_title or "Unittest Results" 43 | self.report_name = report_name 44 | self.template = template 45 | 46 | self.open_in_browser = open_in_browser 47 | self.combine_reports = combine_reports 48 | 49 | self.start_time = 0 50 | self.time_taken = 0 51 | 52 | def _make_result(self): 53 | """ Create a TestResult object which will be used to store 54 | information about the executed tests. """ 55 | return self.resultclass(self.stream, self.descriptions, self.verbosity) 56 | 57 | def run(self, test): 58 | """ Runs the given testcase or testsuite. """ 59 | try: 60 | 61 | result = self._make_result() 62 | result.failfast = self.failfast 63 | if hasattr(test, 'properties'): 64 | # junit testsuite properties 65 | result.properties = test.properties 66 | 67 | self.stream.writeln() 68 | self.stream.writeln("Running tests... 
") 69 | self.stream.writeln(result.separator2) 70 | 71 | self.start_time = datetime.now() 72 | test(result) 73 | stop_time = datetime.now() 74 | self.time_taken = stop_time - self.start_time 75 | 76 | result.printErrors() 77 | self.stream.writeln(result.separator2) 78 | run = result.testsRun 79 | self.stream.writeln("Ran {} test{} in {}".format(run, 80 | run != 1 and "s" or "", str(self.time_taken)[:7])) 81 | self.stream.writeln() 82 | 83 | expectedFails = len(result.expectedFailures) 84 | unexpectedSuccesses = len(result.unexpectedSuccesses) 85 | skipped = len(result.skipped) 86 | 87 | infos = [] 88 | if not result.wasSuccessful(): 89 | self.stream.writeln("FAILED") 90 | failed, errors = map(len, (result.failures, result.errors)) 91 | if failed: 92 | infos.append("Failures={0}".format(failed)) 93 | if errors: 94 | infos.append("Errors={0}".format(errors)) 95 | else: 96 | self.stream.writeln("OK") 97 | 98 | if skipped: 99 | infos.append("Skipped={}".format(skipped)) 100 | if expectedFails: 101 | infos.append("Expected Failures={}".format(expectedFails)) 102 | if unexpectedSuccesses: 103 | infos.append("Unexpected Successes={}".format(unexpectedSuccesses)) 104 | 105 | if infos: 106 | self.stream.writeln(" ({})".format(", ".join(infos))) 107 | else: 108 | self.stream.writeln("\n") 109 | 110 | self.stream.writeln() 111 | self.stream.writeln('Generating HTML reports... ') 112 | result.generate_reports(self) 113 | if self.open_in_browser: 114 | import webbrowser 115 | for report in result.report_files: 116 | webbrowser.open_new_tab('file://' + report) 117 | finally: 118 | pass 119 | return result 120 | -------------------------------------------------------------------------------- /HtmlTestRunner/template/report_template.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | {{ title }} 5 | 6 | 7 | 8 | 9 | 10 |
11 |
12 |
13 |

{{ title }}

14 |

Start Time: {{ header_info.start_time.strftime("%Y-%m-%d %H:%M:%S") }}

15 |

Duration: {{ header_info.status.duration }}

16 |

Summary: Total: {{ header_info.status.total }}, Pass: {{ header_info.status.success }}{% if header_info.status.failure %}, Fail: {{ header_info.status.failure }}{% endif %}{% if header_info.status.error %}, Error: {{ header_info.status.error }}{% endif %}{% if header_info.status.skip %}, Skip: {{ header_info.status.skip }}{% endif %}

17 |
18 |
19 | {%- for test_case_name, tests_results in all_results.items() %} 20 | {%- if tests_results %} 21 |
22 |
23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | {%- for test_case in tests_results %} 33 | {%- if not test_case.subtests is defined %} 34 | 35 | 36 | 49 | 54 | 55 | {%- if (test_case.stdout or test_case.err or test_case.err) and test_case.outcome != test_case.SKIP %} 56 | 57 | 62 | 63 | {%- endif %} 64 | {%- if (test_case.stdout or test_case.err or test_case.err) and test_case.outcome == test_case.SKIP %} 65 | 66 | 70 | 71 | {%- endif %} 72 | {%- else %} 73 | 74 | 75 | 84 | 89 | 90 | {%- if test_case.subtests %} 91 | 92 | 126 | 127 | {%- endif %} 128 | {%- endif %} 129 | {%- endfor %} 130 | 131 | 134 | 135 | 136 |
{{ test_case_name }}Status
{{ test_case.test_id.split(".")[-1] }} 37 | 38 | {%- if test_case.outcome == test_case.SUCCESS -%} 39 | Pass 40 | {%- elif test_case.outcome == test_case.SKIP -%} 41 | Skip 42 | {%- elif test_case.outcome == test_case.FAILURE -%} 43 | Fail 44 | {%- else -%} 45 | Error 46 | {%- endif -%} 47 | 48 | 50 | {%- if (test_case.stdout or test_case.err) %} 51 | 52 | {%- endif %} 53 |
58 | {%- if test_case.stdout %}

{{ test_case.stdout }}

{% endif %} 59 | {%- if test_case.err %}

{{ test_case.err[0].__name__ }}: {{ test_case.err[1] }}

{% endif %} 60 | {%- if test_case.err %}

{{ test_case.test_exception_info }}

{% endif %} 61 |
67 | {%- if test_case.stdout %}

{{ test_case.stdout }}

{% endif %} 68 | {%- if test_case.err %}

{{ test_case.err }}

{% endif %} 69 |
{{ test_case.test_id.split(".")[-1] }} 76 | 77 | {%- if test_case.outcome == test_case.SUCCESS -%} 78 | Pass 79 | {%- else -%} 80 | Fail 81 | {%- endif -%} 82 | 83 | 85 | {%- if test_case.subtests %} 86 | 87 | {%- endif %} 88 |
93 | 94 | 95 | 96 | {%- for subtest in test_case.subtests %} 97 | 98 | 99 | 108 | 113 | 114 | {%- if subtest.err %} 115 | 116 | 120 | 121 | {%- endif %} 122 | {% endfor %} 123 | 124 |
{{ subtest.test_id.split(".")[-1] }} 100 | 101 | {%- if subtest.outcome == subtest.SUCCESS -%} 102 | Pass 103 | {%- else -%} 104 | Fail 105 | {%- endif -%} 106 | 107 | 109 | {%- if subtest.err %} 110 | 111 | {%- endif %} 112 |
117 | {%- if subtest.err %}

{{ subtest.err[0].__name__ }}: {{ subtest.err[1] }}

{% endif %} 118 | {%- if subtest.err %}

{{ subtest.test_exception_info }}

{% endif %} 119 |
125 |
132 | Total: {{ summaries[test_case_name].total }}, Pass: {{ summaries[test_case_name].success }}{% if summaries[test_case_name].failure %}, Fail: {{ summaries[test_case_name].failure }}{% endif %}{% if summaries[test_case_name].error %}, Error: {{ summaries[test_case_name].error }}{% endif %}{% if summaries[test_case_name].skip %}, Skip: {{ summaries[test_case_name].skip }}{% endif %} -- Duration: {{ summaries[test_case_name].duration }} 133 |
137 |
138 |
139 | {%- endif %} 140 | {%- endfor %} 141 |
142 | 143 | 161 | 162 | 163 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | MIT License 3 | 4 | Copyright (c) 2017, Ordanis Sanchez Suero 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 7 | 8 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 9 | 10 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
11 | 12 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | 2 | include AUTHORS.rst 3 | 4 | include CONTRIBUTING.rst 5 | include HISTORY.rst 6 | include LICENSE 7 | include README.rst 8 | 9 | recursive-include * *.html 10 | recursive-include tests * 11 | recursive-exclude * __pycache__ 12 | recursive-exclude * *.py[co] 13 | 14 | recursive-include docs *.rst conf.py Makefile make.bat *.jpg *.png *.gif 15 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: clean clean-test clean-pyc clean-build docs help 2 | .DEFAULT_GOAL := help 3 | define BROWSER_PYSCRIPT 4 | import os, webbrowser, sys 5 | try: 6 | from urllib import pathname2url 7 | except: 8 | from urllib.request import pathname2url 9 | 10 | webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1]))) 11 | endef 12 | export BROWSER_PYSCRIPT 13 | 14 | define PRINT_HELP_PYSCRIPT 15 | import re, sys 16 | 17 | for line in sys.stdin: 18 | match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$$', line) 19 | if match: 20 | target, help = match.groups() 21 | print("%-20s %s" % (target, help)) 22 | endef 23 | export PRINT_HELP_PYSCRIPT 24 | BROWSER := python -c "$$BROWSER_PYSCRIPT" 25 | 26 | help: 27 | @python -c "$$PRINT_HELP_PYSCRIPT" < $(MAKEFILE_LIST) 28 | 29 | clean: clean-build clean-pyc clean-test ## remove all build, test, coverage and Python artifacts 30 | 31 | 32 | clean-build: ## remove build artifacts 33 | rm -fr build/ 34 | rm -fr dist/ 35 | rm -fr .eggs/ 36 | find . -name '*.egg-info' -exec rm -fr {} + 37 | find . -name '*.egg' -exec rm -f {} + 38 | 39 | clean-pyc: ## remove Python file artifacts 40 | find . -name '*.pyc' -exec rm -f {} + 41 | find . -name '*.pyo' -exec rm -f {} + 42 | find . -name '*~' -exec rm -f {} + 43 | find . 
-name '__pycache__' -exec rm -fr {} + 44 | 45 | clean-test: ## remove test and coverage artifacts 46 | rm -fr .tox/ 47 | rm -f .coverage 48 | rm -fr htmlcov/ 49 | 50 | ##lint: ## check style with flake8 51 | ##flake8 HtmlTestRunner tests 52 | 53 | test: ## run tests quickly with the default Python 54 | 55 | python setup.py test 56 | 57 | ##test-all: ## run tests on every Python version with tox 58 | ##tox 59 | 60 | ##coverage: ## check code coverage quickly with the default Python 61 | 62 | ##coverage run --source HtmlTestRunner setup.py test 63 | 64 | ##coverage report -m 65 | ##coverage html 66 | ##$(BROWSER) htmlcov/index.html 67 | 68 | ##docs: ## generate Sphinx HTML documentation, including API docs 69 | # rm -f docs/HtmlTestRunner.rst 70 | # rm -f docs/modules.rst 71 | # sphinx-apidoc -o docs/ HtmlTestRunner 72 | # $(MAKE) -C docs clean 73 | # $(MAKE) -C docs html 74 | # $(BROWSER) docs/_build/html/index.html 75 | 76 | ##servedocs: docs ## compile the docs watching for changes 77 | ##watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D . 
78 | 79 | release: clean ## package and upload a release 80 | python setup.py sdist upload 81 | python setup.py bdist_wheel upload 82 | 83 | dist: clean ## builds source and wheel package 84 | python setup.py sdist 85 | python setup.py bdist_wheel 86 | ls -l dist 87 | 88 | install: clean ## install the package to the active Python's site-packages 89 | python setup.py install 90 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # HtmlTestRunner 2 | 3 | 4 | [![Pypi link](https://img.shields.io/pypi/v/html-testRunner.svg)](https://pypi.python.org/pypi/html-testRunner) 5 | [![Travis job](https://img.shields.io/travis/oldani/HtmlTestRunner.svg)](https://travis-ci.org/oldani/HtmlTestRunner) 6 | 7 | 8 | 9 | HtmlTestRunner is a unittest test runner that saves results in a human-readable HTML format. 10 | 11 | This package was inspired by ``unittest-xml-reporting`` and 12 | ``HtmlTestRunner by tungwaiyip`` and began by combining the methodology of the former with the functionality of the latter. 13 | 14 | ## Table of Contents 15 | 16 | - [Installation](#installation) 17 | - [Usage](#usage) 18 | - [Console Output](#console-output) 19 | - [Test Results](#test-result) 20 | - [Todo](#todo) 21 | - [Contributing](#contributing) 22 | - [Credits](#credits) 23 | 24 | ## Installation 25 | 26 | 27 | To install HtmlTestRunner, run this command in your terminal: 28 | 29 | ```batch 30 | $ pip install html-testRunner 31 | ``` 32 | 33 | This is the preferred method to install HtmlTestRunner, as it will always install the most recent stable release. 34 | If you don't have [pip](https://pip.pypa.io) installed, this [Python installation guide](http://docs.python-guide.org/en/latest/starting/installation/) can guide 35 | you through the process.
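Once installed, you can sanity-check the install from Python. Note that the distribution name (`html-testRunner`) differs from the import name (`HtmlTestRunner`); the snippet below is a minimal check of the latter:

```python
import importlib.util

# The PyPI distribution is named "html-testRunner", but the importable
# package is "HtmlTestRunner". find_spec looks it up without importing it.
spec = importlib.util.find_spec("HtmlTestRunner")
print("installed" if spec is not None else "not installed")
```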
36 | 37 | 38 | ## Usage: 39 | 40 | ### With unittest.main() 41 | 42 | ```python 43 | 44 | import HtmlTestRunner 45 | import unittest 46 | 47 | 48 | class TestStringMethods(unittest.TestCase): 49 | """ Example test for HtmlRunner. """ 50 | 51 | def test_upper(self): 52 | self.assertEqual('foo'.upper(), 'FOO') 53 | 54 | def test_isupper(self): 55 | self.assertTrue('FOO'.isupper()) 56 | self.assertFalse('Foo'.isupper()) 57 | 58 | def test_split(self): 59 | s = 'hello world' 60 | self.assertEqual(s.split(), ['hello', 'world']) 61 | # check that s.split fails when the separator is not a string 62 | with self.assertRaises(TypeError): 63 | s.split(2) 64 | 65 | def test_error(self): 66 | """ This test should be marked as an error. """ 67 | raise ValueError 68 | 69 | def test_fail(self): 70 | """ This test should fail. """ 71 | self.assertEqual(1, 2) 72 | 73 | @unittest.skip("This is a skipped test.") 74 | def test_skip(self): 75 | """ This test should be skipped. """ 76 | pass 77 | 78 | if __name__ == '__main__': 79 | unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner()) 80 | ``` 81 | 82 | Just import `HtmlTestRunner` from the package, then pass it to `unittest.main` with the `testRunner` keyword. 83 | Tests will be saved under a `reports/` directory by default (the `output` kwarg controls this). 84 | 85 | ### With Test Suites 86 | `HtmlTestRunner` can also be used with test suites; just create a runner instance and call the run method with your suite.
87 | Here is an example: 88 | 89 | ```python 90 | from unittest import TestLoader, TestSuite 91 | from HtmlTestRunner import HTMLTestRunner 92 | import ExampleTest 93 | import Example2Test 94 | 95 | example_tests = TestLoader().loadTestsFromTestCase(ExampleTest) 96 | example2_tests = TestLoader().loadTestsFromTestCase(Example2Test) 97 | 98 | suite = TestSuite([example_tests, example2_tests]) 99 | 100 | runner = HTMLTestRunner(output='example_suite') 101 | 102 | runner.run(suite) 103 | ``` 104 | 105 | ### Combining Reports into a Single Report 106 | 107 | By default, separate reports will be produced for each `TestCase`. 108 | The `combine_reports` boolean kwarg can be used to tell `HTMLTestRunner` to instead produce a single report: 109 | ```python 110 | import HtmlTestRunner 111 | h = HtmlTestRunner.HTMLTestRunner(combine_reports=True).run(suite) 112 | ``` 113 | 114 | ### Setting a filename 115 | By default, the name of the HTML file(s) produced will be created by joining the names of each test case together. 116 | The `report_name` kwarg can be used to specify a custom filename. 117 | For example, the following will produce a report file called "MyReport.html": 118 | 119 | ```python 120 | import HtmlTestRunner 121 | h = HtmlTestRunner.HTMLTestRunner(combine_reports=True, report_name="MyReport", add_timestamp=False).run(suite) 122 | ``` 123 | 124 | ## Console output: 125 | 126 | ![Console output](docs/console_output.png) 127 | 128 | This is an example of the console output expected when using `HTMLTestRunner`. 129 | 130 | 131 | ## Test Result: 132 | 133 | ![Test Results](docs/test_results.gif) 134 | 135 | This is a sample of the results from the default template that ships with the runner.
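Because reports are plain files on disk, collecting them after a run (for CI artifacts, say) is just a directory listing. A small helper sketch — `list_reports` is illustrative, not part of the package, and `reports/` is the default output location described above:

```python
import os

def list_reports(report_dir="reports"):
    """Return the sorted HTML report filenames in report_dir ([] if none yet)."""
    if not os.path.isdir(report_dir):
        return []
    return sorted(name for name in os.listdir(report_dir) if name.endswith(".html"))

print(list_reports())
```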
136 | 137 | ## Custom Templates: 138 | 139 | If you want to use your own template, you can pass its absolute path when instantiating the `HTMLTestRunner` class using the `template` kwarg: 140 | ```python 141 | import HtmlTestRunner 142 | h = HtmlTestRunner.HTMLTestRunner(template='path/to/template') 143 | ``` 144 | Your template must use `jinja2` syntax, since this is the engine we use. 145 | 146 | 147 | When using any template, the following variables will be available by default for use by `jinja2`: 148 | 149 | - `title`: This is the report title - by default this is "Unittests Results" but can be changed using the `report_title` kwarg 150 | - `headers`: This is a dict with 2 items: 151 | - `start_time`: A `datetime` object representing when the test was run 152 | - `status`: A dict of the same form as the sub-dicts described below for `summaries` but for all tests combined 153 | - `all_results`: A dict - keys are the names of each test case and values are lists containing test result objects (see the source code or the template for what information these provide) 154 | - `summaries`: A dict - keys are the names of each test case and values are dicts containing: 155 | - `total`: The total number of tests 156 | - `success`: The number of passed tests 157 | - `failure`: The number of failed tests 158 | - `error`: The number of errored tests 159 | - `skip`: The number of skipped tests 160 | - `duration`: A string showing how long all these tests took to run, in either seconds or milliseconds 161 | 162 | Furthermore, you can provide any number of further variables to access from the template using the `template_args` kwarg.
163 | For example, if you wanted to have the name of the logged-in user available to insert into reports, that could be achieved as follows: 164 | ```python 165 | import getpass 166 | import HtmlTestRunner 167 | 168 | template_args = { 169 | "user": getpass.getuser() 170 | } 171 | h = HtmlTestRunner.HTMLTestRunner(template='path/to/template', template_args=template_args) 172 | ``` 173 | 174 | Now the user name can be accessed from a template using `jinja2` syntax: `{{ user }}`. 175 | 176 | 177 | Click [here](docs/example_template.html) for a template example; this is the default one shipped with the package. 178 | 179 | 180 | 181 | ## TODO 182 | 183 | - [ ] Add Test 184 | - [ ] Improve documentation 185 | - [x] Add custom templates 186 | - [ ] Add xml results 187 | - [ ] Add support for Python2.7 188 | - [x] Add support for one report when running test suites. 189 | 190 | ## Contributing 191 | 192 | Contributions are welcome, and they are greatly appreciated! Every 193 | little bit helps, and credit will always be given. 194 | 195 | For more info, please click [here](./CONTRIBUTING.md). 196 | 197 | ## Credits 198 | 199 | This package was created with Cookiecutter and the `audreyr/cookiecutter-pypackage` project template. 200 | 201 | - [Cookiecutter](https://github.com/audreyr/cookiecutter) 202 | - [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) 203 | 204 | -------------------------------------------------------------------------------- /docs/console_output.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/oldani/HtmlTestRunner/8529dc39c348411ed5177fe281f18a456680f803/docs/console_output.png -------------------------------------------------------------------------------- /docs/example_template.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | {{ title }} 5 | 6 | 7 | 8 | 9 | 10 |
11 |
12 |
13 |

{{ title }}

14 |

Start Time: {{ header_info.start_time.strftime("%Y-%m-%d %H:%M:%S") }}

15 |

Duration: {{ header_info.duration }}

16 |

Summary: {{ header_info.status }}

17 |
18 |
19 | {% for test_case_name, tests_results in all_results.items() %} 20 | {% if tests_results %} 21 |
22 |
23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | {% for test_case in tests_results %} 32 | 33 | 34 | 50 | 51 | {% if test_case.stdout or test_case.err %} 52 | 53 | 58 | 59 | {% endif %} 60 | {% endfor %} 61 | 62 | 65 | 66 | 67 |
{{ test_case_name }}Status
{{ test_case.test_description }} 35 | 36 | {% if test_case.outcome == test_case.SUCCESS %} 37 | Pass 38 | {% elif test_case.outcome == test_case.SKIP %} 39 | Skip 40 | {% elif test_case.outcome == test_case.FAILURE %} 41 | Fail 42 | {% else %} 43 | Error 44 | {% endif %} 45 | 46 | {% if test_case.stdout or test_case.err %} 47 |   48 | {% endif %} 49 |
54 | {% if test_case.stdout %}

{{ test_case.stdout }}

{% endif %} 55 | {% if test_case.err %}

{{ test_case.err[0].__name__ }}: {{ test_case.err[1] }}

{% endif %} 56 | {% if test_case.err %}

{{ test_case.test_exception_info }}

{% endif %} 57 |
63 | {{ summaries[test_case_name] }} 64 |
68 |
69 |
70 | {% endif %} 71 | {% endfor %} 72 |
73 | 74 | 91 | 92 | 93 | -------------------------------------------------------------------------------- /docs/installation.md: -------------------------------------------------------------------------------- 1 | # Installation 2 | 3 | 4 | ## Stable release 5 | 6 | 7 | To install HtmlTestRunner, run this command in your terminal: 8 | 9 | ```batch 10 | 11 | $ pip install html-testRunner 12 | ``` 13 | 14 | This is the preferred method to install HtmlTestRunner, as it will always install the most recent stable release. If you don't have [pip](https://pip.pypa.io) installed, this [Python installation guide](http://docs.python-guide.org/en/latest/starting/installation/) can guide 15 | you through the process. 16 | 17 | 18 | # From sources 19 | 20 | The sources for HtmlTestRunner can be downloaded from the [Github repo](https://github.com/oldani/HtmlTestRunner). 21 | 22 | You can either clone the public repository: 23 | 24 | ```batch 25 | 26 | $ git clone git://github.com/oldani/HtmlTestRunner 27 | ``` 28 | 29 | Or download the [tarball](https://github.com/oldani/HtmlTestRunner/tarball/master): 30 | 31 | ```batch 32 | 33 | $ curl -OL https://github.com/oldani/HtmlTestRunner/tarball/master 34 | ``` 35 | 36 | Once you have a copy of the source, you can install it with: 37 | 38 | ```batch 39 | 40 | $ python setup.py install 41 | ``` 42 | -------------------------------------------------------------------------------- /docs/test_results.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/oldani/HtmlTestRunner/8529dc39c348411ed5177fe281f18a456680f803/docs/test_results.gif -------------------------------------------------------------------------------- /requirements_dev.txt: -------------------------------------------------------------------------------- 1 | pip==8.1.2 2 | bumpversion==0.5.3 3 | wheel==0.29.0 4 | watchdog==0.8.3 5 | flake8==2.6.0 6 | tox==2.3.1 7 | coverage==4.1 8 | Sphinx==1.4.8 9 | 
cryptography==1.7 10 | pyyaml>=4.2b1 11 | 12 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [bumpversion] 2 | current_version = 1.2.1 3 | commit = True 4 | tag = True 5 | 6 | [bumpversion:file:setup.py] 7 | search = version='{current_version}' 8 | replace = version='{new_version}' 9 | 10 | [bumpversion:file:HtmlTestRunner/__init__.py] 11 | search = __version__ = '{current_version}' 12 | replace = __version__ = '{new_version}' 13 | 14 | [bdist_wheel] 15 | universal = 1 16 | 17 | [flake8] 18 | exclude = docs 19 | 20 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | =============================== 6 | HtmlTestRunner 7 | =============================== 8 | 9 | 10 | .. image:: https://img.shields.io/pypi/v/html-testRunner.svg 11 | :target: https://pypi.python.org/pypi/html-testRunner 12 | 13 | .. image:: https://img.shields.io/travis/oldani/HtmlTestRunner.svg 14 | :target: https://travis-ci.org/oldani/HtmlTestRunner 15 | 16 | 17 | 18 | HtmlTestRunner is a unittest test runner that saves test results 19 | in HTML files, for a human-readable presentation of results. 20 | 21 | This package was inspired by ``unittest-xml-reporting`` and 22 | ``HtmlTestRunner by tungwaiyip``. 23 | 24 | Usage: 25 | -------------- 26 | 27 | .. code-block:: python 28 | 29 | import HtmlTestRunner 30 | import unittest 31 | 32 | 33 | class TestStringMethods(unittest.TestCase): 34 | 35 | def test_upper(self): 36 | self.assertEqual('foo'.upper(), 'FOO') 37 | 38 | def test_error(self): 39 | """ This test should be marked as an error. """ 40 | raise ValueError 41 | 42 | def test_fail(self): 43 | """ This test should fail.
""" 44 | self.assertEqual(1, 2) 45 | 46 | @unittest.skip("This is a skipped test.") 47 | def test_skip(self): 48 | """ This test should be skipped. """ 49 | pass 50 | 51 | if __name__ == '__main__': 52 | unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner(output='example_dir')) 53 | 54 | It is as simple as importing the class and initializing it. It has only one 55 | required parameter, ``output``, which is used to place the report in a 56 | subdirectory of the ``reports`` directory. 57 | 58 | Links: 59 | --------- 60 | 61 | * `Github <https://github.com/oldani/HtmlTestRunner>`_ 62 | """ 63 | 64 | from setuptools import setup 65 | 66 | requirements = [ 67 | # Package requirements here 68 | "Jinja2>=2.10.1" 69 | ] 70 | 71 | test_requirements = [ 72 | # Package test requirements here 73 | ] 74 | 75 | setup( 76 | name='html-testRunner', 77 | version='1.2.1', 78 | description="A Test Runner in python, for Human Readable HTML Reports", 79 | long_description=__doc__, 80 | author="Ordanis Sanchez Suero", 81 | author_email='ordanisanchez@gmail.com', 82 | url='https://github.com/oldani/HtmlTestRunner', 83 | packages=[ 84 | 'HtmlTestRunner', 85 | ], 86 | package_dir={'HtmlTestRunner': 87 | 'HtmlTestRunner'}, 88 | include_package_data=True, 89 | install_requires=requirements, 90 | license="MIT license", 91 | zip_safe=False, 92 | keywords='HtmlTestRunner TestRunner Html Reports', 93 | classifiers=[ 94 | 'Development Status :: 4 - Beta', 95 | 'Intended Audience :: Developers', 96 | 'License :: OSI Approved :: MIT License', 97 | 'Natural Language :: English', 98 | 'Programming Language :: Python :: 3.5', 99 | ], 100 | test_suite='tests', 101 | tests_require=test_requirements 102 | ) 103 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | -------------------------------------------------------------------------------- /tests/test.py:
-------------------------------------------------------------------------------- 1 | # import HtmlTestRunner 2 | # import unittest 3 | 4 | 5 | # class TestStringMethods(unittest.TestCase): 6 | # """ Example test for HtmlRunner. """ 7 | 8 | # def test_upper(self): 9 | # self.assertEqual('foo'.upper(), 'FOO') 10 | 11 | # def test_isupper(self): 12 | # self.assertTrue('FOO'.isupper()) 13 | # self.assertFalse('Foo'.isupper()) 14 | 15 | # def test_split(self): 16 | # s = 'hello world' 17 | # self.assertEqual(s.split(), ['hello', 'world']) 18 | # # check that s.split fails when the separator is not a string 19 | # with self.assertRaises(TypeError): 20 | # s.split(2) 21 | 22 | # def test_error(self): 23 | # """ This test should be marked as error one. """ 24 | # raise ValueError 25 | 26 | # def test_fail(self): 27 | # """ This test should fail. """ 28 | # self.assertEqual(1, 2) 29 | 30 | # @unittest.skip("This is a skipped test.") 31 | # def test_skip(self): 32 | # """ This test should be skipped. 
""" 33 | # pass 34 | 35 | # def test_subs_fail(self): 36 | # test_string = "test1" 37 | # for i, char in enumerate(test_string): 38 | # with self.subTest(i=i): 39 | # self.assertEqual(char, "1") 40 | 41 | # with self.subTest(test_string=test_string): 42 | # # subtests that error will appear as a failure presently 43 | # raise AttributeError 44 | 45 | 46 | # class MoreTests(unittest.TestCase): 47 | # def test_1(self): 48 | # print("This is different to test2.MoreTests.test_1") 49 | # self.assertEqual(100, -100) 50 | 51 | 52 | # if __name__ == '__main__': 53 | # unittest.main( 54 | # testRunner=HtmlTestRunner.HTMLTestRunner( 55 | # open_in_browser=True, 56 | # combine_reports=True, 57 | # template_args={} 58 | # ) 59 | # ) 60 | -------------------------------------------------------------------------------- /tests/test2.py: -------------------------------------------------------------------------------- 1 | # import unittest 2 | 3 | # from HtmlTestRunner import HTMLTestRunner 4 | 5 | # from test import TestStringMethods 6 | # from test import MoreTests as MoreTests_ 7 | 8 | 9 | # class My_Tests(unittest.TestCase): 10 | 11 | # def test_one(self): 12 | # self.assertTrue(True) 13 | 14 | # def test_two(self): 15 | # # demonstrate that stdout is captured in passing tests 16 | # print("HOLA CARACOLA") 17 | # self.assertTrue(True) 18 | 19 | # def test_three(self): 20 | # self.assertTrue(True) 21 | 22 | # def test_1(self): 23 | # # demonstrate that stdout is captured in failing tests 24 | # print("HELLO") 25 | # self.assertTrue(False) 26 | 27 | # def test_2(self): 28 | # self.assertTrue(False) 29 | 30 | # def test_3(self): 31 | # self.assertTrue(False) 32 | 33 | # def test_z_subs_pass(self): 34 | # for i in range(2): 35 | # with self.subTest(i=i): 36 | # print("i = {}".format(i)) # this won't appear for now 37 | # self.assertEqual(i, i) 38 | 39 | 40 | # class MoreTests(unittest.TestCase): 41 | # def test_1(self): 42 | # print("This is different to test.MoreTests.test_1") 43 
| # self.assertAlmostEqual(1, 1.1, delta=0.05) 44 | 45 | 46 | # if __name__ == '__main__': 47 | # tests = unittest.TestLoader().loadTestsFromTestCase(My_Tests) 48 | # other_tests = unittest.TestLoader().loadTestsFromTestCase(TestStringMethods) 49 | # more_tests = unittest.TestLoader().loadTestsFromTestCase(MoreTests) 50 | # more_tests_ = unittest.TestLoader().loadTestsFromTestCase(MoreTests_) 51 | # suite = unittest.TestSuite([tests, other_tests, more_tests, more_tests_]) 52 | # HTMLTestRunner( 53 | # report_title='TEST COMBINED', 54 | # report_name="MyReports", 55 | # add_timestamp=False, 56 | # open_in_browser=True, 57 | # combine_reports=True 58 | # ).run(suite) 59 | 60 | # tests = unittest.TestLoader().loadTestsFromTestCase(My_Tests) 61 | # other_tests = unittest.TestLoader().loadTestsFromTestCase(TestStringMethods) 62 | # more_tests = unittest.TestLoader().loadTestsFromTestCase(MoreTests) 63 | # more_tests_ = unittest.TestLoader().loadTestsFromTestCase(MoreTests_) 64 | # suite = unittest.TestSuite([tests, other_tests, more_tests, more_tests_]) 65 | # HTMLTestRunner( 66 | # report_title='TEST SEPARATE', 67 | # report_name="MyReports", 68 | # open_in_browser=True, 69 | # combine_reports=False 70 | # ).run(suite) 71 | -------------------------------------------------------------------------------- /tests/test_HtmlTestRunner.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """ 5 | test_HtmlTestRunner 6 | ---------------------------------- 7 | 8 | Tests for `HtmlTestRunner` module. 
9 | """ 10 | 11 | 12 | import sys 13 | import unittest 14 | 15 | import HtmlTestRunner 16 | 17 | 18 | 19 | class TestHtmltestrunner(unittest.TestCase): 20 | 21 | def setUp(self): 22 | pass 23 | 24 | def tearDown(self): 25 | pass 26 | 27 | def test_000_something(self): 28 | pass 29 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py27, py34, py35, py36, flake8 3 | 4 | [testenv:flake8] 5 | basepython=python 6 | deps=flake8 7 | commands=flake8 HtmlTestRunner 8 | 9 | [testenv] 10 | setenv = 11 | PYTHONPATH = {toxinidir}:{toxinidir}/HtmlTestRunner 12 | 13 | commands = python setup.py test 14 | 15 | ; If you want to make tox run the tests with the same versions, create a 16 | ; requirements.txt with the pinned versions and uncomment the following lines: 17 | ; deps = 18 | ; -r{toxinidir}/requirements.txt 19 | -------------------------------------------------------------------------------- /travis_pypi_setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | """Update encrypted deploy password in Travis config file 4 | """ 5 | 6 | 7 | from __future__ import print_function 8 | import base64 9 | import json 10 | import os 11 | from getpass import getpass 12 | import yaml 13 | from cryptography.hazmat.primitives.serialization import load_pem_public_key 14 | from cryptography.hazmat.backends import default_backend 15 | from cryptography.hazmat.primitives.asymmetric.padding import PKCS1v15 16 | 17 | 18 | try: 19 | from urllib import urlopen 20 | except: 21 | from urllib.request import urlopen 22 | 23 | 24 | GITHUB_REPO = 'oldani/HtmlTestRunner' 25 | TRAVIS_CONFIG_FILE = os.path.join( 26 | os.path.dirname(os.path.abspath(__file__)), '.travis.yml') 27 | 28 | 29 | def load_key(pubkey): 30 | """Load public RSA key, with work-around for keys 
using 31 | incorrect header/footer format. 32 | 33 | Read more about RSA encryption with cryptography: 34 | https://cryptography.io/latest/hazmat/primitives/asymmetric/rsa/ 35 | """ 36 | try: 37 | return load_pem_public_key(pubkey.encode(), default_backend()) 38 | except ValueError: 39 | # workaround for https://github.com/travis-ci/travis-api/issues/196 40 | pubkey = pubkey.replace('BEGIN RSA', 'BEGIN').replace('END RSA', 'END') 41 | return load_pem_public_key(pubkey.encode(), default_backend()) 42 | 43 | 44 | def encrypt(pubkey, password): 45 | """Encrypt password using given RSA public key and encode it with base64. 46 | 47 | The encrypted password can only be decrypted by someone with the 48 | private key (in this case, only Travis). 49 | """ 50 | key = load_key(pubkey) 51 | encrypted_password = key.encrypt(password, PKCS1v15()) 52 | return base64.b64encode(encrypted_password) 53 | 54 | 55 | def fetch_public_key(repo): 56 | """Download RSA public key Travis will use for this repo. 57 | 58 | Travis API docs: http://docs.travis-ci.com/api/#repository-keys 59 | """ 60 | keyurl = 'https://api.travis-ci.org/repos/{0}/key'.format(repo) 61 | data = json.loads(urlopen(keyurl).read().decode()) 62 | if 'key' not in data: 63 | errmsg = "Could not find public key for repo: {}.\n".format(repo) 64 | errmsg += "Have you already added your GitHub repo to Travis?" 65 | raise ValueError(errmsg) 66 | return data['key'] 67 | 68 | 69 | def prepend_line(filepath, line): 70 | """Rewrite a file adding a line to its beginning. 
71 | """ 72 | with open(filepath) as f: 73 | lines = f.readlines() 74 | 75 | lines.insert(0, line) 76 | 77 | with open(filepath, 'w') as f: 78 | f.writelines(lines) 79 | 80 | 81 | def load_yaml_config(filepath): 82 | with open(filepath) as f: 83 | return yaml.safe_load(f)  # safe_load avoids constructing arbitrary objects 84 | 85 | 86 | def save_yaml_config(filepath, config): 87 | with open(filepath, 'w') as f: 88 | yaml.dump(config, f, default_flow_style=False) 89 | 90 | 91 | def update_travis_deploy_password(encrypted_password): 92 | """Update the deploy section of the .travis.yml file 93 | to use the given encrypted password. 94 | """ 95 | config = load_yaml_config(TRAVIS_CONFIG_FILE) 96 | 97 | config['deploy']['password'] = dict(secure=encrypted_password) 98 | 99 | save_yaml_config(TRAVIS_CONFIG_FILE, config) 100 | 101 | line = ('# This file was autogenerated and will overwrite' 102 | ' each time you run travis_pypi_setup.py\n') 103 | prepend_line(TRAVIS_CONFIG_FILE, line) 104 | 105 | 106 | def main(args): 107 | public_key = fetch_public_key(args.repo) 108 | password = args.password or getpass('PyPI password: ') 109 | update_travis_deploy_password(encrypt(public_key, password.encode())) 110 | print("Wrote encrypted password to .travis.yml -- you're ready to deploy") 111 | 112 | 113 | if '__main__' == __name__: 114 | import argparse 115 | parser = argparse.ArgumentParser(description=__doc__) 116 | parser.add_argument('--repo', default=GITHUB_REPO, 117 | help='GitHub repo (default: %s)' % GITHUB_REPO) 118 | parser.add_argument('--password', 119 | help='PyPI password (will prompt if not provided)') 120 | 121 | args = parser.parse_args() 122 | main(args) 123 | --------------------------------------------------------------------------------