├── .gitignore
├── LICENSE
├── README.textile
├── setup.py
└── src
    └── test_extensions
        ├── __init__.py
        ├── common.py
        ├── django_common.py
        ├── examples
        │   ├── examples.py
        │   └── twillexamples.py
        ├── management
        │   ├── __init__.py
        │   └── commands
        │       ├── __init__.py
        │       ├── runtester.py
        │       └── test.py
        ├── testrunners
        │   ├── __init__.py
        │   ├── codecoverage.py
        │   ├── figleafcoverage.py
        │   ├── nodatabase.py
        │   ├── xmloutput.py
        │   └── xmlunit
        │       ├── __init__.py
        │       └── unittest.py
        └── twill.py
/.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | *~ 3 | *.pyc 4 | build/* 5 | dist/* 6 | src/*.egg-info/* 7 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2008 Gareth Rushgrove 2 | 3 | Permission is hereby granted, free of charge, to any person 4 | obtaining a copy of this software and associated documentation 5 | files (the "Software"), to deal in the Software without 6 | restriction, including without limitation the rights to use, 7 | copy, modify, merge, publish, distribute, sublicense, and/or sell 8 | copies of the Software, and to permit persons to whom the 9 | Software is furnished to do so, subject to the following 10 | conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 | OTHER DEALINGS IN THE SOFTWARE. 
-------------------------------------------------------------------------------- /README.textile: -------------------------------------------------------------------------------- 1 | PyUnit provides a basic set of assertions which can get you started with unit testing Python, but it's always useful to have more. Django also has a few specific requirements and common patterns when it comes to testing. This set of classes aims to provide a useful starting point for both these situations. 2 | 3 | The application also overrides the default Django test runner, adding a few useful features, described below. 4 | 5 | h2. Installation 6 | 7 | Just add the project to your INSTALLED_APPS. 8 | 9 |
INSTALLED_APPS = (
10 | 	'test_extensions',
11 | )
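For instance, if you also use South, which overrides the test command too, the ordering matters. A sketch of a working setting (the django.contrib entries are just illustrative placeholders):

```python
# Ordering sketch: South also overrides the `test` command, and the last
# app listed wins, so test_extensions must come after south.
# The django.contrib entries are illustrative placeholders.
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'south',
    'test_extensions',
)
```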
12 | 13 | Note that this application steals the test command from django, overriding it with extra toys. If another application in your INSTALLED_APPS does this too then the last one in the list will win. South migrations also overrides the test command, in order to use the django syncdb command for testing, as test_extensions does. As of 0.4 test_extensions works with South, as long as you include it after south in the list of installed apps. 14 | 15 | h2. Assertions 16 | 17 | See the examples directory in src/test_extensions for details of a large number of useful assertions for testing django apps: 18 | 19 | * assert_response_contains 20 | * assert_response_doesnt_contain 21 | * assert_regex_contains 22 | * assert_render_matches 23 | * assert_code 24 | * assert_render 26 | * assert_doesnt_render 27 | * assert_render_contains 28 | * assert_render_doesnt_contain 29 | 30 | h2. Test Runners 31 | 32 | h3. XMLUnit 33 | 34 | Sometimes it's nice to have a file reporting the results of a test run. Some applications such as CruiseControl can use this to display the results in a user interface. 35 | 36 |
python manage.py test --xml
37 | 38 | h3. Code Coverage 39 | 40 | If you want to know what code is being run when you run your test suite then codecoverage is for you. These flags use different third party libraries to calculate coverage statistics. --coverage dumps the results to stdout, --xmlcoverage creates a cobertura-compatible xml output, and --figleaf creates a series of files displaying the results. 41 | 42 |
python manage.py test --coverage
43 |
python manage.py test --xmlcoverage
44 |
python manage.py test --figleaf
45 | 46 | h3. No Database 47 | 48 | Sometimes you don't want the overhead of setting up a database during testing, probably because your application just doesn't use it. 49 | 50 |
python manage.py test --nodb
51 |
python manage.py test --nodb --coverage
52 |
python manage.py test --nodb --xmlcoverage
53 | 54 | *WARNING* Don't use this if you use the ORM in your app. An "outstanding issue":http://github.com/garethr/django-test-extensions/issues#issue/13 means that you can get into trouble. Your tests will still hit the database, but it will be your non-test data. 55 | 56 | h2. Local Continuous Integration Command 57 | 58 | Thanks to Roberto Aguilar (http://github.com/rca) for providing an auto-reloading version of the test runner. Run the runtester command and it should run your test suite whenever you change a file (similar to how runserver reloads the server each time you change something). 59 | 60 | See "this thread":http://groups.google.com/group/django-developers/browse_thread/thread/ac991410674c195d/f7179acc65e686cc 61 | from the Django Developer list for more information and discussion. 62 | 63 |
python manage.py runtester
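The custom assertions listed earlier behave like ordinary unittest assertions. Here is a framework-free sketch of two of them; these are simplified re-implementations for illustration, not the classes shipped in src/test_extensions/common.py:

```python
import re
import unittest


class AssertionSketch(unittest.TestCase):
    # Simplified stand-ins for Common.assert_contains and
    # Common.assert_regex_contains from src/test_extensions/common.py.
    def assert_contains(self, needle, haystack):
        self.assertTrue(needle in haystack,
            "Content should contain `%s' but doesn't:\n%s" % (needle, haystack))

    def assert_regex_contains(self, pattern, string, flags=0):
        self.assertTrue(re.search(pattern, string, flags) is not None,
            "%r should match %r" % (pattern, string))

    def test_fragment_found(self):
        # A markup fragment present in a larger document passes.
        self.assert_contains("<h1>", "<html><h1>Title</h1></html>")

    def test_regex_found(self):
        # A regex that matches anywhere in the string passes.
        self.assert_regex_contains(r"cachebust=[\d.]+",
                                   "/assets/blah.js?cachebust=1.2")
```

Run with `python -m unittest` in the usual way; the real assertions add Django-aware variants (responses, rendered templates) on top of this pattern.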
64 | 65 | h2. Licence 66 | 67 | XMLUnit is included out of convenience. It was written by Marc-Elian Begin and is Copyright (c) Members of the EGEE Collaboration. 2004. http://www.eu-egee.org 68 | 69 | The rest of the code is licensed under an MIT license: 70 | 71 | Copyright (c) 2008 Gareth Rushgrove 72 | 73 | Permission is hereby granted, free of charge, to any person 74 | obtaining a copy of this software and associated documentation 75 | files (the "Software"), to deal in the Software without 76 | restriction, including without limitation the rights to use, 77 | copy, modify, merge, publish, distribute, sublicense, and/or sell 78 | copies of the Software, and to permit persons to whom the 79 | Software is furnished to do so, subject to the following 80 | conditions: 81 | 82 | The above copyright notice and this permission notice shall be 83 | included in all copies or substantial portions of the Software. 84 | 85 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 86 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 87 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 88 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 89 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 90 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 91 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 92 | OTHER DEALINGS IN THE SOFTWARE. 
93 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, find_packages 2 | 3 | setup( 4 | name = "django-test-extensions", 5 | version = "0.14", 6 | author = "Gareth Rushgrove", 7 | author_email = "gareth@morethanseven.net", 8 | url = "http://github.com/garethr/django-test-extensions/", 9 | 10 | packages = find_packages('src'), 11 | package_dir = {'':'src'}, 12 | license = "MIT License", 13 | keywords = "django testing", 14 | description = "A few classes to make testing django applications easier", 15 | install_requires=[ 16 | 'setuptools', 17 | 'BeautifulSoup', 18 | 'coverage', 19 | ], 20 | classifiers = [ 21 | "Intended Audience :: Developers", 22 | "License :: OSI Approved :: MIT License", 23 | 'Operating System :: OS Independent', 24 | 'Programming Language :: Python', 25 | 'Topic :: Software Development :: Testing', 26 | ] 27 | ) 28 | -------------------------------------------------------------------------------- /src/test_extensions/__init__.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | import traceback 4 | 5 | from django.utils import autoreload 6 | 7 | _mtimes = autoreload._mtimes 8 | _win = autoreload._win 9 | _code_changed = autoreload.code_changed 10 | _error_files = [] 11 | def my_code_changed(): 12 | global _mtimes, _win 13 | for filename in filter(lambda v: v, map(lambda m: getattr(m, "__file__", None), sys.modules.values())) + _error_files: 14 | if filename.endswith(".pyc") or filename.endswith(".pyo"): 15 | filename = filename[:-1] 16 | if not os.path.exists(filename): 17 | continue # File might be in an egg, so it can't be reloaded. 
18 | stat = os.stat(filename) 19 | mtime = stat.st_mtime 20 | if _win: 21 | mtime -= stat.st_ctime 22 | if filename not in _mtimes: 23 | _mtimes[filename] = mtime 24 | continue 25 | if mtime != _mtimes[filename]: 26 | _mtimes = {} 27 | try: 28 | del _error_files[_error_files.index(filename)] 29 | except ValueError: pass 30 | return True 31 | return False 32 | 33 | def check_errors(fn): 34 | def wrapper(*args, **kwargs): 35 | try: 36 | fn(*args, **kwargs) 37 | except (ImportError, IndentationError, 38 | NameError, SyntaxError, TypeError): 39 | et, ev, tb = sys.exc_info() 40 | 41 | if getattr(ev, 'filename', None) is None: 42 | # get the filename from the last item in the stack 43 | filename = traceback.extract_tb(tb)[-1][0] 44 | else: 45 | filename = ev.filename 46 | 47 | if filename not in _error_files: 48 | _error_files.append(filename) 49 | 50 | raise 51 | 52 | return wrapper 53 | 54 | _main = autoreload.main 55 | def my_main(main_func, args=None, kwargs=None): 56 | wrapped_main_func = check_errors(main_func) 57 | _main(wrapped_main_func, args=args, kwargs=kwargs) 58 | 59 | autoreload.code_changed = my_code_changed 60 | autoreload.main = my_main 61 | -------------------------------------------------------------------------------- /src/test_extensions/common.py: -------------------------------------------------------------------------------- 1 | import os 2 | import re 3 | 4 | # Test classes inherit from the Django TestCase 5 | from django.test import TestCase 6 | 7 | # If you're wanting to do direct database queries you'll need this 8 | from django.db import connection 9 | 10 | # The BeautifulSoup HTML parser is useful for testing markup fragments 11 | from BeautifulSoup import BeautifulSoup as Soup 12 | 13 | # needed to login to the admin 14 | from django.contrib.auth.models import User 15 | from django.utils.encoding import smart_str 16 | 17 | 18 | class Common(TestCase): 19 | """ 20 | This class contains a number of custom assertions which 21 | extend the 
default Django assertions. Use this as the super 22 | class for your tests rather than django.test.TestCase 23 | """ 24 | 25 | # a list of fixtures for loading data before each test 26 | fixtures = [] 27 | 28 | def setUp(self): 29 | """ 30 | setUp is run before each test in the class. Use it for 31 | initialisation and creating mock objects to test 32 | """ 33 | pass 34 | 35 | def tearDown(self): 36 | """ 37 | tearDown is run after each test in the class. Use it for 38 | cleaning up data created during each test 39 | """ 40 | pass 41 | 42 | # A few useful helper methods 43 | 44 | def execute_sql(self, *sql): 45 | "execute a SQL query and return the cursor" 46 | cursor = connection.cursor() 47 | cursor.execute(*sql) 48 | return cursor 49 | 50 | # Custom assertions 51 | 52 | def assert_equal(self, *args, **kwargs): 53 | 'Assert that two values are equal' 54 | 55 | return self.assertEqual(*args, **kwargs) 56 | 57 | def assert_not_equal(self, *args, **kwargs): 58 | "Assert that two values are not equal" 59 | return self.assertNotEqual(*args, **kwargs) 60 | 61 | def assert_contains(self, needle, haystack, diagnostic=''): 62 | 'Assert that one value (the haystack) contains another value (the needle)' 63 | diagnostic = diagnostic + "\nContent should contain `%s' but doesn't:\n%s" % (needle, haystack) 64 | diagnostic = diagnostic.strip() 65 | return self.assert_(needle in haystack, diagnostic) 66 | 67 | def assert_doesnt_contain(self, needle, haystack): # CONSIDER deprecate me for deny_contains 68 | "Assert that one value (the haystack) does not contain another value (the needle)" 69 | return self.assert_(needle not in haystack, "Content should not contain `%s' but does:\n%s" % (needle, haystack)) 70 | 71 | def deny_contains(self, needle, haystack): 72 | "Assert that one value (the haystack) does not contain another value (the needle)" 73 | return self.assert_(needle not in haystack, "Content should not contain `%s' but does:\n%s" % (needle, haystack)) 74 | 75 | def 
assert_regex_contains(self, pattern, string, flags=None): 76 | 'Assert that the given regular expression matches the string' 77 | flags = flags or 0 78 | disposition = re.search(pattern, string, flags) 79 | self.assertTrue(disposition != None, repr(smart_str(pattern)) + ' should match ' + repr(smart_str(string))) 80 | 81 | def deny_regex_contains(self, pattern, slug): 82 | 'Deny that the given regular expression pattern matches a string' 83 | 84 | r = re.compile(pattern) 85 | 86 | self.assertEqual( None, 87 | r.search(smart_str(slug)), 88 | pattern + ' should not match ' + smart_str(slug) ) 89 | 90 | def assert_count(self, expected, model): 91 | "Assert that there are the expected number of instances of a given model" 92 | actual = model.objects.count() 93 | self.assert_equal(expected, actual, "%s should have %d objects, had %d" % (model.__name__, expected, actual)) 94 | 95 | def assert_counts(self, expected_counts, models): 96 | "Assert that a list of numbers is equal to the number of instances of a list of models" 97 | if len(expected_counts) != len(models): 98 | raise Exception("Number of counts and number of models should be equal") 99 | actual_counts = [model.objects.count() for model in models] 100 | self.assert_equal(expected_counts, actual_counts, "%s should have counts %s but had %s" % ([m.__name__ for m in models], expected_counts, actual_counts)) 101 | 102 | def assert_is_instance(self, model, obj): 103 | "Assert that a given object is an instance of a model" 104 | self.assert_(isinstance(obj, model), "%s should be instance of %s" % (obj, model)) 105 | 106 | def assert_raises(self, *args, **kwargs): 107 | "Assert that a given function and arguments raises a given exception" 108 | return self.assertRaises(*args, **kwargs) 109 | 110 | def assert_attrs(self, obj, **kwargs): 111 | "Assert a given object has a given set of attribute values" 112 | for key in sorted(kwargs.keys()): 113 | expected = kwargs[key] 114 | actual = getattr(obj, key) 115 | 
self.assert_equal(expected, actual, u"Object's %s expected to be `%s', is `%s' instead" % (key, expected, actual)) 116 | 117 | def assert_key_exists(self, key, item): 118 | "Assert that a given key exists in a given item" 119 | try: 120 | self.assertTrue(key in item) 121 | except AssertionError: 122 | print 'no %s in %s' % (key, item) 123 | raise AssertionError 124 | 125 | def assert_file_exists(self, file_path): 126 | "Assert a given file exists" 127 | self.assertTrue(os.path.exists(file_path), "%s does not exist!" % file_path) 128 | 129 | def assert_has_attr(self, obj, attr): 130 | "Assert a given object has a given attribute, without checking the values" 131 | try: 132 | getattr(obj, attr) 133 | assert(True) 134 | except AttributeError: 135 | assert(False) 136 | 137 | def _xml_to_tree(self, xml, forgiving=False): 138 | from lxml import etree 139 | self._xml = xml 140 | 141 | if not isinstance(xml, basestring): 142 | self._xml = str(xml) # TODO tostring 143 | return xml 144 | 145 | if '<html' in self._xml[:200].lower(): 146 | parser = etree.HTMLParser(recover=forgiving) 147 | return etree.HTML(self._xml, parser) 148 | else: 149 | parser = etree.XMLParser(recover=forgiving) 150 | return etree.XML(self._xml, parser) 151 | 152 | def assert_xml(self, xml, xpath, **kw): 153 | 'Check that a given extent of XML or HTML contains a given XPath, and return its first node' 154 | tree = self._xml_to_tree(xml, forgiving=kw.get('forgiving', False)) 155 | nodes = tree.xpath(xpath) 156 | self.assertTrue(len(nodes) > 0, xpath + ' should match ' + self._xml) 157 | node = nodes[0] 158 | if kw.get('verbose', False): 159 | self.reveal_xml(node) 160 | return node 161 | 162 | def reveal_xml(self, node): 163 | 'Spews an XML node as source, for diagnosis' 164 | from lxml import etree 165 | print etree.tostring(node, pretty_print=True) 166 | 167 | def deny_xml(self, xml, xpath): 168 | 'Check that a given extent of XML or HTML does not contain a given XPath' 169 | tree = self._xml_to_tree(xml) 170 | nodes = tree.xpath(xpath) 171 | self.assertEqual(0, len(nodes), xpath + ' should not appear in ' + self._xml) -------------------------------------------------------------------------------- /src/test_extensions/django_common.py: -------------------------------------------------------------------------------- 1 | # Test classes inherit from the Django TestCase 2 | from common import Common 3 | import re 4 | 5 | # needed to login to the admin 6 | from django.contrib.auth.models import User 7 | from 
django.utils.encoding import smart_str 8 | 9 | from django.template import Template, Context 10 | 11 | class DjangoCommon(Common): 12 | """ 13 | This class contains a number of custom assertions which 14 | extend the default Django assertions. Use this as the super 15 | class for your tests rather than django.test.TestCase 16 | """ 17 | 18 | # a list of fixtures for loading data before each test 19 | fixtures = [] 20 | 21 | def setUp(self): 22 | """ 23 | setUp is run before each test in the class. Use it for 24 | initialisation and creating mock objects to test 25 | """ 26 | pass 27 | 28 | def tearDown(self): 29 | """ 30 | tearDown is run after each test in the class. Use it for 31 | cleaning up data created during each test 32 | """ 33 | pass 34 | 35 | # A few useful helper methods 36 | 37 | def login_as_admin(self): 38 | "Create, then login as, an admin user" 39 | # Only create the user if they don't exist already ;) 40 | try: 41 | User.objects.get(username="admin") 42 | except User.DoesNotExist: 43 | user = User.objects.create_user('admin', 'admin@example.com', 'password') 44 | user.is_staff = True 45 | user.is_superuser = True 46 | user.save() 47 | 48 | if not self.client.login(username='admin', password='password'): 49 | raise Exception("Login failed") 50 | 51 | # Some assertions need to know which template tag libraries to load 52 | # so we provide a list of templatetag libraries 53 | template_tag_libraries = [] 54 | 55 | def render(self, template, **kwargs): 56 | "Return the rendering of a given template including loading of template tags" 57 | template = "".join(["{%% load %s %%}" % lib for lib in self.template_tag_libraries]) + template 58 | return Template(template).render(Context(kwargs)).strip() 59 | 60 | # Custom assertions 61 | 62 | def assert_response_contains(self, fragment, response): 63 | "Assert that a response object contains a given string" 64 | self.assert_(fragment in response.content, "Response should contain `%s' but doesn't:\n%s" % 
(fragment, response.content)) 65 | 66 | def assert_response_doesnt_contain(self, fragment, response): 67 | "Assert that a response object does not contain a given string" 68 | self.assert_(fragment not in response.content, "Response should not contain `%s' but does:\n%s" % (fragment, response.content)) 69 | 70 | def assert_render_matches(self, template, match_regexp, vars={}): 71 | "Assert that the output from rendering a given template with a given context matches a given regex" 72 | r = re.compile(match_regexp) 73 | actual = Template(template).render(Context(vars)) 74 | self.assert_(r.match(actual), "Expected: %s\nGot: %s" % ( 75 | match_regexp, actual 76 | )) 77 | 78 | def assert_code(self, response, code): 79 | "Assert that a given response returns a given HTTP status code" 80 | self.assertEqual(code, response.status_code, "HTTP Response status code should be %d, and is %d" % (code, response.status_code)) 81 | 82 | def assertNotContains(self, response, text, status_code=200): # overrides Django's assertion, because all diagnostics should be stated positively!!! 83 | """ 84 | Asserts that a response indicates that a page was retrieved 85 | successfully (i.e., the HTTP status code was as expected), and that 86 | ``text`` doesn't occur in the content of the response. 
87 | """ 88 | self.assertEqual(response.status_code, status_code, 89 | "Retrieving page: Response code was %d (expected %d)" % 90 | (response.status_code, status_code)) 91 | text = smart_str(text, response._charset) 92 | self.assertEqual(response.content.count(text), 93 | 0, "Response should not contain '%s'" % text) 94 | 95 | def assert_render(self, expected, template, **kwargs): 96 | "Asserts that a given template and context render a given fragment" 97 | self.assert_equal(expected, self.render(template, **kwargs)) 98 | 99 | def assert_render_matches(self, match_regexp, template, vars={}): 100 | r = re.compile(match_regexp) 101 | actual = Template(template).render(Context(vars)) 102 | self.assert_(r.match(actual), "Expected: %s\nGot: %s" % ( 103 | match_regexp, actual 104 | )) 105 | 106 | def assert_doesnt_render(self, expected, template, **kwargs): 107 | "Asserts that a given template and context don't render a given fragment" 108 | self.assert_not_equal(expected, self.render(template, **kwargs)) 109 | 110 | def assert_render_contains(self, expected, template, **kwargs): 111 | "Asserts that a given template and context rendering contains a given fragment" 112 | self.assert_contains(expected, self.render(template, **kwargs)) 113 | 114 | def assert_render_doesnt_contain(self, expected, template, **kwargs): 115 | "Asserts that a given template and context rendering does not contain a given fragment" 116 | self.assert_doesnt_contain(expected, self.render(template, **kwargs)) 117 | 118 | def assert_mail(self, funk): 119 | ''' 120 | checks that the called block shouts out to the world 121 | 122 | returns either a single mail object or a list of more than one 123 | ''' 124 | 125 | from django.core import mail 126 | previous_mails = len(mail.outbox) 127 | funk() 128 | mails = mail.outbox[ previous_mails : ] 129 | assert [] != mails, 'the called block produced no mails' 130 | if len(mails) == 1: return mails[0] 131 | return mails 132 | 133 | def assert_latest(self, 
query_set, lamb): 134 | pks = list(query_set.values_list('pk', flat=True).order_by('-pk')) 135 | high_water_mark = (pks+[0])[0] 136 | lamb() 137 | 138 | # NOTE we ass-ume the database generates primary keys in monotonic order. 139 | # Don't use these techniques in production, 140 | # or in the presence of a pro DBA 141 | 142 | nu_records = list(query_set.filter(pk__gt=high_water_mark).order_by('pk')) 143 | if len(nu_records) == 1: return nu_records[0] 144 | if nu_records: return nu_records # treating the returned value as a scalar or list 145 | # implicitly asserts it is a scalar or list 146 | source = open(lamb.func_code.co_filename, 'r').readlines()[lamb.func_code.co_firstlineno - 1] 147 | source = source.replace('lambda:', '').strip() 148 | model_name = str(query_set.model) 149 | 150 | self.assertFalse(True, 'The called block, `' + source + 151 | '` should produce new ' + model_name + ' records') 152 | 153 | def deny_mail(self, funk): 154 | '''checks that the called block keeps its opinions to itself''' 155 | 156 | from django.core import mail 157 | previous_mails = len(mail.outbox) 158 | funk() 159 | mails = mail.outbox[ previous_mails : ] 160 | assert [] == mails, 'the called block should produce no mails' 161 | 162 | def assert_model_changes(self, mod, item, frum, too, lamb): 163 | source = open(lamb.func_code.co_filename, 'r').readlines()[lamb.func_code.co_firstlineno - 1] 164 | source = source.replace('lambda:', '').strip() 165 | model = str(mod.__class__).replace("'>", '').split('.')[-1] 166 | 167 | should = '%s.%s should equal `%s` before your activation line, `%s`' % \ 168 | (model, item, frum, source) 169 | 170 | self.assertEqual(frum, mod.__dict__[item], should) 171 | lamb() 172 | mod = mod.__class__.objects.get(pk=mod.pk) 173 | 174 | should = '%s.%s should equal `%s` after your activation line, `%s`' % \ 175 | (model, item, too, source) 176 | 177 | self.assertEqual(too, mod.__dict__[item], should) 178 | return mod 179 | 
-------------------------------------------------------------------------------- /src/test_extensions/examples/examples.py: -------------------------------------------------------------------------------- 1 | from test_extensions.common import Common 2 | 3 | class Examples(Common): 4 | """ 5 | This class contains a number of example tests using the common custom assertions. 6 | Note that these tests won't run as they refer to code that does not exist. Note also 7 | that all tests begin with test_. This ensures the test runner picks them up. 8 | """ 9 | 10 | def test_always_pass(self): 11 | "Demonstration of how to pass a test based on logic" 12 | self.assertTrue(True) 13 | 14 | def test_always_fail(self): 15 | "Demonstration of how to fail a test based on logic" 16 | self.assertFalse(True, "Something went wrong") 17 | 18 | def test_presence_of_management_command(self): 19 | "Check to see whether a given management command is available" 20 | try: 21 | call_command('management_command_name') 22 | except Exception: 23 | self.fail("management_command_name management command missing") 24 | 25 | def test_object_type(self): 26 | "Simple test to check a given object type" 27 | example = Object() 28 | self.assert_is_instance(Object, example) 29 | 30 | def test_assert_raises(self): 31 | "Test whether a given function raises a given exception" 32 | path = os.path.join('/example_directory', 'invalid_file_name') 33 | self.assert_raises(ExampleException, example_function, path) 34 | 35 | def test_assert_attrs(self): 36 | "Demonstration of assert_attrs, checks that a given object has a given set of attributes" 37 | event = Event(title="Title", description="Description") 38 | self.assert_attrs(event, 39 | title = 'Title', 40 | description = 'Description' 41 | ) 42 | 43 | def test_assert_counts(self): 44 | "Demonstration of assert_counts using a set of fictional objects" 45 | self.assert_counts([1, 1, 1, 1], [Object1, Object2, Object3, Object4]) 46 | 47 | def 
test_assert_code(self): 48 | "Use the HTTP client to make a request and then check the HTTP status code in the response" 49 | response = self.client.get('/') 50 | self.assert_code(response, 200) 51 | 52 | def test_creation_of_objects_in_admin(self): 53 | "Demonstration of an admin test to check successful object creation" 54 | form = { 55 | 'title': 'title', 56 | 'description': 'description', 57 | } 58 | self.login_as_admin() 59 | self.assert_counts([0], [Object]) 60 | response = self.client.post('/admin/objects/object/add/', form) 61 | self.assert_code(response, 302) 62 | self.assert_counts([1], [Object]) 63 | 64 | def test_you_can_delete_objects_you_created(self): 65 | "Test object deletion via the admin" 66 | self.login_as_admin() 67 | form = { 68 | 'title': 'title', 69 | 'description': 'description', 70 | } 71 | self.assert_counts([0], [Object]) 72 | self.client.post('/admin/objects/object/add/', form) 73 | self.assert_counts([1], [Object]) 74 | response = self.client.post('/admin/objects/object/%d/delete/' % Object.objects.get().id, {'post':'yes'}) 75 | self.assert_counts([0], [Object]) 76 | 77 | def test_assert_render(self): 78 | "Example template tag test to check correct rendering" 79 | expected = 'output of template tag' 80 | self.assert_render(expected, '{% load templatetag %}{% templatetag %}') 81 | 82 | def test_assert_render_matches(self): 83 | "Example of testing template tags using regex match" 84 | self.assert_render_matches( 85 | r'^/assets/a/common/blah.js\?cachebust=[\d\.]+$', 86 | '{% load static %}{% static "a/common/blah.js" %}', 87 | ) 88 | 89 | def test_simple_addition(self): 90 | "Pattern for testing input/output of a function" 91 | for (augend, addend, result, msg) in [ 92 | (1, 2, 3, "1+2=3"), 93 | (1, 3, 4, "1+3=4"), 94 | ]: 95 | self.assert_equal(result, augend.plus(addend)) 96 | 97 | def test_response_contains(self): 98 | "Example of a test for content on a given view" 99 | response = self.client.get('/example/') 100 | 
self.assert_response_contains('<h1>', response) 101 | self.assert_response_doesnt_contain("Not on page", response) 102 | 103 | def test_using_beautiful_soup(self): 104 | "Example test for content on a given view, this time using the BeautifulSoup parser" 105 | response = self.client.get('/example/') 106 | soup = BeautifulSoup(response.content) 107 | self.assert_equal("Page Title", soup.find("title").string.strip()) 108 | 109 | def test_get_tables_that_should_exist(self): 110 | "Useful pattern for checking for the existence of all required tables" 111 | tables_that_exist = [row[0] for row in _execute("SHOW TABLES").fetchall()] 112 | self.assert_equal(True, 'objects_object' in tables_that_exist) -------------------------------------------------------------------------------- /src/test_extensions/examples/twillexamples.py: -------------------------------------------------------------------------------- 1 | from test_extensions.twill import TwillCommon 2 | 3 | class TwillExample(TwillCommon): 4 | """ 5 | Example Twill tests using the TwillCommon class. 6 | """ 7 | 8 | # set a global url. Note you might do this from a settings file. 
9 | url = "http://www.example.com/" 10 | 11 | def test_for_200_status_code(self): 12 | "Does the provided url return an HTTP status code of 200" 13 | self.code("200") 14 | 15 | def test_for_h1(self): 16 | "Does the url return content which matches the given regex" 17 | self.find("<h1>") -------------------------------------------------------------------------------- /src/test_extensions/management/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/garethr/django-test-extensions/e77d2cb16820fd4d5030ba52bbfe148a96ae8066/src/test_extensions/management/__init__.py -------------------------------------------------------------------------------- /src/test_extensions/management/commands/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/garethr/django-test-extensions/e77d2cb16820fd4d5030ba52bbfe148a96ae8066/src/test_extensions/management/commands/__init__.py -------------------------------------------------------------------------------- /src/test_extensions/management/commands/runtester.py: -------------------------------------------------------------------------------- 1 | from django.core.management.base import BaseCommand 2 | from django.utils import autoreload 3 | import os 4 | import sys 5 | import time 6 | 7 | INPROGRESS_FILE = 'testing.inprogress' 8 | 9 | def get_test_command(): 10 | """ 11 | Return an instance of the Command class to use. 12 | 13 | This method can be patched in to run a test command other than the one in 14 | core Django. 
For example, to make a runtester for South: 15 | 16 | from django.core.management.commands import runtester 17 | from django.core.management.commands.runtester import Command 18 | 19 | def get_test_command(): 20 | from south.management.commands.test import Command as TestCommand 21 | return TestCommand() 22 | 23 | runtester.get_test_command = get_test_command 24 | """ 25 | 26 | from test_extensions.management.commands.test import Command as TestCommand 27 | return TestCommand() 28 | 29 | def my_reloader_thread(): 30 | """ 31 | Wait for a test run to complete before exiting. 32 | """ 33 | 34 | # If a file is saved while tests are being run, the base reloader just 35 | # kills the process. This is bad because it wedges the database and then 36 | # the user is prompted to delete the database. Instead, wait for 37 | # INPROGRESS_FILE to disappear, then exit. Exiting the thread will then 38 | # rerun the suite. 39 | while autoreload.RUN_RELOADER: 40 | if autoreload.code_changed(): 41 | while os.path.exists(INPROGRESS_FILE): 42 | time.sleep(1) 43 | sys.exit(3) # force reload 44 | time.sleep(1) 45 | 46 | # monkeypatch the reloader_thread function with the one above 47 | autoreload.reloader_thread = my_reloader_thread 48 | 49 | class Command(BaseCommand): 50 | option_list = BaseCommand.option_list 51 | help = "Starts a command that tests upon saving files." 
52 | args = '[optional apps to test]' 53 | 54 | # Validation is called explicitly each time the suite is run 55 | requires_model_validation = False 56 | 57 | def handle(self, *args, **options): 58 | if os.path.exists(INPROGRESS_FILE): 59 | os.remove(INPROGRESS_FILE) 60 | 61 | def inner_run(): 62 | try: 63 | open(INPROGRESS_FILE, 'wb').close() 64 | 65 | test_command = get_test_command() 66 | test_command.handle(*args, **options) 67 | finally: 68 | if os.path.exists(INPROGRESS_FILE): 69 | os.remove(INPROGRESS_FILE) 70 | 71 | autoreload.main(inner_run) 72 | -------------------------------------------------------------------------------- /src/test_extensions/management/commands/test.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from optparse import make_option 3 | 4 | from django.core import management 5 | from django.conf import settings 6 | from django.db.models import get_app, get_apps 7 | from django.core.management.base import BaseCommand 8 | 9 | # Django versions prior to 1.2 don't include the DjangoTestSuiteRunner class; 10 | # Django versions since 1.2 include multi-database support, which doesn't play 11 | # nicely with the database setup in the XML test runner. 
12 | try: 13 | from django.test.simple import DjangoTestSuiteRunner 14 | xml_runner = 'test_extensions.testrunners.xmloutput.XMLTestSuiteRunner' 15 | except ImportError: # We are in a version prior to 1.2 16 | xml_runner = 'test_extensions.testrunners.xmloutput.run_tests' 17 | 18 | skippers = [] 19 | 20 | class Command(BaseCommand): 21 | option_list = BaseCommand.option_list 22 | 23 | if '--verbosity' not in [opt.get_opt_string() for opt in BaseCommand.option_list]: 24 | option_list += \ 25 | make_option('--verbosity', action='store', dest='verbosity', 26 | default='0', 27 | type='choice', choices=['0', '1', '2'], 28 | help='Verbosity level; 0=minimal, 1=normal, 2=all'), 29 | 30 | option_list += ( 31 | make_option('--noinput', action='store_false', dest='interactive', 32 | default=True, 33 | help='Tells Django to NOT prompt the user for input of any kind.'), 34 | make_option('--callgraph', action='store_true', dest='callgraph', 35 | default=False, 36 | help='Generate execution call graph (slow!)'), 37 | make_option('--coverage', action='store_true', dest='coverage', 38 | default=False, 39 | help='Show coverage details'), 40 | make_option('--coverage_html_only', action='store_true', dest='coverage_html_only', 41 | default=False, 42 | help='Supress stdout output if using HTML output. 
Else, is ignored'), 43 | make_option('--xmlcoverage', action='store_true', dest='xmlcoverage', 44 | default=False, 45 | help='Show coverage details and write them into a xml file'), 46 | make_option('--figleaf', action='store_true', dest='figleaf', 47 | default=False, 48 | help='Produce figleaf coverage report'), 49 | make_option('--xml', action='store_true', dest='xml', default=False, 50 | help='Produce JUnit-type xml output'), 51 | make_option('--nodb', action='store_true', dest='nodb', default=False, 52 | help='No database required for these tests'), 53 | make_option('--failfast', action='store_true', dest='failfast', 54 | default=False, 55 | help='Tells Django to stop running the test suite after first failed test.'), 56 | 57 | ) 58 | help = """Custom test command which allows for 59 | specifying different test runners.""" 60 | args = '[appname ...]' 61 | 62 | requires_model_validation = True 63 | 64 | def handle(self, *test_labels, **options): 65 | 66 | verbosity = int(options.get('verbosity', 1)) 67 | interactive = options.get('interactive', True) 68 | callgraph = options.get('callgraph', False) 69 | failfast = options.get("failfast", False) 70 | coverage_html_only = options.get("coverage_html_only", False) 71 | 72 | # it's quite possible someone, lets say South, might have stolen 73 | # the syncdb command from django. For testing purposes we should 74 | # probably put it back. Migrations don't really make sense 75 | # for tests. Actually the South test runner does this too. 
76 | management.get_commands() 77 | management._commands['syncdb'] = 'django.core' 78 | 79 | if options.get('nodb'): 80 | if options.get('xmlcoverage'): 81 | test_runner_name = 'test_extensions.testrunners.nodatabase.run_tests_with_xmlcoverage' 82 | elif options.get('coverage'): 83 | test_runner_name = 'test_extensions.testrunners.nodatabase.run_tests_with_coverage' 84 | else: 85 | test_runner_name = 'test_extensions.testrunners.nodatabase.run_tests' 86 | elif options.get('xmlcoverage'): 87 | test_runner_name = 'test_extensions.testrunners.codecoverage.run_tests_xml' 88 | elif options.get ('coverage'): 89 | test_runner_name = 'test_extensions.testrunners.codecoverage.run_tests' 90 | elif options.get('figleaf'): 91 | test_runner_name = 'test_extensions.testrunners.figleafcoverage.run_tests' 92 | elif options.get('xml'): 93 | test_runner_name = xml_runner 94 | else: 95 | test_runner_name = settings.TEST_RUNNER 96 | 97 | test_path = test_runner_name.split('.') 98 | # Allow for Python 2.5 relative paths 99 | if len(test_path) > 1: 100 | test_module_name = '.'.join(test_path[:-1]) 101 | else: 102 | test_module_name = '.' 
103 | test_module = __import__(test_module_name, {}, {}, test_path[-1]) 104 | test_runner = getattr(test_module, test_path[-1]) 105 | 106 | if hasattr(settings, 'SKIP_TESTS'): 107 | if not test_labels: 108 | test_labels = list() 109 | for app in get_apps(): 110 | test_labels.append(app.__name__.split('.')[-2]) 111 | for app in settings.SKIP_TESTS: 112 | try: 113 | test_labels = list(test_labels) 114 | test_labels.remove(app) 115 | except ValueError: 116 | pass 117 | 118 | test_options = dict(verbosity=verbosity, 119 | interactive=interactive) 120 | 121 | if options.get('coverage'): 122 | test_options["callgraph"] = callgraph 123 | test_options["html_only"] = coverage_html_only 124 | 125 | try: 126 | failures = test_runner(test_labels, **test_options) 127 | except TypeError: #Django 1.2 128 | test_options["failfast"] = failfast 129 | failures = test_runner(**test_options).run_tests(test_labels) 130 | 131 | if failures: 132 | sys.exit(failures) 133 | -------------------------------------------------------------------------------- /src/test_extensions/testrunners/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/garethr/django-test-extensions/e77d2cb16820fd4d5030ba52bbfe148a96ae8066/src/test_extensions/testrunners/__init__.py -------------------------------------------------------------------------------- /src/test_extensions/testrunners/codecoverage.py: -------------------------------------------------------------------------------- 1 | import coverage 2 | import os, sys 3 | from inspect import getmembers, ismodule 4 | 5 | from django.conf import settings 6 | from django.test.simple import run_tests as django_test_runner 7 | from django.db.models import get_app, get_apps 8 | from django.utils.functional import curry 9 | 10 | from nodatabase import run_tests as nodatabase_run_tests 11 | 12 | def is_wanted_module(mod): 13 | included = getattr(settings, "COVERAGE_INCLUDE_MODULES", []) 14 | 
excluded = getattr(settings, "COVERAGE_EXCLUDE_MODULES", []) 15 | 16 | marked_to_include = None 17 | 18 | for exclude in excluded: 19 | if exclude.endswith("*"): 20 | if mod.__name__.startswith(exclude[:-1]): 21 | marked_to_include = False 22 | elif mod.__name__ == exclude: 23 | marked_to_include = False 24 | 25 | for include in included: 26 | if include.endswith("*"): 27 | if mod.__name__.startswith(include[:-1]): 28 | marked_to_include = True 29 | elif mod.__name__ == include: 30 | marked_to_include = True 31 | 32 | # marked_to_include is None when no user-defined rule matched 33 | if marked_to_include is None: 34 | if included and excluded: 35 | # User defined exactly what they want, so exclude the rest 36 | marked_to_include = False 37 | elif excluded: 38 | # User only defined what they do not want, so include the rest 39 | marked_to_include = True 40 | elif included: 41 | # User defined only what they want, so exclude the rest 42 | marked_to_include = False 43 | else: 44 | # User said nothing, so include everything 45 | marked_to_include = True 46 | 47 | return marked_to_include 48 | 49 | 50 | def get_coverage_modules(app_module): 51 | """ 52 | Returns a list of modules to report coverage info for, given an 53 | application module. 54 | """ 55 | app_path = app_module.__name__.split('.')[:-1] 56 | coverage_module = __import__('.'.join(app_path), {}, {}, app_path[-1]) 57 | 58 | return [coverage_module] + [attr for name, attr in 59 | getmembers(coverage_module) if ismodule(attr) and name != 'tests'] 60 | 61 | def get_all_coverage_modules(app_module): 62 | """ 63 | Returns all possible modules to report coverage on, even if they 64 | aren't loaded. 65 | """ 66 | # We start off with the imported models.py, so we need to import 67 | # the parent app package to find the path.
68 | app_path = app_module.__name__.split('.')[:-1] 69 | app_package = __import__('.'.join(app_path), {}, {}, app_path[-1]) 70 | app_dirpath = app_package.__path__[-1] 71 | 72 | mod_list = [] 73 | for root, dirs, files in os.walk(app_dirpath): 74 | root_path = app_path + root[len(app_dirpath):].split(os.path.sep)[1:] 75 | excludes = getattr(settings, 'EXCLUDE_FROM_COVERAGE', []) 76 | if app_path[0] not in excludes: 77 | for file in files: 78 | if file.lower().endswith('.py'): 79 | mod_name = file[:-3].lower() 80 | try: 81 | mod = __import__('.'.join(root_path + [mod_name]), 82 | {}, {}, mod_name) 83 | except ImportError: 84 | pass 85 | else: 86 | mod_list.append(mod) 87 | 88 | return mod_list 89 | 90 | def run_tests(test_labels, verbosity=1, interactive=True, 91 | extra_tests=[], nodatabase=False, xml_out=False, callgraph=False, html_only=False): 92 | """ 93 | Test runner which displays a code coverage report at the end of the 94 | run. 95 | """ 96 | cov = coverage.coverage() 97 | cov.erase() 98 | cov.use_cache(0) 99 | 100 | test_labels = test_labels or getattr(settings, "TEST_APPS", None) 101 | cover_branch = getattr(settings, "COVERAGE_BRANCH_COVERAGE", False) 102 | cov = coverage.coverage(branch=cover_branch, cover_pylib=False) 103 | cov.use_cache(0) 104 | 105 | coverage_modules = [] 106 | if test_labels: 107 | for label in test_labels: 108 | # Don't report coverage if you're only running a single 109 | # test case. 110 | if '.' 
not in label: 111 | app = get_app(label) 112 | coverage_modules.extend(get_all_coverage_modules(app)) 113 | else: 114 | for app in get_apps(): 115 | coverage_modules.extend(get_all_coverage_modules(app)) 116 | 117 | morfs = filter(is_wanted_module, coverage_modules) 118 | 119 | if callgraph: 120 | try: 121 | import pycallgraph 122 | #_include = [i.__name__ for i in coverage_modules] 123 | _included = getattr(settings, "COVERAGE_INCLUDE_MODULES", []) 124 | _excluded = getattr(settings, "COVERAGE_EXCLUDE_MODULES", []) 125 | 126 | # normalise the glob patterns to a trailing-'*' form 127 | _included = [i.strip('*')+'*' for i in _included] 128 | _excluded = [i.strip('*')+'*' for i in _excluded] 129 | 130 | _filter_func = pycallgraph.GlobbingFilter( 131 | include=_included or ['*'], 132 | #include=['lotericas.*'], 133 | #exclude=[], 134 | #max_depth=options.max_depth, 135 | ) 136 | 137 | pycallgraph_enabled = True 138 | except ImportError: 139 | pycallgraph_enabled = False 140 | else: 141 | pycallgraph_enabled = False 142 | 143 | cov.start() 144 | 145 | if pycallgraph_enabled: 146 | pycallgraph.start_trace(filter_func=_filter_func) 147 | 148 | if nodatabase: 149 | results = nodatabase_run_tests(test_labels, verbosity, interactive, 150 | extra_tests) 151 | else: 152 | results = django_test_runner(test_labels, verbosity, interactive, 153 | extra_tests) 154 | 155 | if callgraph and pycallgraph_enabled: 156 | pycallgraph.stop_trace() 157 | 158 | cov.stop() 159 | 160 | if getattr(settings, "COVERAGE_HTML_REPORT", False) or \ 161 | os.environ.get("COVERAGE_HTML_REPORT"): 162 | output_dir = getattr(settings, "COVERAGE_HTML_DIRECTORY", "covhtml") 163 | report_method = curry(cov.html_report, directory=output_dir) 164 | if callgraph and pycallgraph_enabled: 165 | callgraph_path = output_dir + '/' + 'callgraph.png' 166 | pycallgraph.make_dot_graph(callgraph_path) 167 | 168 | print >>sys.stdout 169 | print >>sys.stdout, "Coverage HTML reports were output to '%s'" %output_dir 170 | if callgraph: 171 | if pycallgraph_enabled: 172 | print
>>sys.stdout, "Call graph was output to '%s'" %callgraph_path 172 | else: 173 | print >>sys.stdout, "Call graph was not generated: Install 'pycallgraph' module to do so" 174 | 175 | else: 176 | report_method = cov.report 177 | 178 | if coverage_modules: 179 | if xml_out: 180 | # using the same output directory as the --xml function uses for testing 181 | if not os.path.isdir(os.path.join("temp", "xml")): 182 | os.makedirs(os.path.join("temp", "xml")) 183 | output_filename = 'temp/xml/coverage_output.xml' 184 | cov.xml_report(morfs=coverage_modules, outfile=output_filename) 185 | 186 | if not html_only: 187 | cov.report(coverage_modules, show_missing=1) 188 | 189 | return results 190 | 191 | 192 | def run_tests_xml (test_labels, verbosity=1, interactive=True, 193 | extra_tests=[], nodatabase=False): 194 | return run_tests(test_labels, verbosity, interactive, 195 | extra_tests, nodatabase, xml_out=True) 196 | -------------------------------------------------------------------------------- /src/test_extensions/testrunners/figleafcoverage.py: -------------------------------------------------------------------------------- 1 | import os 2 | import commands 3 | 4 | from django.test.utils import setup_test_environment, teardown_test_environment 5 | from django.test.simple import run_tests as django_test_runner 6 | 7 | import figleaf 8 | 9 | def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]): 10 | setup_test_environment() 11 | figleaf.start() 12 | test_results = django_test_runner(test_labels, verbosity, interactive, extra_tests) 13 | figleaf.stop() 14 | if not os.path.isdir(os.path.join("temp", "figleaf")): os.makedirs(os.path.join("temp", "figleaf")) 15 | file_name = "temp/figleaf/test_output.figleaf" 16 | figleaf.write_coverage(file_name) 17 | output = commands.getoutput("figleaf2html " + file_name + " --output-directory=temp/figleaf") 18 | print output 19 | return test_results 
-------------------------------------------------------------------------------- /src/test_extensions/testrunners/nodatabase.py: -------------------------------------------------------------------------------- 1 | """ 2 | Test runner that doesn't use the database. Contributed by 3 | Bradley Wright 4 | """ 5 | 6 | import os 7 | import unittest 8 | from glob import glob 9 | 10 | from django.test.utils import setup_test_environment, teardown_test_environment 11 | from django.conf import settings 12 | from django.test.simple import get_app, get_apps, build_test, build_suite 13 | 14 | import coverage 15 | 16 | def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]): 17 | """ 18 | Run the unit tests for all the test labels in the provided list. 19 | Labels must be of the form: 20 | - app.TestClass.test_method 21 | Run a single specific test method 22 | - app.TestClass 23 | Run all the test methods in a given class 24 | - app 25 | Search for doctests and unittests in the named application. 26 | 27 | When looking for tests, the test runner will look in the models and 28 | tests modules for the application. 29 | 30 | A list of 'extra' tests may also be provided; these tests 31 | will be added to the test suite. 32 | 33 | Returns the number of tests that failed. 34 | """ 35 | setup_test_environment() 36 | 37 | settings.DEBUG = False 38 | suite = unittest.TestSuite() 39 | 40 | modules_to_cover = [] 41 | 42 | # if passed a list of tests... 43 | if test_labels: 44 | for label in test_labels: 45 | if '.'
in label: 46 | suite.addTest(build_test(label)) 47 | else: 48 | app = get_app(label) 49 | suite.addTest(build_suite(app)) 50 | # ...otherwise use all installed 51 | else: 52 | for app in get_apps(): 53 | # skip apps named "Django" because they use a database 54 | if not app.__name__.startswith('django'): 55 | # get the actual app name 56 | app_name = app.__name__.replace('.models', '') 57 | # get a list of the files inside that module 58 | files = glob('%s/*.py' % app_name) 59 | # remove models because we don't use them, stupid 60 | new_files = [i for i in files if not i.endswith('models.py')] 61 | modules_to_cover.extend(new_files) 62 | # actually test the file 63 | suite.addTest(build_suite(app)) 64 | 65 | for test in extra_tests: 66 | suite.addTest(test) 67 | 68 | result = unittest.TextTestRunner(verbosity=verbosity).run(suite) 69 | 70 | teardown_test_environment() 71 | 72 | return len(result.failures) + len(result.errors) 73 | 74 | def run_tests_with_coverage(test_labels, verbosity=1, interactive=True, extra_tests=[], xml_out=False): 75 | """ 76 | Run the unit tests for all the test labels in the provided list. 77 | Labels must be of the form: 78 | - app.TestClass.test_method 79 | Run a single specific test method 80 | - app.TestClass 81 | Run all the test methods in a given class 82 | - app 83 | Search for doctests and unittests in the named application. 84 | 85 | When looking for tests, the test runner will look in the models and 86 | tests modules for the application. 87 | 88 | A list of 'extra' tests may also be provided; these tests 89 | will be added to the test suite. 90 | 91 | Returns the number of tests that failed. 92 | """ 93 | setup_test_environment() 94 | 95 | settings.DEBUG = False 96 | suite = unittest.TestSuite() 97 | 98 | modules_to_cover = [] 99 | 100 | # start doing some coverage action 101 | cov = coverage.coverage() 102 | cov.erase() 103 | cov.start() 104 | 105 | # if passed a list of tests... 
106 | if test_labels: 107 | for label in test_labels: 108 | if '.' in label: 109 | suite.addTest(build_test(label)) 110 | else: 111 | app = get_app(label) 112 | suite.addTest(build_suite(app)) 113 | # ...otherwise use all installed 114 | else: 115 | for app in get_apps(): 116 | # skip Django's own apps because they use a database 117 | if not app.__name__.startswith('django'): 118 | # get the actual app name 119 | app_name = app.__name__.replace('.models', '') 120 | # get a list of the files inside that module 121 | files = glob('%s/*.py' % app_name) 122 | # remove models because we don't use them 123 | new_files = [i for i in files if not i.endswith('models.py')] 124 | modules_to_cover.extend(new_files) 125 | # actually test the file 126 | suite.addTest(build_suite(app)) 127 | 128 | for test in extra_tests: 129 | suite.addTest(test) 130 | 131 | result = unittest.TextTestRunner(verbosity=verbosity).run(suite) 132 | 133 | teardown_test_environment() 134 | 135 | # stop coverage 136 | cov.stop() 137 | 138 | # output results 139 | print '' 140 | print '--------------------------' 141 | print 'Unit test coverage results' 142 | print '--------------------------' 143 | print '' 144 | if xml_out: 145 | # using the same output directory as the --xml function uses for testing 146 | if not os.path.isdir(os.path.join("temp", "xml")): 147 | os.makedirs(os.path.join("temp", "xml")) 148 | output_filename = 'temp/xml/coverage_output.xml' 149 | cov.xml_report(morfs=modules_to_cover, outfile=output_filename) 150 | cov.report(modules_to_cover, show_missing=1) 151 | 152 | return len(result.failures) + len(result.errors) 153 | 154 | def run_tests_with_xmlcoverage(test_labels, verbosity=1, interactive=True, extra_tests=[]): 155 | return run_tests_with_coverage(test_labels, verbosity, interactive, extra_tests, xml_out=True) 156 | 157 | -------------------------------------------------------------------------------- /src/test_extensions/testrunners/xmloutput.py:
-------------------------------------------------------------------------------- 1 | import time, traceback, string 2 | 3 | from xmlunit.unittest import _WritelnDecorator, XmlTextTestRunner as his_XmlTextTestRunner 4 | 5 | from django.test.simple import * 6 | from xml.sax.saxutils import escape 7 | 8 | try: 9 | # The django.utils.unittest alias is available in Django >= 1.3 10 | from django.utils import unittest 11 | except ImportError: 12 | import unittest 13 | 14 | try: 15 | class XMLTestSuiteRunner(DjangoTestSuiteRunner): 16 | def run_suite(self, suite, **kwargs): 17 | return XMLTestRunner(verbosity=self.verbosity).run(suite) 18 | except NameError: # DjangoTestSuiteRunner is not available in Django < 1.2 19 | pass 20 | 21 | def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]): 22 | setup_test_environment() 23 | 24 | settings.DEBUG = False 25 | suite = unittest.TestSuite() 26 | 27 | if test_labels: 28 | for label in test_labels: 29 | if '.' in label: 30 | suite.addTest(build_test(label)) 31 | else: 32 | app = get_app(label) 33 | suite.addTest(build_suite(app)) 34 | else: 35 | for app in get_apps(): 36 | suite.addTest(build_suite(app)) 37 | 38 | for test in extra_tests: 39 | suite.addTest(test) 40 | 41 | old_name = settings.DATABASE_NAME 42 | from django.db import connection 43 | connection.creation.create_test_db(verbosity, autoclobber=not interactive) 44 | result = XMLTestRunner(verbosity=verbosity).run(suite) 45 | connection.creation.destroy_test_db(old_name, verbosity) 46 | 47 | teardown_test_environment() 48 | 49 | return len(result.failures) + len(result.errors) 50 | 51 | 52 | class XMLTestRunner(his_XmlTextTestRunner): 53 | def _makeResult(self): 54 | return _XmlTextTestResult(self.testResults, self.descriptions, self.verbosity) 55 | 56 | class _XmlTextTestResult(unittest.TestResult): 57 | """A test result class that can print xml formatted text results to a stream. 58 | 59 | Used by XmlTextTestRunner. 
60 | """ 61 | #separator1 = '=' * 70 62 | #separator2 = '-' * 70 63 | def __init__(self, stream, descriptions, verbosity): 64 | unittest.TestResult.__init__(self) 65 | self.stream = _WritelnDecorator(stream) 66 | self.showAll = verbosity > 1 67 | self.descriptions = descriptions 68 | self._lastWas = 'success' 69 | self._errorsAndFailures = "" 70 | self._startTime = 0.0 71 | self.params="" 72 | 73 | def getDescription(self, test): 74 | if self.descriptions: 75 | return test.shortDescription() or str(test) 76 | else: 77 | return str(test) 78 | 79 | def startTest(self, test): # CONSIDER why are there 2 startTests in here? 80 | self._startTime = time.time() 81 | test._extraXML = '' 82 | test._extraAssertions = [] 83 | unittest.TestResult.startTest(self, test) 84 | self.stream.write('') 97 | if self._lastWas != 'success': 98 | if self._lastWas == 'error': 99 | self.stream.write(self._errorsAndFailures) 100 | elif self._lastWas == 'failure': 101 | self.stream.write(self._errorsAndFailures) 102 | else: 103 | assert(False) 104 | 105 | seen = {} 106 | 107 | for assertion in test._extraAssertions: 108 | if not seen.has_key(assertion): 109 | self._addAssertion(assertion[:110]) # :110 avoids tl;dr TODO use a lexical truncator 110 | seen[assertion] = True 111 | 112 | self.stream.write('') 113 | self._errorsAndFailures = "" 114 | 115 | if test._extraXML != '': 116 | self.stream.write(test._extraXML) 117 | 118 | def _addAssertion(self, diagnostic): 119 | diagnostic = _cleanHTML(diagnostic) 120 | self.stream.write('' + diagnostic + '') 121 | 122 | def addSuccess(self, test): 123 | unittest.TestResult.addSuccess(self, test) 124 | self._lastWas = 'success' 125 | 126 | def addError(self, test, err): 127 | unittest.TestResult.addError(self, test, err) 128 | if err[0] is KeyboardInterrupt: 129 | self.shouldStop = 1 130 | self._lastWas = 'error' 131 | self._errorsAndFailures += '' % err[0].__name__ 132 | for line in apply(traceback.format_exception, err): 133 | for l in 
line.split("\n")[:-1]: 134 | self._errorsAndFailures += escape(l) 135 | self._errorsAndFailures += "" 136 | 137 | def addFailure(self, test, err): 138 | unittest.TestResult.addFailure(self, test, err) 139 | if err[0] is KeyboardInterrupt: 140 | self.shouldStop = 1 141 | self._lastWas = 'failure' 142 | self._errorsAndFailures += '' % err[0].__name__ 143 | for line in apply(traceback.format_exception, err): 144 | for l in line.split("\n")[:-1]: 145 | self._errorsAndFailures += escape(l) 146 | self._errorsAndFailures += "" 147 | 148 | def printErrors(self): 149 | pass #assert False 150 | 151 | def printErrorList(self, flavour, errors): 152 | assert False 153 | 154 | def _cleanHTML(whut): 155 | return whut.replace('"', '"'). \ 156 | replace('<', '<'). \ 157 | replace('>', '>') 158 | -------------------------------------------------------------------------------- /src/test_extensions/testrunners/xmlunit/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/garethr/django-test-extensions/e77d2cb16820fd4d5030ba52bbfe148a96ae8066/src/test_extensions/testrunners/xmlunit/__init__.py -------------------------------------------------------------------------------- /src/test_extensions/testrunners/xmlunit/unittest.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | ''' 3 | Copyright (c) Members of the EGEE Collaboration. 2004. 
4 | http://www.eu-egee.org 5 | 6 | File: unittest.py 7 | 8 | Authors: Marc-Elian Begin 9 | 10 | Version info: $Id: unittest.py,v 1.5 2004/10/20 21:22:08 mbegin Exp $ 11 | Release: $Name: $ 12 | 13 | Revision history: 14 | $Log: unittest.py,v $ 15 | Revision 1.5 2004/10/20 21:22:08 mbegin 16 | Attempted to create a generic set of Python tasks, and refactored accordingly 17 | 18 | Revision 1.4 2004/10/19 17:56:46 mbegin 19 | Refactoring and fixed bug on total testsuite test time 20 | 21 | Revision 1.3 2004/10/19 16:40:54 mbegin 22 | Removed requirement for Python 2.3. Now works with 2.2 23 | 24 | Revision 1.2 2004/09/10 12:45:38 mbegin 25 | Upgraded unittest.py such that the new 'writeParameter' method writes xml 26 | elements in the system-out element of the test report. The new util.py file 27 | provides a function to print the name/value pair for a given test report. 28 | 29 | Revision 1.1.1.1 2004/07/22 13:07:25 leanne 30 | Initial import of unit testing tools 31 | 32 | 33 | 34 | Python unit testing framework, based on Erich Gamma's JUnit and Kent Beck's 35 | Smalltalk testing framework. 36 | 37 | This module contains the core framework classes that form the basis of 38 | specific test cases and suites (TestCase, TestSuite etc.), and also a 39 | text-based utility class for running the tests and reporting the results 40 | (TextTestRunner). 
41 | 42 | Simple usage: 43 | 44 | import unittest 45 | 46 | class IntegerArithmeticTestCase(unittest.TestCase): 47 | def testAdd(self): ## test method names begin 'test*' 48 | self.assertEquals((1 + 2), 3) 49 | self.assertEquals(0 + 1, 1) 50 | def testMultiply(self): 51 | self.assertEquals((0 * 10), 0) 52 | self.assertEquals((5 * 8), 40) 53 | 54 | if __name__ == '__main__': 55 | unittest.main() 56 | 57 | Further information is available in the bundled documentation, and from 58 | 59 | http://pyunit.sourceforge.net/ 60 | 61 | Copyright (c) 1999, 2000, 2001 Steve Purcell 62 | This module is free software, and you may redistribute it and/or modify 63 | it under the same terms as Python itself, so long as this copyright message 64 | and disclaimer are retained in their original form. 65 | 66 | IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, 67 | SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF 68 | THIS CODE, EVEN IF THE AUTHOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH 69 | DAMAGE. 70 | 71 | THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT 72 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A 73 | PARTICULAR PURPOSE. THE CODE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, 74 | AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE, 75 | SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. 76 | ''' 77 | 78 | __author__ = "Steve Purcell" 79 | __email__ = "stephen_purcell at yahoo dot com" 80 | __version__ = "$Revision: 1.5 $"[11:-2] 81 | 82 | import time 83 | import sys 84 | import traceback 85 | import string 86 | import os 87 | import types 88 | 89 | ############################################################################## 90 | # Test framework core 91 | ############################################################################## 92 | 93 | class TestResult: 94 | """Holder for test result information.
95 | 96 | Test results are automatically managed by the TestCase and TestSuite 97 | classes, and do not need to be explicitly manipulated by writers of tests. 98 | 99 | Each instance holds the total number of tests run, and collections of 100 | failures and errors that occurred among those test runs. The collections 101 | contain tuples of (testcase, exceptioninfo), where exceptioninfo is a 102 | tuple of values as returned by sys.exc_info(). 103 | """ 104 | def __init__(self): 105 | self.failures = [] 106 | self.errors = [] 107 | self.testsRun = 0 108 | self.shouldStop = 0 109 | self.params = "" 110 | 111 | def startTest(self, test): 112 | "Called when the given test is about to be run" 113 | self.testsRun = self.testsRun + 1 114 | 115 | def stopTest(self, test): 116 | "Called when the given test has been run" 117 | pass 118 | 119 | def addError(self, test, err): 120 | "Called when an error has occurred" 121 | self.errors.append((test, err)) 122 | 123 | def addFailure(self, test, err): 124 | "Called when a failure has occurred" 125 | self.failures.append((test, err)) 126 | 127 | def addSuccess(self, test): 128 | "Called when a test has completed successfully" 129 | pass 130 | 131 | def wasSuccessful(self): 132 | "Tells whether or not this result was a success" 133 | return len(self.failures) == len(self.errors) == 0 134 | 135 | def stop(self): 136 | "Indicates that the tests should be aborted" 137 | self.shouldStop = 1 138 | 139 | def __repr__(self): 140 | return "<%s run=%i errors=%i failures=%i>" % \ 141 | (self.__class__, self.testsRun, len(self.errors), 142 | len(self.failures)) 143 | 144 | 145 | class TestCase: 146 | """A class whose instances are single test cases. 147 | 148 | By default, the test code itself should be placed in a method named 149 | 'runTest'. 150 | 151 | If the fixture may be used for many test cases, create as 152 | many test methods as are needed. 
When instantiating such a TestCase 153 | subclass, specify in the constructor arguments the name of the test method 154 | that the instance is to execute. 155 | 156 | Test authors should subclass TestCase for their own tests. Construction 157 | and deconstruction of the test's environment ('fixture') can be 158 | implemented by overriding the 'setUp' and 'tearDown' methods respectively. 159 | 160 | If it is necessary to override the __init__ method, the base class 161 | __init__ method must always be called. It is important that subclasses 162 | should not change the signature of their __init__ method, since instances 163 | of the classes are instantiated automatically by parts of the framework 164 | in order to be run. 165 | """ 166 | 167 | # This attribute determines which exception will be raised when 168 | # the instance's assertion methods fail; test methods raising this 169 | # exception will be deemed to have 'failed' rather than 'errored' 170 | 171 | failureException = AssertionError 172 | 173 | def __init__(self, methodName='runTest'): 174 | """Create an instance of the class that will use the named test 175 | method when executed. Raises a ValueError if the instance does 176 | not have a method with the specified name. 177 | """ 178 | self.result = TestResult() 179 | try: 180 | self.__testMethodName = methodName 181 | testMethod = getattr(self, methodName) 182 | self.__testMethodDoc = testMethod.__doc__ 183 | except AttributeError: 184 | raise ValueError, "no such test method in %s: %s" % \ 185 | (self.__class__, methodName) 186 | 187 | def setUp(self): 188 | "Hook method for setting up the test fixture before exercising it." 189 | pass 190 | 191 | def tearDown(self): 192 | "Hook method for deconstructing the test fixture after testing it." 
193 | pass 194 | 195 | def countTestCases(self): 196 | return 1 197 | 198 | # def defaultTestResult(self): 199 | # return TestResult() 200 | 201 | def shortDescription(self): 202 | """Returns a one-line description of the test, or None if no 203 | description has been provided. 204 | 205 | The default implementation of this method returns the first line of 206 | the specified test method's docstring. 207 | """ 208 | doc = self.__testMethodDoc 209 | return doc and string.strip(string.split(doc, "\n")[0]) or None 210 | 211 | def id(self): 212 | return "%s.%s" % (self.__class__, self.__testMethodName) 213 | 214 | def __str__(self): 215 | return "%s (%s)" % (self.__testMethodName, self.__class__) 216 | 217 | def __repr__(self): 218 | return "<%s testMethod=%s>" % \ 219 | (self.__class__, self.__testMethodName) 220 | 221 | def run(self, result=None): 222 | if result is not None: self.result = result 223 | return self(result) 224 | 225 | def __call__(self, result=None): 226 | if result is not None: self.result = result 227 | result.startTest(self) 228 | testMethod = getattr(self, self.__testMethodName) 229 | try: 230 | try: 231 | self.setUp() 232 | except: 233 | result.addError(self,self.__exc_info()) 234 | return 235 | 236 | ok = 0 237 | try: 238 | testMethod() 239 | ok = 1 240 | except self.failureException, e: 241 | result.addFailure(self,self.__exc_info()) 242 | except: 243 | result.addError(self,self.__exc_info()) 244 | 245 | try: 246 | self.tearDown() 247 | except: 248 | result.addError(self,self.__exc_info()) 249 | ok = 0 250 | if ok: result.addSuccess(self) 251 | finally: 252 | result.stopTest(self) 253 | 254 | def debug(self): 255 | """Run the test without collecting errors in a TestResult""" 256 | self.setUp() 257 | getattr(self, self.__testMethodName)() 258 | self.tearDown() 259 | 260 | def __exc_info(self): 261 | """Return a version of sys.exc_info() with the traceback frame 262 | minimised; usually the top level of the traceback frame is not 263 | needed. 
264 | """ 265 | exctype, excvalue, tb = sys.exc_info() 266 | if sys.platform[:4] == 'java': ## tracebacks look different in Jython 267 | return (exctype, excvalue, tb) 268 | newtb = tb.tb_next 269 | if newtb is None: 270 | return (exctype, excvalue, tb) 271 | return (exctype, excvalue, newtb) 272 | 273 | def fail(self, msg=None): 274 | """Fail immediately, with the given message.""" 275 | raise self.failureException, msg 276 | 277 | def failIf(self, expr, msg=None): 278 | "Fail the test if the expression is true." 279 | if expr: raise self.failureException, msg 280 | 281 | def failUnless(self, expr, msg=None): 282 | """Fail the test unless the expression is true.""" 283 | if not expr: raise self.failureException, msg 284 | 285 | def failUnlessRaises(self, excClass, callableObj, *args, **kwargs): 286 | """Fail unless an exception of class excClass is thrown 287 | by callableObj when invoked with arguments args and keyword 288 | arguments kwargs. If a different type of exception is 289 | thrown, it will not be caught, and the test case will be 290 | deemed to have suffered an error, exactly as for an 291 | unexpected exception. 292 | """ 293 | try: 294 | apply(callableObj, args, kwargs) 295 | except excClass: 296 | return 297 | else: 298 | if hasattr(excClass,'__name__'): excName = excClass.__name__ 299 | else: excName = str(excClass) 300 | raise self.failureException, excName 301 | 302 | def failUnlessEqual(self, first, second, msg=None): 303 | """Fail if the two objects are unequal as determined by the '!=' 304 | operator. 305 | """ 306 | if first != second: 307 | raise self.failureException, (msg or '%s != %s' % (first, second)) 308 | 309 | def failIfEqual(self, first, second, msg=None): 310 | """Fail if the two objects are equal as determined by the '==' 311 | operator. 
312 |         """
313 |         if first == second:
314 |             raise self.failureException, (msg or '%s == %s' % (first, second))
315 | 
316 |     assertEqual = assertEquals = failUnlessEqual
317 | 
318 |     assertNotEqual = assertNotEquals = failIfEqual
319 | 
320 |     assertRaises = failUnlessRaises
321 | 
322 |     assert_ = failUnless
323 | 
324 |     def writeParameter(self, paramName, paramValue):
325 |         testcase = self.__class__
326 |         testmethod = self.id().split('.')[-1]
327 |         self.result.params += '<param name="%s" value="%s" class="%s" method="%s" />' % (paramName, paramValue, testcase, testmethod)
328 | 
329 | 
330 | class TestSuite:
331 |     """A test suite is a composite test consisting of a number of TestCases.
332 | 
333 |     For use, create an instance of TestSuite, then add test case instances.
334 |     When all tests have been added, the suite can be passed to a test
335 |     runner, such as TextTestRunner. It will run the individual test cases
336 |     in the order in which they were added, aggregating the results. When
337 |     subclassing, do not forget to call the base class constructor.
338 |     """
339 |     def __init__(self, tests=()):
340 |         self._tests = []
341 |         self.addTests(tests)
342 | 
343 |     def __repr__(self):
344 |         return "<%s tests=%s>" % (self.__class__, self._tests)
345 | 
346 |     __str__ = __repr__
347 | 
348 |     def countTestCases(self):
349 |         cases = 0
350 |         for test in self._tests:
351 |             cases = cases + test.countTestCases()
352 |         return cases
353 | 
354 |     def addTest(self, test):
355 |         self._tests.append(test)
356 | 
357 |     def addTests(self, tests):
358 |         for test in tests:
359 |             self.addTest(test)
360 | 
361 |     def run(self, result):
362 |         return self(result)
363 | 
364 |     def __call__(self, result):
365 |         for test in self._tests:
366 |             if result.shouldStop:
367 |                 break
368 |             test(result)
369 |         return result
370 | 
371 |     def debug(self):
372 |         """Run the tests without collecting errors in a TestResult"""
373 |         for test in self._tests: test.debug()
374 | 
375 | 
376 | 
377 | class FunctionTestCase(TestCase):
378 |     """A test case that wraps a test function.
379 | 380 | This is useful for slipping pre-existing test functions into the 381 | PyUnit framework. Optionally, set-up and tidy-up functions can be 382 | supplied. As with TestCase, the tidy-up ('tearDown') function will 383 | always be called if the set-up ('setUp') function ran successfully. 384 | """ 385 | 386 | def __init__(self, testFunc, setUp=None, tearDown=None, 387 | description=None): 388 | TestCase.__init__(self) 389 | self.__setUpFunc = setUp 390 | self.__tearDownFunc = tearDown 391 | self.__testFunc = testFunc 392 | self.__description = description 393 | 394 | def setUp(self): 395 | if self.__setUpFunc is not None: 396 | self.__setUpFunc() 397 | 398 | def tearDown(self): 399 | if self.__tearDownFunc is not None: 400 | self.__tearDownFunc() 401 | 402 | def runTest(self): 403 | self.__testFunc() 404 | 405 | def id(self): 406 | return self.__testFunc.__name__ 407 | 408 | def __str__(self): 409 | return "%s (%s)" % (self.__class__, self.__testFunc.__name__) 410 | 411 | def __repr__(self): 412 | return "<%s testFunc=%s>" % (self.__class__, self.__testFunc) 413 | 414 | def shortDescription(self): 415 | if self.__description is not None: return self.__description 416 | doc = self.__testFunc.__doc__ 417 | return doc and string.strip(string.split(doc, "\n")[0]) or None 418 | 419 | 420 | 421 | ############################################################################## 422 | # Locating and loading tests 423 | ############################################################################## 424 | 425 | class TestLoader: 426 | """This class is responsible for loading tests according to various 427 | criteria and returning them wrapped in a Test 428 | """ 429 | testMethodPrefix = 'test' 430 | sortTestMethodsUsing = cmp 431 | suiteClass = TestSuite 432 | 433 | def loadTestsFromTestCase(self, testCaseClass): 434 | """Return a suite of all tests cases contained in testCaseClass""" 435 | return self.suiteClass(map(testCaseClass, 436 | 
self.getTestCaseNames(testCaseClass))) 437 | 438 | def loadTestsFromModule(self, module): 439 | """Return a suite of all tests cases contained in the given module""" 440 | tests = [] 441 | for name in dir(module): 442 | obj = getattr(module, name) 443 | if type(obj) == types.ClassType and issubclass(obj, TestCase): 444 | tests.append(self.loadTestsFromTestCase(obj)) 445 | return self.suiteClass(tests) 446 | 447 | def loadTestsFromName(self, name, module=None): 448 | """Return a suite of all tests cases given a string specifier. 449 | 450 | The name may resolve either to a module, a test case class, a 451 | test method within a test case class, or a callable object which 452 | returns a TestCase or TestSuite instance. 453 | 454 | The method optionally resolves the names relative to a given module. 455 | """ 456 | parts = string.split(name, '.') 457 | if module is None: 458 | if not parts: 459 | raise ValueError, "incomplete test name: %s" % name 460 | else: 461 | parts_copy = parts[:] 462 | while parts_copy: 463 | try: 464 | module = __import__(string.join(parts_copy,'.')) 465 | break 466 | except ImportError: 467 | del parts_copy[-1] 468 | if not parts_copy: raise 469 | parts = parts[1:] 470 | obj = module 471 | for part in parts: 472 | obj = getattr(obj, part) 473 | 474 | import unittest 475 | if type(obj) == types.ModuleType: 476 | return self.loadTestsFromModule(obj) 477 | elif type(obj) == types.ClassType and issubclass(obj, unittest.TestCase): 478 | return self.loadTestsFromTestCase(obj) 479 | elif type(obj) == types.UnboundMethodType: 480 | return obj.im_class(obj.__name__) 481 | elif callable(obj): 482 | test = obj() 483 | if not isinstance(test, unittest.TestCase) and \ 484 | not isinstance(test, unittest.TestSuite): 485 | raise ValueError, \ 486 | "calling %s returned %s, not a test" % (obj,test) 487 | return test 488 | else: 489 | raise ValueError, "don't know how to make test from: %s" % obj 490 | 491 | def loadTestsFromNames(self, names, module=None): 
492 | """Return a suite of all tests cases found using the given sequence 493 | of string specifiers. See 'loadTestsFromName()'. 494 | """ 495 | suites = [] 496 | for name in names: 497 | suites.append(self.loadTestsFromName(name, module)) 498 | return self.suiteClass(suites) 499 | 500 | def getTestCaseNames(self, testCaseClass): 501 | """Return a sorted sequence of method names found within testCaseClass 502 | """ 503 | testFnNames = filter(lambda n,p=self.testMethodPrefix: n[:len(p)] == p, 504 | dir(testCaseClass)) 505 | for baseclass in testCaseClass.__bases__: 506 | for testFnName in self.getTestCaseNames(baseclass): 507 | if testFnName not in testFnNames: # handle overridden methods 508 | testFnNames.append(testFnName) 509 | if self.sortTestMethodsUsing: 510 | testFnNames.sort(self.sortTestMethodsUsing) 511 | return testFnNames 512 | 513 | 514 | 515 | defaultTestLoader = TestLoader() 516 | 517 | 518 | ############################################################################## 519 | # Patches for old functions: these functions should be considered obsolete 520 | ############################################################################## 521 | 522 | def _makeLoader(prefix, sortUsing, suiteClass=None): 523 | loader = TestLoader() 524 | loader.sortTestMethodsUsing = sortUsing 525 | loader.testMethodPrefix = prefix 526 | if suiteClass: loader.suiteClass = suiteClass 527 | return loader 528 | 529 | def getTestCaseNames(testCaseClass, prefix, sortUsing=cmp): 530 | return _makeLoader(prefix, sortUsing).getTestCaseNames(testCaseClass) 531 | 532 | def makeSuite(testCaseClass, prefix='test', sortUsing=cmp, suiteClass=TestSuite): 533 | return _makeLoader(prefix, sortUsing, suiteClass).loadTestsFromTestCase(testCaseClass) 534 | 535 | def findTestCases(module, prefix='test', sortUsing=cmp, suiteClass=TestSuite): 536 | return _makeLoader(prefix, sortUsing, suiteClass).loadTestsFromModule(module) 537 | 538 | 539 | 
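The loader machinery above (`TestLoader`, `getTestCaseNames`, `makeSuite`) mirrors the API that survives in today's stdlib `unittest`. A minimal sketch of the same collection-and-run flow, using the modern standard library rather than this vendored copy (class and method names here are illustrative, not from the project):

```python
import unittest

class MathTests(unittest.TestCase):
    """Methods matching the 'test' prefix are collected automatically."""
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_truthy(self):
        self.assertTrue([0])  # a non-empty list is truthy

# Equivalent of loadTestsFromTestCase()/makeSuite() above
loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(MathTests)

# Run the suite against a bare TestResult, as TestSuite.run() does
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, result.wasSuccessful())  # → 2 True
```

As in the code above, the loader also walks base classes for inherited `test*` methods and sorts the names, so test order is alphabetical, not declaration order.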
##############################################################################
540 | # Text UI
541 | ##############################################################################
542 | 
543 | class _WritelnDecorator:
544 |     """Used to decorate file-like objects with a handy 'writeln' method"""
545 |     def __init__(self,stream):
546 |         self.stream = stream
547 | 
548 |     def __getattr__(self, attr):
549 |         return getattr(self.stream,attr)
550 | 
551 |     def writeln(self, *args):
552 |         if args: apply(self.write, args)
553 |         self.write('\n') # text-mode streams translate to \r\n if needed
554 | 
555 | class _XmlTextTestResult(TestResult):
556 |     """A test result class that can print xml formatted text results to a stream.
557 | 
558 |     Used by XmlTextTestRunner.
559 |     """
560 |     def __init__(self, stream, descriptions, verbosity):
561 |         TestResult.__init__(self)
562 |         self.stream = _WritelnDecorator(stream)
563 |         self.showAll = verbosity > 1
564 |         self.descriptions = descriptions
565 |         self._lastWas = 'success'
566 |         self._errorsAndFailures = ""
567 |         self._startTime = 0.0
568 | 
569 |     def getDescription(self, test):
570 |         if self.descriptions:
571 |             return test.shortDescription() or str(test)
572 |         else:
573 |             return str(test)
574 | 
575 |     def startTest(self, test):
576 |         self._startTime = time.time()
577 |         TestResult.startTest(self, test)
578 |         self.stream.write('<testcase name="%s"' % self.getDescription(test))
579 | 
580 |     def stopTest(self, test):
581 |         stopTime = time.time()
582 |         deltaTime = stopTime - self._startTime
583 |         TestResult.stopTest(self, test)
584 |         self.stream.write(' time="%.4f"' % deltaTime)
585 |         if self._lastWas == 'success':
586 |             self.stream.write('/>')
587 |         else:
588 |             self.stream.write('>')
589 |             if self._lastWas == 'error':
590 |                 self.stream.write(self._errorsAndFailures)
591 |             elif self._lastWas == 'failure':
592 |                 self.stream.write(self._errorsAndFailures)
593 |             else:
594 |                 assert False
595 |             self.stream.write('</testcase>')
596 |         self._errorsAndFailures = ""
597 | 
598 |     def addSuccess(self, test):
599 |         TestResult.addSuccess(self, test)
600 |         self._lastWas = 'success'
601 | 
602 |     def addError(self, test, err):
603 |         TestResult.addError(self, test, err)
604 |         if err[0] is KeyboardInterrupt:
605 |             self.shouldStop = 1
606 |         self._lastWas = 'error'
607 |         self._errorsAndFailures += '<error type="%s">' % err[0]
608 |         for line in apply(traceback.format_exception, err):
609 |             for l in string.split(line,"\n")[:-1]:
610 |                 self._errorsAndFailures += "%s" % l
611 |         self._errorsAndFailures += "</error>"
612 | 
613 |     def addFailure(self, test, err):
614 |         TestResult.addFailure(self, test, err)
615 |         if err[0] is KeyboardInterrupt:
616 |             self.shouldStop = 1
617 |         self._lastWas = 'failure'
618 |         self._errorsAndFailures += '<failure type="%s">' % err[0]
619 |         for line in apply(traceback.format_exception, err):
620 |             for l in string.split(line,"\n")[:-1]:
621 |                 self._errorsAndFailures += "%s" % l
622 |         self._errorsAndFailures += "</failure>"
623 | 
624 |     def printErrors(self):
625 |         assert False
626 | 
627 |     def printErrorList(self, flavour, errors):
628 |         assert False
629 | 
630 | class _TextTestResult(TestResult):
631 |     """A test result class that can print formatted text results to a stream.
632 | 
633 |     Used by TextTestRunner.
634 |     """
635 |     separator1 = '=' * 70
636 |     separator2 = '-' * 70
637 | 
638 |     def __init__(self, stream, descriptions, verbosity):
639 |         TestResult.__init__(self)
640 |         self.stream = stream
641 |         self.showAll = verbosity > 1
642 |         self.dots = verbosity == 1
643 |         self.descriptions = descriptions
644 | 
645 |     def getDescription(self, test):
646 |         if self.descriptions:
647 |             return test.shortDescription() or str(test)
648 |         else:
649 |             return str(test)
650 | 
651 |     def startTest(self, test):
652 |         TestResult.startTest(self, test)
653 |         if self.showAll:
654 |             self.stream.write(self.getDescription(test))
655 |             self.stream.write(" ... 
") 656 | 657 | def addSuccess(self, test): 658 | TestResult.addSuccess(self, test) 659 | if self.showAll: 660 | self.stream.writeln("ok") 661 | elif self.dots: 662 | self.stream.write('.') 663 | 664 | def addError(self, test, err): 665 | TestResult.addError(self, test, err) 666 | if self.showAll: 667 | self.stream.writeln("ERROR") 668 | elif self.dots: 669 | self.stream.write('E') 670 | if err[0] is KeyboardInterrupt: 671 | self.shouldStop = 1 672 | 673 | def addFailure(self, test, err): 674 | TestResult.addFailure(self, test, err) 675 | if self.showAll: 676 | self.stream.writeln("FAIL") 677 | elif self.dots: 678 | self.stream.write('F') 679 | 680 | def printErrors(self): 681 | if self.dots or self.showAll: 682 | self.stream.writeln() 683 | self.printErrorList('ERROR', self.errors) 684 | self.printErrorList('FAIL', self.failures) 685 | 686 | def printErrorList(self, flavour, errors): 687 | for test, err in errors: 688 | self.stream.writeln(self.separator1) 689 | self.stream.writeln("%s: %s" % (flavour,self.getDescription(test))) 690 | self.stream.writeln(self.separator2) 691 | for line in apply(traceback.format_exception, err): 692 | for l in string.split(line,"\n")[:-1]: 693 | self.stream.writeln("%s" % l) 694 | 695 | 696 | class TextTestRunner: 697 | """A test runner class that displays results in textual form. 698 | 699 | It prints out the names of tests as they are run, errors as they 700 | occur, and a summary of the results at the end of the test run. 701 | """ 702 | def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1): 703 | self.stream = _WritelnDecorator(stream) 704 | self.descriptions = descriptions 705 | self.verbosity = verbosity 706 | 707 | def _makeResult(self): 708 | return _TextTestResult(self.stream, self.descriptions, self.verbosity) 709 | 710 | def run(self, test): 711 | "Run the given test case or test suite." 
712 | result = self._makeResult() 713 | startTime = time.time() 714 | test(result) 715 | stopTime = time.time() 716 | timeTaken = float(stopTime - startTime) 717 | result.printErrors() 718 | self.stream.writeln(result.separator2) 719 | run = result.testsRun 720 | self.stream.writeln("Ran %d test%s in %.3fs" % 721 | (run, run == 1 and "" or "s", timeTaken)) 722 | self.stream.writeln() 723 | if not result.wasSuccessful(): 724 | self.stream.write("FAILED (") 725 | failed, errored = map(len, (result.failures, result.errors)) 726 | if failed: 727 | self.stream.write("failures=%d" % failed) 728 | if errored: 729 | if failed: self.stream.write(", ") 730 | self.stream.write("errors=%d" % errored) 731 | self.stream.writeln(")") 732 | else: 733 | self.stream.writeln("OK") 734 | return result 735 | 736 | class XmlTextTestRunner: 737 | """A test runner class that displays results in xml form. 738 | 739 | The format is compatible with the Ant junit task xml format. 740 | """ 741 | 742 | class _StdOut: 743 | def __init__(self,std): 744 | self.reset() 745 | self._std = std 746 | return 747 | 748 | def write(self, string): 749 | self._string += string 750 | self._std.write(string) 751 | return 752 | 753 | def read(self): 754 | return self._string 755 | 756 | def reset(self): 757 | self._string = "" 758 | 759 | class _StringStream: 760 | def __init__(self): 761 | self.reset() 762 | return 763 | 764 | def write(self, string): 765 | self._string += string 766 | return 767 | 768 | def read(self): 769 | return self._string 770 | 771 | def reset(self): 772 | self._string = "" 773 | 774 | def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1): 775 | self.descriptions = descriptions 776 | self.verbosity = verbosity 777 | self.stdout = self._StdOut(sys.stdout) 778 | sys.stdout = self.stdout 779 | self.stderr = self._StdOut(sys.stderr) 780 | sys.stderr = self.stderr 781 | self.testResults = self._StringStream() 782 | self.totalTime = 0.0 783 | self.output = None 784 | 785 | def 
_openOutputFile(self, fileName): 786 | 787 | if not os.path.isdir(os.path.join("temp", "xml")): os.makedirs(os.path.join("temp", "xml")) 788 | 789 | self.outputFileName = 'temp/xml/test_output.xml' 790 | self.output = open(self.outputFileName,'w') 791 | 792 | def _makeResult(self): 793 | return _XmlTextTestResult(self.testResults, self.descriptions, self.verbosity) 794 | 795 | def _resetBuffers(self): 796 | self.testResults.reset() 797 | 798 | def run(self, test): 799 | "Run the given test case or test suite." 800 | result = self._makeResult() 801 | for t in test._tests: 802 | # get the name of the unit test file name (!!!) 803 | size = len(sys.argv[0]) 804 | if sys.argv[0].endswith('.py'): 805 | (filePath,fileName) = os.path.split(sys.argv[0]) 806 | fileName = fileName.split('.')[0] 807 | else: 808 | fileName = "%s" % string.split("%s" % t._tests[0]._tests[0].__class__,'.')[0] 809 | self._openOutputFile(fileName) 810 | startTime = time.time() 811 | t(result) 812 | stopTime = time.time() 813 | timeTaken = float(stopTime - startTime) 814 | self.totalTime += timeTaken 815 | self._writeReport(result,self.totalTime) 816 | # ??? 
this line
817 |         run = result.testsRun
818 |         return result
819 | 
820 |     def _writeReport(self, result, timeTaken):
821 |         if self.output != None:
822 |             self.output.write('<?xml version="1.0" encoding="UTF-8"?>'
823 |                               '<testsuite name="%s" tests="%d"'
824 |                               ' errors="%d" failures="%d" time="%.3f">'
825 |                               % (self.outputFileName,
826 |                                  result.testsRun,
827 |                                  len(result.errors),
828 |                                  len(result.failures), timeTaken))
829 |             self.output.write(self.testResults.read())
830 |             self.output.write('<properties>')
831 |             self.output.write(result.params)
832 |             self.output.write('</properties>'
833 |                               '<system-out><![CDATA[%s]]></system-out>'
834 |                               % self.stdout.read())
835 |             self.output.write('<system-err><![CDATA[%s]]>'
836 |                               '</system-err>'
837 |                               % self.stderr.read())
838 |             self.output.write('</testsuite>')
839 |             self.output.close()
840 |             self._resetBuffers()
841 |             # Write console report
842 |             print '======================================================================'
843 |             print 'Ran %d test%s in %.3fs' % (result.testsRun, result.testsRun == 1 and '' or 's', timeTaken)
844 |             print '----------------------------------------------------------------------'
845 |             print 'See generated report:', self.outputFileName,'\n'
846 |             msg = ''
847 |             if not result.wasSuccessful():
848 |                 msg += "FAILED ("
849 |                 failed, errored = map(len, (result.failures, result.errors))
850 |                 if failed:
851 |                     msg += "failures=%d" % failed
852 |                 if errored:
853 |                     if failed: msg += ", "
854 |                     msg += "errors=%d" % errored
855 |                 msg += ")"
856 |             else:
857 |                 msg += "OK"
858 |             print msg
859 |         else:
860 |             print '======================================================================'
861 |             print 'No tests to run'
862 |             print '----------------------------------------------------------------------'
863 |         return
864 | 
865 | ##############################################################################
866 | # Facilities for running tests from the command line
867 | ##############################################################################
868 | 
869 | class TestProgram:
870 |     """A command-line program that runs a set of tests; this is primarily
871 |     for making test modules conveniently executable.
872 |     """
873 |     USAGE = """\
874 | Usage: %(progName)s [options] [test] [...]
875 | 876 | Options: 877 | -h, --help Show this message 878 | -v, --verbose Verbose output 879 | -q, --quiet Minimal output 880 | -t, --text Text output (classic output) - default 881 | -x, --xml XML output (same format as JUnit) 882 | 883 | Examples: 884 | %(progName)s - run default set of tests 885 | %(progName)s MyTestSuite - run suite 'MyTestSuite' 886 | %(progName)s MyTestCase.testSomething - run MyTestCase.testSomething 887 | %(progName)s MyTestCase - run all 'test*' test methods 888 | in MyTestCase 889 | """ 890 | def __init__(self, module='__main__', defaultTest=None, 891 | argv=None, testRunner=None, testLoader=defaultTestLoader): 892 | if type(module) == type(''): 893 | self.module = __import__(module) 894 | for part in string.split(module,'.')[1:]: 895 | self.module = getattr(self.module, part) 896 | else: 897 | self.module = module 898 | if argv is None: 899 | argv = sys.argv 900 | self.verbosity = 2 901 | self.xmlReport = False 902 | self.defaultTest = defaultTest 903 | self.testRunner = testRunner 904 | self.testLoader = testLoader 905 | self.progName = os.path.basename(argv[0]) 906 | self.parseArgs(argv) 907 | self.runTests() 908 | 909 | def usageExit(self, msg=None): 910 | if msg: print msg 911 | print self.USAGE % self.__dict__ 912 | sys.exit(2) 913 | 914 | def parseArgs(self, argv): 915 | import getopt 916 | try: 917 | options, args = getopt.getopt(argv[1:], 'hHvqxt', 918 | ['help','verbose','quiet','xml','text']) 919 | except getopt.error, msg: 920 | self.usageExit(msg) 921 | for opt, value in options: 922 | if opt in ('-h','-H','--help'): 923 | self.usageExit() 924 | if opt in ('-q','--quiet'): 925 | self.verbosity = 0 926 | if opt in ('-v','--verbose'): 927 | self.verbosity = 2 928 | if opt in ('-x','--xml'): 929 | self.xmlReport = True 930 | if opt in ('-t','--text'): 931 | self.xmlReport = False 932 | if os.getenv('PYUNITXMLOUTPUT'): 933 | self.xmlReport = True 934 | if len(args) == 0 and self.defaultTest is None: 935 | self.test = 
self.testLoader.loadTestsFromModule(self.module)
936 |             return
937 |         if len(args) > 0:
938 |             self.testNames = args
939 |         else:
940 |             print 'Running default test: %s' % self.defaultTest
941 |             self.testNames = (self.defaultTest,)
942 |         self.createTests()
943 | 
944 |     def createTests(self):
945 |         self.test = self.testLoader.loadTestsFromNames(self.testNames,
946 |                                                        self.module)
947 | 
948 |     def runTests(self):
949 |         if self.testRunner is None:
950 |             if self.xmlReport:
951 |                 self.testRunner = XmlTextTestRunner(verbosity=self.verbosity)
952 |             else:
953 |                 self.testRunner = TextTestRunner(verbosity=self.verbosity)
954 |         result = self.testRunner.run(self.test)
955 |         sys.exit(not result.wasSuccessful())
956 | 
957 | main = TestProgram
958 | 
959 | ##############################################################################
960 | # Executing this module from the command line
961 | ##############################################################################
962 | 
963 | if __name__ == "__main__":
964 |     main(module=None)
965 | 
--------------------------------------------------------------------------------
/src/test_extensions/twill.py:
--------------------------------------------------------------------------------
1 | # Test classes inherit from the Django TestCase
2 | from django.test import TestCase
3 | 
4 | # StringIO gives us an in-memory buffer to capture Twill's output
5 | from StringIO import StringIO
6 | 
7 | # Twill provides a simple DSL for a number of functional tasks
8 | import twill
9 | from twill import commands as tc
10 | from twill.errors import TwillAssertionError
11 | 
12 | class TwillCommon(TestCase):
13 |     """
14 |     A base class for use with Twill commands. Provides a few helper methods and setup.
15 |     """
16 |     def setUp(self):
17 |         "Run before each test in this class; silences Twill by redirecting its output to a buffer"
18 |         twill.set_output(StringIO())
19 | 
20 |     def find(self, regex):
21 |         """
22 |         By default Twill commands throw exceptions rather than failures when
23 |         an assertion fails. Here we wrap the Twill find command and report
24 |         a missing match as a test failure with a helpful message.
25 |         """
26 |         try:
27 |             tc.go(self.url)
28 |             tc.find(regex)
29 |         except TwillAssertionError:
30 |             self.fail("No match to '%s' on %s" % (regex, self.url))
31 | 
32 |     def code(self, status):
33 |         """
34 |         By default Twill commands throw exceptions rather than failures when
35 |         an assertion fails. Here we wrap the Twill code command and report
36 |         an unexpected status code as a test failure with a helpful message.
37 |         """
38 |         try:
39 |             tc.go(self.url)
40 |             tc.code(status)
41 |         except TwillAssertionError:
42 |             self.fail("%s did not return a %s" % (self.url, status))
--------------------------------------------------------------------------------
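`TwillCommon` illustrates a general testing pattern: converting a library's exceptions into test *failures*, so a missing page element is reported as FAIL (an assertion that didn't hold) rather than ERROR (broken test code). A self-contained sketch of the same pattern using only the stdlib `unittest`; `FakePageError`, `PAGES` and `find` are hypothetical stand-ins for twill's `TwillAssertionError` and `tc.go`/`tc.find`:

```python
import re
import unittest

class FakePageError(Exception):
    """Hypothetical stand-in for twill's TwillAssertionError."""

# Canned response instead of a real HTTP fetch
PAGES = {"http://example.com/": "<h1>Hello, world</h1>"}

def find(url, regex):
    # Stand-in for tc.go(url) + tc.find(regex): raises when no match
    if not re.search(regex, PAGES.get(url, "")):
        raise FakePageError(regex)

class HomepageTest(unittest.TestCase):
    url = "http://example.com/"

    def assertFind(self, regex):
        # Same pattern as TwillCommon.find: the library exception becomes
        # a test failure with a readable message, not an error
        try:
            find(self.url, regex)
        except FakePageError:
            self.fail("No match to '%s' on %s" % (regex, self.url))

    def test_greeting(self):
        self.assertFind("Hello")

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(HomepageTest).run(result)
print(result.wasSuccessful())  # → True
```

Because `fail()` raises `failureException` (an `AssertionError`), a non-matching regex lands in `result.failures` rather than `result.errors`, which is exactly the distinction the runners above report.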