=========================
Writing and running tests
=========================

.. module:: django.test
   :synopsis: Testing tools for Django applications.

.. seealso::

    The :doc:`testing tutorial </intro/tutorial05>`, the :doc:`testing tools
    reference </topics/testing/tools>`, and the :doc:`advanced testing topics
    </topics/testing/advanced>`.

This document is split into two primary sections. First, we explain how to
write tests with Django. Then, we explain how to run them.

Writing tests
=============

Django's unit tests use a Python standard library module: :mod:`unittest`. This
module defines tests using a class-based approach.

.. admonition:: unittest2

    .. deprecated:: 1.7

    Python 2.7 introduced some major changes to the ``unittest`` library,
    adding some extremely useful features. To ensure that every Django project
    could benefit from these new features, Django used to ship with a copy of
    Python 2.7's ``unittest`` backported for Python 2.6 compatibility.

    Since Django no longer supports Python versions older than 2.7,
    ``django.utils.unittest`` is deprecated. Simply use ``unittest``.

.. _unittest2: https://pypi.python.org/pypi/unittest2

Here is an example which subclasses from :class:`django.test.TestCase`,
which is a subclass of :class:`unittest.TestCase` that runs each test inside a
transaction to provide isolation::

    from django.test import TestCase
    from myapp.models import Animal

    class AnimalTestCase(TestCase):
        def setUp(self):
            Animal.objects.create(name="lion", sound="roar")
            Animal.objects.create(name="cat", sound="meow")

        def test_animals_can_speak(self):
            """Animals that can speak are correctly identified"""
            lion = Animal.objects.get(name="lion")
            cat = Animal.objects.get(name="cat")
            self.assertEqual(lion.speak(), 'The lion says "roar"')
            self.assertEqual(cat.speak(), 'The cat says "meow"')
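
The ``Animal`` model itself isn't part of this document; the example above
merely assumes something along these lines in a hypothetical
``myapp/models.py``::

    from django.db import models

    class Animal(models.Model):
        # Hypothetical fields implied by the create() calls in setUp() above.
        name = models.CharField(max_length=100)
        sound = models.CharField(max_length=100)

        def speak(self):
            # Returns e.g. 'The lion says "roar"', matching the assertions above.
            return 'The %s says "%s"' % (self.name, self.sound)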

When you :ref:`run your tests <running-tests>`, the default behavior of the
test utility is to find all the test cases (that is, subclasses of
:class:`unittest.TestCase`) in any file whose name begins with ``test``,
automatically build a test suite out of those test cases, and run that suite.

.. versionchanged:: 1.6

    Previously, Django's default test runner only discovered tests in
    ``tests.py`` and ``models.py`` files within a Python package listed in
    :setting:`INSTALLED_APPS`.

For more details about :mod:`unittest`, see the Python documentation.

.. warning::

    If your tests rely on database access such as creating or querying models,
    be sure to create your test classes as subclasses of
    :class:`django.test.TestCase` rather than :class:`unittest.TestCase`.

    Using :class:`unittest.TestCase` avoids the cost of running each test in a
    transaction and flushing the database, but if your tests interact with
    the database their behavior will vary based on the order that the test
    runner executes them. This can lead to unit tests that pass when run in
    isolation but fail when run in a suite.
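
Conversely, plain :class:`unittest.TestCase` remains a fine choice for logic
that never touches the database. A minimal sketch, assuming a hypothetical
pure function ``myapp.utils.slugify_animal_name()``::

    import unittest

    from myapp.utils import slugify_animal_name  # hypothetical helper

    class SlugifyTests(unittest.TestCase):
        # No database access, so the cheaper unittest.TestCase is safe here.
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify_animal_name("sea lion"), "sea-lion")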

.. _running-tests:

Running tests
=============

Once you've written tests, run them using the :djadmin:`test` command of
your project's ``manage.py`` utility::

    $ ./manage.py test

Test discovery is based on the unittest module's :py:ref:`built-in test
discovery <unittest-test-discovery>`. By default, this will discover tests in
any file named ``test*.py`` under the current working directory.
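
For example, both test modules in a hypothetical layout like this would be
picked up automatically::

    animals/
        __init__.py
        models.py
        tests/
            __init__.py
            test_models.py
            test_views.py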

You can specify particular tests to run by supplying any number of "test
labels" to ``./manage.py test``. Each test label can be a full Python dotted
path to a package, module, ``TestCase`` subclass, or test method. For instance::

    # Run all the tests in the animals.tests module
    $ ./manage.py test animals.tests

    # Run all the tests found within the 'animals' package
    $ ./manage.py test animals

    # Run just one test case
    $ ./manage.py test animals.tests.AnimalTestCase

    # Run just one test method
    $ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak

You can also provide a path to a directory to discover tests below that
directory::

    $ ./manage.py test animals/

You can specify a custom filename pattern match using the ``-p`` (or
``--pattern``) option, if your test files are named differently from the
``test*.py`` pattern::

    $ ./manage.py test --pattern="tests_*.py"

.. versionchanged:: 1.6

    Previously, test labels were in the form ``applabel``,
    ``applabel.TestCase``, or ``applabel.TestCase.test_method``, rather than
    being true Python dotted paths, and tests could only be found within
    ``tests.py`` or ``models.py`` files within a Python package listed in
    :setting:`INSTALLED_APPS`. The ``--pattern`` option and file paths as test
    labels are new in 1.6.

If you press ``Ctrl-C`` while the tests are running, the test runner will
wait for the currently running test to complete and then exit gracefully.
During a graceful exit the test runner will output details of any test
failures, report on how many tests were run and how many errors and failures
were encountered, and destroy any test databases as usual. Thus pressing
``Ctrl-C`` can be very useful if you forget to pass the :djadminopt:`--failfast`
option, notice that some tests are unexpectedly failing, and want to get details
on the failures without waiting for the full test run to complete.

If you do not want to wait for the currently running test to finish, you
can press ``Ctrl-C`` a second time and the test run will halt immediately,
but not gracefully. No details of the tests run before the interruption will
be reported, and any test databases created by the run will not be destroyed.

.. admonition:: Test with warnings enabled

    It's a good idea to run your tests with Python warnings enabled:
    ``python -Wall manage.py test``. The ``-Wall`` flag tells Python to
    display deprecation warnings. Django, like many other Python libraries,
    uses these warnings to flag when features are going away. It also might
    flag areas in your code that aren't strictly wrong but could benefit
    from a better implementation.

.. _the-test-database:

The test database
-----------------

Tests that require a database (namely, model tests) will not use your "real"
(production) database. Separate, blank databases are created for the tests.

Regardless of whether the tests pass or fail, the test databases are destroyed
when all the tests have been executed.

By default the test databases get their names by prepending ``test_``
to the value of the :setting:`NAME` settings for the databases
defined in :setting:`DATABASES`. When using the SQLite database engine
the tests will by default use an in-memory database (i.e., the
database will be created in memory, bypassing the filesystem
entirely!). If you want to use a different database name, specify
:setting:`TEST_NAME` in the dictionary for any given database in
:setting:`DATABASES`.
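
For instance, a sketch of what that might look like in your settings file,
with a hypothetical database name::

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'myproject',
            # Used instead of the default 'test_myproject' when running tests.
            'TEST_NAME': 'myproject_test',
        },
    }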

Aside from using a separate database, the test runner will otherwise
use all of the same database settings you have in your settings file:
:setting:`ENGINE <DATABASE-ENGINE>`, :setting:`USER`, :setting:`HOST`, etc. The
test database is created by the user specified by :setting:`USER`, so you'll
need to make sure that the given user account has sufficient privileges to
create a new database on the system.

For fine-grained control over the character encoding of your test
database, use the :setting:`TEST_CHARSET` option. If you're using
MySQL, you can also use the :setting:`TEST_COLLATION` option to
control the particular collation used by the test database. See the
:doc:`settings documentation </ref/settings>` for details of these
advanced settings.
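
As a rough sketch, on MySQL these options sit alongside the other keys in the
database dictionary (the values here are illustrative)::

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'myproject',
            'TEST_CHARSET': 'utf8',               # encoding for the test database
            'TEST_COLLATION': 'utf8_general_ci',  # MySQL-only collation option
        },
    }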

.. admonition:: Finding data from your production database when running tests?

    If your code attempts to access the database when its modules are compiled,
    this will occur *before* the test database is set up, with potentially
    unexpected results. For example, if you have a database query in
    module-level code and a real database exists, production data could pollute
    your tests. *It is a bad idea to have such import-time database queries in
    your code* anyway - rewrite your code so that it doesn't do this.
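
    For instance, a hypothetical sketch of the problem and one way to fix it::

        from myapp.models import Animal

        # Bad: runs at import time, possibly against the real database.
        ALL_SOUNDS = list(Animal.objects.values_list('sound', flat=True))

        # Better: defer the query until it's actually needed, at call time.
        def get_all_sounds():
            return list(Animal.objects.values_list('sound', flat=True))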

    .. versionadded:: 1.7

        This also applies to customized implementations of
        :meth:`~django.apps.AppConfig.ready()`.

.. seealso::

    The :ref:`advanced multi-db testing topics <topics-testing-advanced-multidb>`.

.. _order-of-tests:

Order in which tests are executed
---------------------------------

In order to guarantee that all ``TestCase`` code starts with a clean database,
the Django test runner reorders tests in the following way:

* All :class:`~django.test.TestCase` subclasses are run first.

* Then, all other unit tests (including :class:`unittest.TestCase`,
  :class:`~django.test.SimpleTestCase` and
  :class:`~django.test.TransactionTestCase`) are run with no particular
  ordering guaranteed nor enforced among them.

* Then any other tests (e.g. doctests) that may alter the database without
  restoring it to its original state are run.

.. note::

    The new ordering of tests may reveal unexpected dependencies on test case
    ordering. For example, doctests that relied on state left in the database
    by a given :class:`~django.test.TransactionTestCase` test must be updated
    to be able to run independently.

Other test conditions
---------------------

Regardless of the value of the :setting:`DEBUG` setting in your configuration
file, all Django tests run with :setting:`DEBUG`\=False. This is to ensure that
the observed output of your code matches what will be seen in a production
setting.

Caches are not cleared after each test. Unlike databases, no separate "test
cache" is used, so running ``manage.py test fooapp`` in production can insert
data from the tests into the cache of your live system. This behavior
`may change`_ in the future.
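
Until then, one defensive option (a sketch, not something Django requires) is
to clear the cache yourself around tests that write to it::

    from django.core.cache import cache
    from django.test import TestCase

    class CachingTests(TestCase):
        def tearDown(self):
            # Remove anything the test wrote to the (shared) cache backend.
            cache.clear()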

.. _may change: https://code.djangoproject.com/ticket/11505

Understanding the test output
-----------------------------

When you run your tests, you'll see a number of messages as the test runner
prepares itself. You can control the level of detail of these messages with the
``verbosity`` option on the command line::

    Creating test database...
    Creating table myapp_animal
    Creating table myapp_mineral
    Loading 'initial_data' fixtures...
    No fixtures found.

This tells you that the test runner is creating a test database, as described
in the previous section.
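
For instance, to see more of these setup messages you can raise the verbosity
using ``--verbosity``, one of the standard options accepted by management
commands::

    $ ./manage.py test --verbosity=2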

Once the test database has been created, Django will run your tests.
If everything goes well, you'll see something like this::

    ----------------------------------------------------------------------
    Ran 22 tests in 0.221s

    OK

If there are test failures, however, you'll see full details about which tests
failed::

    ======================================================================
    FAIL: test_was_published_recently_with_future_poll (polls.tests.PollMethodTests)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/dev/mysite/polls/tests.py", line 16, in test_was_published_recently_with_future_poll
        self.assertEqual(future_poll.was_published_recently(), False)
    AssertionError: True != False

    ----------------------------------------------------------------------
    Ran 1 test in 0.003s

    FAILED (failures=1)

A full explanation of this error output is beyond the scope of this document,
but it's pretty intuitive. You can consult the documentation of Python's
:mod:`unittest` library for details.

Note that the return code for the test-runner script is 1 for any number of
failed and erroneous tests. If all the tests pass, the return code is 0. This
feature is useful if you're using the test-runner script in a shell script and
need to test for success or failure at that level.
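
For example, a shell wrapper might branch on that exit status like this (a
minimal sketch)::

    #!/bin/sh
    if ./manage.py test; then
        echo "Tests passed - deploying."
    else
        echo "Tests failed - aborting." >&2
        exit 1
    fi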

Speeding up the tests
---------------------

In recent versions of Django, the default password hasher is rather slow by
design. If during your tests you are authenticating many users, you may want
to use a custom settings file and set the :setting:`PASSWORD_HASHERS` setting
to a faster hashing algorithm::

    PASSWORD_HASHERS = (
        'django.contrib.auth.hashers.MD5PasswordHasher',
    )

If your fixtures contain passwords hashed with another algorithm, remember to
include that algorithm in :setting:`PASSWORD_HASHERS` as well.
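
One way to wire this up (assuming a hypothetical ``myproject/test_settings.py``
module containing the override) is to point the test run at it with the
standard ``--settings`` option::

    $ ./manage.py test --settings=myproject.test_settings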