Step-by-step guide: Implementing automated testing for Python code
Doctest
In this section, we will learn how to use doctest to test functions and modules in Python.
A doctest is a way to embed test cases within the documentation of a function or module in Python. The tests are written in the form of interactive Python sessions, and they are used to make sure that the code examples in the documentation are accurate.
Here is an example of a doctest for a simple Python function that calculates the area of a rectangle:
```python
def area(width, height):
    """
    Calculate the area of a rectangle.

    >>> area(5, 10)
    50
    >>> area(2, 3)
    6
    """
    return width * height
```
To run the doctests, you can use the doctest module. For example, you can run the tests for the area function like this:
```python
import doctest

doctest.testmod()
# TestResults(failed=0, attempted=2)
```
This will run all of the doctests in the current module and report any failures.
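To see what a failure looks like, here is a self-contained sketch with a deliberately wrong expected value. It drives doctest through its DocTestFinder/DocTestRunner API (rather than testmod) so that it runs outside a normal module; the failure report printed by the runner shows the failing example, the expected value, and the value actually obtained:

```python
import doctest

def broken_area(width, height):
    """
    Deliberately wrong expected value, to demonstrate a failure report.

    >>> broken_area(5, 10)
    51
    """
    return width * height

# Collect the examples from the docstring and run them. Passing globs
# explicitly keeps the snippet self-contained.
runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(
    broken_area, globs={"broken_area": broken_area}
):
    runner.run(test)

results = runner.summarize()
print(results)  # TestResults(failed=1, attempted=1)
```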
Step 1: Write your first doctest

Doctests are well suited for simple function testing. You are going to write your first test for the gaussian_model function from exercise 2, using the input and output values proposed in its signature.

Open the lib_background.py module containing the function. Then add a documentation string (also known as a docstring) to the gaussian_model function and write at least one test case.
```python
import doctest
import lib_background

doctest.testmod(lib_background)
# Expected result:
# TestResults(failed=0, attempted=3)
```
If everything is working correctly, the reported TestResults should show failed=0, indicating that all of the test cases in the docstring have passed.
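As a sketch of what this step might produce: the real gaussian_model lives in lib_background.py and comes from exercise 2, so the signature and formula below are only assumptions to illustrate where the doctest goes. The examples evaluate the function at its mean, where it returns the amplitude exactly, which keeps the expected values simple:

```python
import math

def gaussian_model(x, amplitude, mean, sigma):
    """
    Hypothetical stand-in for the exercise function; adapt the
    examples to the real signature from lib_background.py.

    >>> gaussian_model(0.0, 1.0, 0.0, 1.0)
    1.0
    >>> gaussian_model(1.0, 2.0, 1.0, 0.5)
    2.0
    """
    return amplitude * math.exp(-((x - mean) ** 2) / (2 * sigma**2))
```

Running doctest.testmod on the module (or python -m doctest, as shown in the next step) would then execute both examples.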
Step 2: Run tests in the terminal

To run the tests, open a terminal, navigate to the directory containing lib_background.py, and run the following command:

```shell
python -m doctest lib_background.py
```
This won’t display anything unless an example fails, in which case the failing example(s) and the cause(s) of the failure(s) are printed to stdout, and the final line of output is ***Test Failed*** N failures., where N is the number of examples that failed.
Run it with the -v switch instead:

```shell
python -m doctest lib_background.py -v
```

and a detailed report of all the examples tried is printed to standard output, along with assorted summaries at the end.
Pytest

pytest is a testing framework for Python. It is a powerful tool for discovering and running tests, and it has a number of useful features, such as the ability to rerun only the tests that failed during the last run, and support for running tests in parallel (via plugins such as pytest-xdist).
Here is an example of a simple function that adds two numbers:
```python
def add(a, b):
    return a + b
```
and how you might use pytest to test it:

```python
def test_addition():
    assert add(2, 3) == 5
    assert add(-2, 3) == 1
    assert add(2, -3) == -1
    assert add(0, 0) == 0
```
To run this example using pytest, save the code above (together with the add function, or an import for it) in a file named test_addition.py, and then run the pytest command from the command line:

```shell
pytest
```

pytest will discover the test file and run the tests contained in it. If any of the assert statements fail, pytest will print a message indicating which test failed and what the expected and actual values were.
Step 1: Use doctest with pytest

You can run any doctest via pytest using the following command:

```shell
pytest --doctest-modules
```
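If you want doctests to be collected on every pytest run, the same option can be set once in the project's pytest configuration instead of being typed on the command line; for example, in a pytest.ini at the project root:

```ini
[pytest]
addopts = --doctest-modules
```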
Step 2: Write the test function

Create a file called test_lib_peak.py. This will contain the test functions for your code.

In test_lib_peak.py, write test functions to check that the is_peak function is working properly. To do so, use a known tiny test matrix as the input image and test both true and false cases. Your code should contain at least two functions: test_is_peak__true_case and test_is_peak__false_case.
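A possible sketch of test_lib_peak.py is shown below. The real is_peak comes from your exercise code, so both its assumed signature (matrix, row, column) and the stand-in implementation here are guesses made only so the snippet runs on its own; in your file you would import is_peak from lib_peak instead of defining it:

```python
# Hypothetical stand-in: replace with "from lib_peak import is_peak".
def is_peak(image, row, col):
    # Assumption: a pixel is a peak if it is strictly greater than
    # all of its neighbours.
    value = image[row][col]
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            r, c = row + dr, col + dc
            if 0 <= r < len(image) and 0 <= c < len(image[0]):
                if image[r][c] >= value:
                    return False
    return True

# A tiny, known test matrix: a single bright pixel in the middle.
TEST_IMAGE = [
    [0, 0, 0],
    [0, 9, 0],
    [0, 0, 0],
]

def test_is_peak__true_case():
    # The centre pixel is larger than all of its neighbours.
    assert is_peak(TEST_IMAGE, 1, 1)

def test_is_peak__false_case():
    # A corner pixel of the same image is not a peak.
    assert not is_peak(TEST_IMAGE, 0, 0)
```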
Step 3: Run the tests

To run the tests, open a terminal, navigate to the directory containing test_lib_peak.py, and run the following command:

```shell
pytest test_lib_peak.py
```

If everything is working correctly, you should see the output 2 passed, indicating that all of the test cases in test_lib_peak.py ran successfully.
Step 4: Parametrize your tests

Instead of writing a dedicated function for each test case, we can refactor the code using the @pytest.mark.parametrize decorator and write a single function that tests multiple values.

For example, test_addition can be refactored as:

```python
import pytest

from test_addition import add  # assumes add is importable from where it is defined


@pytest.mark.parametrize(
    "input_a,input_b,expected", [(2, 3, 5), (-2, 3, 1), (2, -3, -1), (0, 0, 0)]
)
def test_addition__refactor(input_a, input_b, expected):
    assert add(input_a, input_b) == expected
```
Step 5: Handle exceptions

The pytest.raises helper allows you to test that a specific exception is raised when certain code is run. This can be useful for verifying that your code handles errors and exceptions as expected. Here’s an example:

```python
def test_add__int_and_str():
    with pytest.raises(TypeError):
        add(-1, "1")
```

Here, pytest.raises is used to test that a TypeError is raised when the add function is called with arguments of incompatible types (-1 and "1").
Now, you will add a new test function called test_is_peak__1d_array in the test_lib_peak.py file to check that the case of a 1D test matrix is handled correctly. This ensures that your code properly handles arrays of different dimensions and reacts appropriately to edge cases.
Hypothesis

Hypothesis is a Python library for property-based testing. Instead of hard-coding input values, you describe the kind of data your function accepts (using strategies) and the property that should hold for it, and Hypothesis automatically generates test cases for you. This can help you create more thorough and expressive tests, as well as reduce the amount of boilerplate code required to set up test cases.
For example, you can use Hypothesis to test the add function as follows:

```python
from hypothesis import example, given, strategies

from test_addition import add  # assumes add is importable from where it is defined


@given(x=strategies.integers(), y=strategies.integers())
@example(x=1, y=2)
@example(x=-1, y=-2)
def test_add__hypothesis(x, y):
    assert add(x, y) == x + y
```
In this example, the test_add__hypothesis function is decorated with the @given decorator and two @example decorators specifying particular input values. Hypothesis will generate a varied sample of x and y integers (not every possible combination), and will additionally run the test function with the specific pairs provided in the @example decorators.
Introduction to Mocks
Mocks are objects that mimic the behavior of real objects in controlled ways. They are often used in testing to replace the behavior of external dependencies, such as a database or a network connection, with a simplified version that is easier to control and predict.
Using the unittest.mock module
The mock library can be used to mock constants, functions, methods, properties, entire classes, class methods, static methods, etc.
For functions, for example, you will commonly need to specify a return value, check if they were called, and with what values:
```python
from unittest import mock

@mock.patch('myapp.app.get_first_name')
def test_function(self, mock_get_first_name):
    mock_get_first_name.return_value = 'Bat'
    ...
    mock_get_first_name.assert_called()
    mock_get_first_name.assert_called_once_with('baz')
```
A comprehensive guide to using the mock module can be found in the official unittest.mock documentation.
In exercise 1, you have developed a main function that uses functions to read an image. Here, you will use mocking to test this main function and ensure that it is working properly.
Hint

Use the @mock.patch decorator to mock the read_first_image function, and the return_value attribute to set the return value of the mocked function.

You can also use @patch("builtins.print") to mock the print function and check the mock_calls attribute to verify that the main function is printing the expected output.
Coverage report with pytest-cov
Once you have become proficient in using all the tools required to test a Python package, it’s important to determine if your code has been adequately tested.
To assist in this process, the pytest-cov plugin can be used. This plugin, built on top of the pytest framework, allows you to measure the coverage of your tests. It works by running your tests, collecting data on which lines of code are executed, and then generating a report on the percentage of your codebase that is covered by tests.
Using pytest-cov can be helpful for identifying areas of your code that are not being tested, as well as for ensuring that you have adequate test coverage overall. This can help you catch bugs and improve the reliability and maintainability of your code.
To use pytest-cov, run your tests with the --cov flag followed by the path to the module or package that you want to measure coverage for. For example, to measure coverage for a module lib_peak.py, you can use:

```shell
pytest --cov=lib_peak
```

This will generate a coverage report showing the percentage of statements covered by your tests.
You can also use the --cov-report flag to specify the format of the coverage report, such as html for an HTML report or xml for an XML report. For example, to generate an HTML coverage report for the lib_peak.py module, you can use:

```shell
pytest --cov=lib_peak --cov-report=html
```
And now?
After following the step-by-step guide and generating the code coverage report, it’s time for you to take the lead and apply the knowledge you’ve acquired to the rest of your code. This means identifying which parts of the code need to be tested and writing the appropriate tests for them. The following are a few things to keep in mind when writing tests:
Test coverage: Make sure you have a good understanding of the code and ensure that all relevant parts of the code are being tested.
Edge cases: Consider edge cases and test them to ensure that the code is robust and can handle unexpected inputs.
Readability: Make sure that your tests are easy to read and understand, so that other developers can quickly understand what the code is doing.
Maintainability: Keep your tests organized and maintainable, so that they can be easily updated and modified as the code evolves.