pytest Tech Webinar, 18 August 2020

Contact for questions or information about company courses:

Florian Bruhin
florian@bruhin.software
https://bruhin.software/
Twitter: @the_compiler

Upcoming open trainings:

  • 8 September 2020, CH Open Workshoptage, HSLU Rotkreuz
    pytest: Test Driven Development (not only) for Python

  • 1–3 February 2021, Python Academy, Leipzig + online
    Professional Testing with Python

  • Maybe soon? On-site company training
    Python, pytest, GUI applications with Python and Qt (PyQt/PySide), ...

Video recording:

    Download

Further resources:

A simple exception
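
The file under test is not shown in the notebook; a reconstruction of basic/test_traceback.py, inferred from the traceback below (the second, passing test is a guess), could look like this:

def divide(x, y):
    return x / y

def test_divide():
    assert divide(2, 0) == 0    # raises ZeroDivisionError before the comparison happens

def test_divide_nonzero():      # hypothetical second test that passes
    assert divide(4, 2) == 2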

In [3]:
!pytest basic/test_traceback.py
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 2 items                                                              

basic/test_traceback.py F.                                               [100%]

=================================== FAILURES ===================================
_________________________________ test_divide __________________________________

    def test_divide():
>       assert divide(2, 0) == 0

basic/test_traceback.py:5: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

x = 2, y = 0

    def divide(x, y):
>       return x / y
E       ZeroDivisionError: division by zero

basic/test_traceback.py:2: ZeroDivisionError
=========================== short test summary info ============================
FAILED basic/test_traceback.py::test_divide - ZeroDivisionError: division by ...
========================= 1 failed, 1 passed in 0.18s ==========================

Customized output for various data types
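
The file being run is not shown either; its contents can be reconstructed from the tracebacks below and demonstrate how pytest tailors assertion diffs to strings, lists, dicts, sets, and substring checks:

def test_eq_text():
    assert 'spam' == 'eggs'

def test_eq_similar_text():
    assert 'foo 1 bar' == 'foo 2 bar'

def test_eq_long_text():
    a = '1'*100 + 'a' + '2'*100
    b = '1'*100 + 'b' + '2'*100
    assert a == b

def test_eq_list():
    assert [0, 1, 2] == [0, 1, 3]

def test_eq_dict():
    assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}

def test_eq_set():
    assert {0, 10, 11, 12} == {0, 20, 21}

def test_eq_longer_list():
    assert [1, 2] == [1, 2, 3]

def test_not_in_text_single():
    text = 'single foo line'
    assert 'foo' not in text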

In [4]:
!pytest basic/failure_demo.py
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 8 items                                                              

basic/failure_demo.py FFFFFFFF                                           [100%]

=================================== FAILURES ===================================
_________________________________ test_eq_text _________________________________

    def test_eq_text():
>       assert 'spam' == 'eggs'
E       AssertionError: assert 'spam' == 'eggs'
E         - eggs
E         + spam

basic/failure_demo.py:4: AssertionError
_____________________________ test_eq_similar_text _____________________________

    def test_eq_similar_text():
>       assert 'foo 1 bar' == 'foo 2 bar'
E       AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
E         - foo 2 bar
E         ?     ^
E         + foo 1 bar
E         ?     ^

basic/failure_demo.py:7: AssertionError
______________________________ test_eq_long_text _______________________________

    def test_eq_long_text():
        a = '1'*100 + 'a' + '2'*100
        b = '1'*100 + 'b' + '2'*100
>       assert a == b
E       AssertionError: assert '111111111111...2222222222222' == '111111111111...2222222222222'
E         Skipping 90 identical leading characters in diff, use -v to show
E         Skipping 91 identical trailing characters in diff, use -v to show
E         - 1111111111b222222222
E         ?           ^
E         + 1111111111a222222222
E         ?           ^

basic/failure_demo.py:12: AssertionError
_________________________________ test_eq_list _________________________________

    def test_eq_list():
>       assert [0, 1, 2] == [0, 1, 3]
E       assert [0, 1, 2] == [0, 1, 3]
E         At index 2 diff: 2 != 3
E         Use -v to get the full diff

basic/failure_demo.py:15: AssertionError
_________________________________ test_eq_dict _________________________________

    def test_eq_dict():
>       assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
E       AssertionError: assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
E         Omitting 1 identical items, use -vv to show
E         Differing items:
E         {'b': 1} != {'b': 2}
E         Use -v to get the full diff

basic/failure_demo.py:18: AssertionError
_________________________________ test_eq_set __________________________________

    def test_eq_set():
>       assert {0, 10, 11, 12} == {0, 20, 21}
E       AssertionError: assert {0, 10, 11, 12} == {0, 20, 21}
E         Extra items in the left set:
E         10
E         11
E         12
E         Extra items in the right set:
E         20
E         21...
E         
E         ...Full output truncated (2 lines hidden), use '-vv' to show

basic/failure_demo.py:21: AssertionError
_____________________________ test_eq_longer_list ______________________________

    def test_eq_longer_list():
>       assert [1, 2] == [1, 2,3]
E       assert [1, 2] == [1, 2, 3]
E         Right contains one more item: 3
E         Use -v to get the full diff

basic/failure_demo.py:24: AssertionError
___________________________ test_not_in_text_single ____________________________

    def test_not_in_text_single():
        text = 'single foo line'
>       assert 'foo' not in text
E       AssertionError: assert 'foo' not in 'single foo line'
E         'foo' is contained here:
E           single foo line
E         ?        +++

basic/failure_demo.py:28: AssertionError
=========================== short test summary info ============================
FAILED basic/failure_demo.py::test_eq_text - AssertionError: assert 'spam' ==...
FAILED basic/failure_demo.py::test_eq_similar_text - AssertionError: assert '...
FAILED basic/failure_demo.py::test_eq_long_text - AssertionError: assert '111...
FAILED basic/failure_demo.py::test_eq_list - assert [0, 1, 2] == [0, 1, 3]
FAILED basic/failure_demo.py::test_eq_dict - AssertionError: assert {'a': 0, ...
FAILED basic/failure_demo.py::test_eq_set - AssertionError: assert {0, 10, 11...
FAILED basic/failure_demo.py::test_eq_longer_list - assert [1, 2] == [1, 2, 3]
FAILED basic/failure_demo.py::test_not_in_text_single - AssertionError: asser...
============================== 8 failed in 0.19s ===============================

Output capturing

In [5]:
%%writefile test_output.py
    
def test_passing():
    print("I'm a passing test")

def test_failing():
    print("I'm a failing test")
    assert False
Writing test_output.py

By default, pytest only shows the output of tests that fail:

In [6]:
!pytest test_output.py
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 2 items                                                              

test_output.py .F                                                        [100%]

=================================== FAILURES ===================================
_________________________________ test_failing _________________________________

    def test_failing():
        print("I'm a failing test")
>       assert False
E       assert False

test_output.py:7: AssertionError
----------------------------- Captured stdout call -----------------------------
I'm a failing test
=========================== short test summary info ============================
FAILED test_output.py::test_failing - assert False
========================= 1 failed, 1 passed in 0.14s ==========================

With -s, all output is shown live while the tests run, without being modified by pytest:

In [7]:
!pytest -s test_output.py
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 2 items                                                              

test_output.py I'm a passing test
.I'm a failing test
F

=================================== FAILURES ===================================
_________________________________ test_failing _________________________________

    def test_failing():
        print("I'm a failing test")
>       assert False
E       assert False

test_output.py:7: AssertionError
=========================== short test summary info ============================
FAILED test_output.py::test_failing - assert False
========================= 1 failed, 1 passed in 0.13s ==========================

Marking

Custom marks

  • Mark tests with @pytest.mark.codec_x etc.
  • Register the marks in pytest.ini under markers = ...:
[pytest]
markers =
    codec_x: tests codec X
    codec_y: tests codec Y
  • Filter tests at run time, e.g. with -m codec_x, -m "codec_x or codec_y", -m "not slow", etc. (a sketch of a test file using these marks follows below)
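
A possible sketch of marking/test_marking.py (only test_codec_x appears in the output below; the second test and the assert bodies are assumptions):

import pytest

@pytest.mark.codec_x
def test_codec_x():
    assert True   # placeholder for a codec-X-specific check

@pytest.mark.codec_y
def test_codec_y():
    assert True   # placeholder for a codec-Y-specific check
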
In [10]:
!pytest marking/test_marking.py -v -m codec_x
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/student/code/.hypothesis/examples')
rootdir: /home/student/code/marking, configfile: pytest.ini
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 2 items / 1 deselected / 1 selected                                  

marking/test_marking.py::test_codec_x PASSED                             [100%]

======================= 1 passed, 1 deselected in 0.02s ========================

Skip/xfail

In [11]:
%%writefile test_marking.py
import pytest

@pytest.mark.skip(reason="Should not run")
def test_skipped():
    raise Exception
    
@pytest.mark.xfail(reason="Broken, see JIRA-1234")
def test_xfailing():
    raise Exception
Writing test_marking.py
In [13]:
!pytest test_marking.py -rsx
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 2 items                                                              

test_marking.py sx                                                       [100%]

=========================== short test summary info ============================
SKIPPED [1] test_marking.py:3: Should not run
XFAIL test_marking.py::test_xfailing
  Broken, see JIRA-1234
======================== 1 skipped, 1 xfailed in 0.04s =========================

pytest.skip as a function

In [15]:
%%writefile test_skipping_func.py
import pytest

def test_skipped():
    # ...
    pytest.skip("Server not reachable")
Writing test_skipping_func.py
In [16]:
!pytest test_skipping_func.py
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 1 item                                                               

test_skipping_func.py s                                                  [100%]

============================== 1 skipped in 0.03s ==============================

Parametrization
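
The parametrized tests themselves are not shown in the notebook; reconstructed from the tracebacks below, basic/test_parametrization.py looks roughly like this:

import pytest

@pytest.mark.parametrize("arg1", [2, 4, 7])
def test_iseven(arg1):
    assert arg1 % 2 == 0

@pytest.mark.parametrize("arg1, arg2", [(1, 1), (2, 3), (3, 4)])
def test_successor(arg1, arg2):
    assert arg2 == arg1 + 1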

In [17]:
!pytest basic/test_parametrization.py -v
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/student/code/.hypothesis/examples')
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 6 items                                                              

basic/test_parametrization.py::test_iseven[2] PASSED                     [ 16%]
basic/test_parametrization.py::test_iseven[4] PASSED                     [ 33%]
basic/test_parametrization.py::test_iseven[7] FAILED                     [ 50%]
basic/test_parametrization.py::test_successor[1-1] FAILED                [ 66%]
basic/test_parametrization.py::test_successor[2-3] PASSED                [ 83%]
basic/test_parametrization.py::test_successor[3-4] PASSED                [100%]

=================================== FAILURES ===================================
________________________________ test_iseven[7] ________________________________

arg1 = 7

    @pytest.mark.parametrize("arg1", [2, 4, 7])
    def test_iseven(arg1):
>       assert arg1 % 2 == 0
E       assert 1 == 0
E         +1
E         -0

basic/test_parametrization.py:5: AssertionError
_____________________________ test_successor[1-1] ______________________________

arg1 = 1, arg2 = 1

    @pytest.mark.parametrize("arg1, arg2", [(1, 1), (2, 3), (3, 4)])
    def test_successor(arg1, arg2):
>       assert arg2 == arg1 + 1
E       assert 1 == 2
E         +1
E         -2

basic/test_parametrization.py:9: AssertionError
=========================== short test summary info ============================
FAILED basic/test_parametrization.py::test_iseven[7] - assert 1 == 0
FAILED basic/test_parametrization.py::test_successor[1-1] - assert 1 == 2
========================= 2 failed, 4 passed in 0.18s ==========================

Fixtures

Two fixtures, and pytest.skip inside a fixture

In [19]:
%pycat fixtures/test_fixture.py
import pytest

@pytest.fixture
def somevalue():
    pytest.skip("Answer not available")
    return 42

@pytest.fixture
def othervalue():
    return 23

def test_function(somevalue, othervalue):
    assert somevalue == 42
    assert othervalue == 23
In [18]:
!pytest fixtures/test_fixture.py --setup-show -v
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/student/code/.hypothesis/examples')
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 1 item                                                               

fixtures/test_fixture.py::test_function 
        SETUP    F somevalueSKIPPED
        TEARDOWN F somevalue

============================== 1 skipped in 0.04s ==============================

A more complex example

With a session-scoped db fixture and a function-scoped transaction fixture which, thanks to autouse=True, is automatically applied to every test:

In [21]:
%%writefile test_fixtures.py

import pytest

class Database:
    
    def __init__(self):
        print("creating db")
        
    def begin(self):
        print("beginning transaction")

    def rollback(self):
        print("finishing transaction")
        

@pytest.fixture(scope="session")
def db():
    return Database()

@pytest.fixture(autouse=True)
def transaction(db):
    db.begin()
    yield
    db.rollback()
    
def test_db(db):
    print("in test 1")
    
def test_db_2(db):
    print("in test 2")
Writing test_fixtures.py
In [23]:
!pytest test_fixtures.py -v -s
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/student/code/.hypothesis/examples')
rootdir: /home/student/code
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 2 items                                                              

test_fixtures.py::test_db creating db
beginning transaction
in test 1
PASSEDfinishing transaction

test_fixtures.py::test_db_2 beginning transaction
in test 2
PASSEDfinishing transaction


============================== 2 passed in 0.02s ===============================

Plugins

In [24]:
%cd hooks/reporting
/home/student/code/hooks/reporting
In [7]:
%pycat conftest.py
def pytest_report_header():
    return ["extrainfo: line 1"]

def pytest_terminal_summary(terminalreporter):
    if terminalreporter.verbosity >= 1:
        terminalreporter.section("my special section")
        terminalreporter.line("report something here")
In [26]:
!pytest -v
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/student/code/hooks/reporting/.hypothesis/examples')
extrainfo: line 1
rootdir: /home/student/code/hooks/reporting
plugins: forked-1.3.0, cov-2.10.1, hypothesis-5.26.0, xdist-2.0.0, instafail-0.4.2
collected 0 items                                                              

============================== my special section ==============================
report something here
============================ no tests ran in 0.03s =============================
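
The hooks above live in a local conftest.py and therefore only apply to this project. As a rough sketch (the module and package names here are made up), the same hooks could be shipped as an installable plugin by moving them into a module and registering it via the pytest11 entry point:

# pytest_reporting_plugin.py -- same hook implementations as in conftest.py
def pytest_report_header():
    return ["extrainfo: line 1"]

def pytest_terminal_summary(terminalreporter):
    if terminalreporter.verbosity >= 1:
        terminalreporter.section("my special section")
        terminalreporter.line("report something here")

# setup.py (sketch) -- the "pytest11" entry point group makes pytest load the plugin automatically once installed
from setuptools import setup

setup(
    name="pytest-reporting-demo",
    py_modules=["pytest_reporting_plugin"],
    entry_points={"pytest11": ["reporting_demo = pytest_reporting_plugin"]},
)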