Organising Python Projects: Part III
Requirements
Testing framework
Pytest configuration
Testing the application
Coverage
Coverage configuration
Continuous Integration (CI)
Badge
Conclusion
BONUS:

Photo by Gayatri Malhotra on Unsplash

Whether you’re a seasoned developer or just getting started with 🐍 Python, it’s important to know how to build robust and maintainable projects. This tutorial will guide you through the process of setting up a Python project using some of the most popular and effective tools in the industry. You’ll learn how to use GitHub and GitHub Actions for version control and continuous integration, as well as other tools for testing, documentation, packaging and distribution. The tutorial is inspired by resources such as Hypermodern Python and Best Practices for a new Python project. However, this is not the only way to do things and you might have different preferences or opinions. The tutorial is meant to be beginner-friendly but also covers some advanced topics. In each section, you’ll automate some tasks and add badges to your project to show your progress and achievements.

The repository for this series can be found at github.com/johschmidt42/python-project-johannes

  • OS: Linux, Unix, macOS, Windows (WSL2 with e.g. Ubuntu 20.04 LTS)
  • Tools: python3.10, bash, git, tree
  • Version Control System (VCS) Host: GitHub
  • Continuous Integration (CI) Tool: GitHub Actions

It is expected that you are familiar with the version control system (VCS) git. If not, here’s a refresher for you: Introduction to Git

Commits will be based on best practices for git commits & Conventional Commits. There is a Conventional Commits plugin for PyCharm and a VSCode extension that help you write commits in this format.

Overview

Structure

  • Testing framework (pytest)
  • Pytest configuration (pytest.ini_options)
  • Testing the application (fastAPI, httpx)
  • Coverage (pytest-cov)
  • Coverage configuration (coverage.report)
  • CI (test.yml)
  • Badge (Testing)
  • Bonus (Report coverage in README.md)

Testing your code is a crucial part of software development. It helps you make sure that your code works as expected. You can test your code or application manually or use a testing framework to automate the process. Automated tests can be of different types, such as unit tests, integration tests, end-to-end tests, penetration tests, etc. In this tutorial, we’ll focus on writing a simple unit test for the single function in our project. This will show that our codebase is well tested and reliable, which is a basic requirement for any proper project.

Python has several testing frameworks to choose from, such as the built-in standard library unittest. However, this module has some drawbacks, such as requiring boilerplate code, class-based tests and specific assert methods. A better alternative is pytest, which is a popular and powerful testing framework with many plugins. If you are not familiar with pytest, you should read this introductory tutorial before you continue, because we’ll write a simple test without explaining much of the basics.

So let’s start by creating a new branch: feat/unit-tests

In our app src/example_app we only have two files that can be tested: __init__.py and app.py. The __init__.py file contains just the version, and app.py contains our fastAPI application with the GET pokemon endpoint. We don’t need to test the __init__.py file because it only contains the version and it will be executed when we import app.py or any other file from our app.
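As a quick reminder, app.py looks roughly like the sketch below. This is a simplified reconstruction; the endpoint path, port and implementation details are assumptions, the actual file lives in the repository:

# src/example_app/app.py — simplified sketch, details are assumptions
import httpx
import uvicorn
from fastapi import FastAPI

app = FastAPI()


@app.get("/pokemon/{name}")
async def get_pokemon(name: str) -> dict:
    # forward the request to the public PokeAPI and return its JSON body
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://pokeapi.co/api/v2/pokemon/{name}")
    return response.json()


if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)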

We can create a tests folder in the project’s root and add the test file test_app.py so that it looks like this:

.
...
├── src
│   └── example_app
│       ├── __init__.py
│       └── app.py
└── tests
    └── test_app.py

Before we add a test function with pytest, we need to install the testing framework first and add some configuration to make our lives a bit easier:

Since the default visual output in the terminal leaves some room for improvement, I like to use the plugin pytest-sugar. This is completely optional, but if you like the visuals, give it a try. We install these dependencies to a new group that we call test. Again, as explained in the last part (part II), this is to separate app and dev dependencies.
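A sketch of the install command, assuming Poetry ≥ 1.2 dependency groups:

> poetry add --group test pytest pytest-sugar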

Because pytest may not know where our tests are located, we can add this information to the pyproject.toml:

# pyproject.toml
...
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-p no:cacheprovider" # deactivating pytest caching.

Here addopts stands for “add options” or “additional options”, and the value -p no:cacheprovider tells pytest not to cache runs. Alternatively, we can create a pytest.ini and add these lines there.
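For reference, the equivalent pytest.ini would look roughly like this (a sketch; we stick with pyproject.toml in this tutorial):

# pytest.ini
[pytest]
testpaths = tests
addopts = -p no:cacheprovider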

Let’s continue by adding a test for the fastAPI endpoint that we created in app.py. Because we use httpx, we need to mock the response from the HTTP call (https://pokeapi.co/api). We could use monkeypatch or unittest.mock to change the behaviour of some functions or classes in httpx, but there already exists a plugin that we can use: respx

Mock HTTPX with awesome request patterns and response side effects.

Additionally, because fastAPI is an ASGI and not a WSGI application, we need to write an async test, for which we can use the pytest plugin pytest-asyncio together with trio. Don’t worry if these are new to you, they are just libraries for async Python and you don’t need to know what they do.

> poetry add --group test respx pytest-asyncio trio

Let’s create our test in test_app.py:
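Below is a minimal sketch of such a test, assuming a GET /pokemon/{name} endpoint that calls the PokeAPI internally; the exact file in the repository may differ (for example in the marker used and the httpx client setup, depending on your pytest-asyncio and httpx versions):

# tests/test_app.py — sketch; endpoint path and mocked URL are assumptions
import pytest
from httpx import ASGITransport, AsyncClient, Response

from example_app.app import app


@pytest.mark.asyncio
async def test_get_pokemon(respx_mock) -> None:
    # mock the outgoing HTTP call that app.py makes to the PokeAPI
    expected_response: dict = {"name": "squirtle", "id": 7}
    respx_mock.get("https://pokeapi.co/api/v2/pokemon/squirtle").mock(
        return_value=Response(200, json=expected_response)
    )

    # call our own endpoint through the ASGI app and compare the result
    async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
        response = await client.get("/pokemon/squirtle")

    assert response.status_code == 200
    assert response.json() == expected_response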

I won’t go into the details of how to create unit tests with pytest, because this topic could cover a whole series of tutorials! But to summarise, I created an async test called test_get_pokemon in which the response will be the expected_response because we’re using the respx_mock fixture. The endpoint of our fastAPI application is called and the result is compared to the expected result. If you want to find out more about how to test with fastAPI and httpx, check out the official documentation: Testing in fastAPI

And if you have async functions and don’t know how to deal with them, take a look at: Testing with async functions in fastAPI

Assuming that you installed your application with poetry install, we can now run pytest with

> pytest
Running all our tests — Image by author

and pytest knows in which directory it needs to look for test files!

To make our linters happy, we should also run them on the newly created file. For this, we need to modify the command lint-mypy so that mypy also covers files in the tests directory (previously only src):

# Makefile
...
lint-mypy:
	@mypy .
...

Finally, we can now run our formatters and linters before committing:

> make format
> make lint
Running formatters and linters — Image by author

The code coverage in a project is a good indicator of how much of the code is covered by unit tests. Hence, code coverage is a useful metric (though not always) for checking whether a particular codebase is well tested and reliable.

We can check our code coverage with the coverage module. It creates a coverage report and gives information about the lines that we missed with our unit tests. We can install it via the pytest plugin pytest-cov:

> poetry add --group test pytest-cov

We can run the coverage module through pytest:

> pytest --cov=src --cov-report term-missing --cov-report=html

To only check the coverage of the src directory, we add the flag --cov=src. We want the report to be displayed in the terminal (--cov-report term-missing) and stored in an HTML file (--cov-report=html).

Coverage report terminal — Image by author

We see that a coverage HTML report has been created in the directory htmlcov, in which we find an index.html.

.
...
├── index.html
├── keybd_closed.png
├── keybd_open.png
├── status.json
└── style.css

Opening it in a browser allows us to visually see the lines that we covered with our tests:

Coverage report HTML (overview) — Image by author

Clicking on the link src/example_app/app.py, we see a detailed view of what our unit tests covered in the file and, more importantly, which lines they missed:

Coverage report HTML (detailed) — Image by author

We notice that the code under the if __name__ == "__main__": line is included in our coverage report. We can exclude this by setting the right flag when running pytest, or better, add this configuration to our pyproject.toml:

# pyproject.toml
...
[tool.coverage.report]
exclude_lines = [
'if __name__ == "__main__":'
]

The lines after the if __name__ == "__main__": are now excluded*.

*It probably makes sense to also exclude other common lines, such as the following (see the sketch after this list):

  • def __repr__
  • def __str__
  • raise NotImplementedError
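An extended configuration could then look roughly like this (a sketch; adjust the patterns to your codebase):

# pyproject.toml
...
[tool.coverage.report]
exclude_lines = [
    'if __name__ == "__main__":',
    "def __repr__",
    "def __str__",
    "raise NotImplementedError",
]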

If we run pytest with the coverage module again

> pytest --cov=src --cov-report term-missing --cov-report=html
Coverage report HTML (excluded lines) — Image by author

the last lines are now excluded, as expected.

We have covered the basics of the coverage module, but there are more features that you can explore. You can read the official documentation to learn more about the options.

Let’s add these commands (pytest, coverage) to our Makefile, the same way we did in Part II, so that we don’t have to remember them. Additionally, we add a command that uses the --cov-fail-under=80 flag. This tells pytest to fail if the total coverage is lower than 80 %. We’ll use it later in the CI part of this tutorial. Because the coverage report creates some files and directories within the project, we should also add a command that removes these for us (clean-up):

# Makefile
...
unit-tests:
	@pytest
unit-tests-cov:
	@pytest --cov=src --cov-report term-missing --cov-report=html
unit-tests-cov-fail:
	@pytest --cov=src --cov-report term-missing --cov-report=html --cov-fail-under=80
clean-cov:
	@rm -rf .coverage
	@rm -rf htmlcov
...

And now we can invoke these with

> make unit-tests
> make unit-tests-cov

and clean up the created files with

> make clean-cov

Once more, we use the software development practice CI to make sure that nothing is broken every time we commit to our default branch main.

Up until now, we were able to run our tests locally. So let us create our second workflow that will run on a server from GitHub! We have the option of using codecov.io together with the codecov-action, OR we can create the report in the Pull Request (PR) itself with a pytest-comment action. I’ll choose the second option for simplicity.

We can either create a new workflow that runs in parallel to our linter lint.yml (faster) or have one workflow that runs the linters first and then the testing job (more efficient). This is a design choice that depends on the project’s needs; both options have pros and cons. For this tutorial, I’ll create a separate workflow (test.yml). But before we do that, we need to update our command in the Makefile so that we create a pytest.xml and a pytest-coverage.txt, which are needed for the pytest-comment action:

# Makefile
...
unit-tests-cov-fail:
	@pytest --cov=src --cov-report term-missing --cov-report=html --cov-fail-under=80 --junitxml=pytest.xml | tee pytest-coverage.txt
clean-cov:
	@rm -rf .coverage
	@rm -rf htmlcov
	@rm -rf pytest.xml
	@rm -rf pytest-coverage.txt
...

Now we can write our workflow test.yml:
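A sketch of what test.yml could look like is shown below; the action versions, Python version and the pytest-coverage-comment inputs are assumptions, so check the repository for the exact file:

# .github/workflows/test.yml — sketch
name: Testing

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.head_ref }}

      - name: Install poetry
        run: pipx install poetry

      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
          cache: "poetry"

      - name: Install dependencies
        run: poetry install --with test

      - name: Run unit tests
        run: poetry run make unit-tests-cov-fail

      - name: Pytest coverage comment
        if: ${{ github.event_name == 'pull_request' }}
        uses: MishaKav/pytest-coverage-comment@main
        with:
          pytest-coverage-path: ./pytest-coverage.txt
          junitxml-path: ./pytest.xml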

Let’s break it down to make sure we understand each part. GitHub Actions workflows have to be created in the .github/workflows directory of the repository as .yaml or .yml files. If you’re seeing these for the first time, you can check them out here to better understand them. In the upper part of the file, we give the workflow a name (name: Testing) and define on which signals/events this workflow should be started (on: ...). Here, we want it to run when new commits come into a Pull Request targeting the main branch or when commits are pushed to the main branch directly. The job runs in an ubuntu-latest (runs-on) environment and executes the following steps:

  • checkout the repository using the branch name that is stored in the default environment variable ${{ github.head_ref }}. GitHub Action: checkout@v3
  • install Poetry with pipx because it is pre-installed on all GitHub runners. If you have a self-hosted runner in e.g. Azure, you would have to install it yourself or use an existing GitHub Action that does it for you.
  • set up the Python environment and cache the virtualenv based on the content of the poetry.lock file. GitHub Action: setup-python@v4
  • install the application & its requirements together with the test dependencies that are needed to run the tests with pytest: poetry install --with test
  • run the tests with the make command: poetry run make unit-tests-cov-fail. Please note that running the tools is only possible within the virtualenv, which we can access through poetry run.
  • use a GitHub Action that allows us to automatically create a comment in the PR with the coverage report. GitHub Action: pytest-coverage-comment@main

When we open a PR targeting the main branch, the CI pipeline will run and we’ll see a comment like this in our PR:

Pytest coverage report in PR comment — Image by author

It created a small badge with the total coverage percentage (81%) and linked the tested files with URLs. With another commit in the same feature branch (PR), the same comment for the coverage report is overwritten by default.

To display the status of our new CI pipeline on the homepage of our repository, we can add a badge to the README.md file.

We can retrieve the badge when we click on a workflow run:

Create a status badge from the workflow file on GitHub — Image by author
Copy the badge markdown — Image by author

and select the main branch. The badge markdown can be copied and added to the README.md:
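It typically looks something like this (the exact markdown comes from the dialog shown above):

[![Testing](https://github.com/johschmidt42/python-project-johannes/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/johschmidt42/python-project-johannes/actions/workflows/test.yml)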

Our landing page on GitHub now looks like this ❤:

Second badge in README.md: Testing — Image by author

If you are interested in how this badge reflects the latest status of the pipeline run on the main branch, you can check out the statuses API on GitHub.
