├── pdm
│   ├── code
│   │   └── packaging-tutorial
│   │       ├── tests
│   │       │   ├── __init__.py
│   │       │   └── test_packaging_tutorial.py
│   │       ├── README.md
│   │       ├── src
│   │       │   └── packaging_tutorial
│   │       │       ├── __init__.py
│   │       │       ├── validated_pycon.py
│   │       │       ├── pycon.py
│   │       │       └── validated_pycon_with_config.py
│   │       ├── .pre-commit-config.yaml
│   │       ├── scripts
│   │       │   └── get_charlas.py
│   │       ├── pyproject.toml
│   │       ├── .github
│   │       │   └── workflows
│   │       │       └── publish.yml
│   │       └── .gitignore
│   └── tutorial
│       ├── toc.md
│       ├── 01-introduction
│       │   ├── 01-workshop-introduction.md
│       │   ├── 02-pip-internals.md
│       │   ├── 0x-bonus-scripts.md
│       │   ├── 03-our-first-package.md
│       │   ├── 05-virtual-environments.md
│       │   └── 04-dependencies.md
│       ├── 04-managing-your-code
│       │   ├── 15-using-github.md
│       │   ├── 18-summary.md
│       │   ├── 13-using-git.md
│       │   ├── 14-pre-commit.md
│       │   ├── 16-github-workflows.md
│       │   └── 17-publishing-with-github.md
│       ├── 03-building-and-publishing-packages
│       │   ├── 12-publishing-packages.md
│       │   └── 11-building-wheels.md
│       └── 02-more-about-dependencies
│           ├── 10-lock-files.md
│           ├── 09-static-code-checkers.md
│           ├── 07-package-extras.md
│           ├── 08-dev-dependencies.md
│           └── 06-unit-tests.md
├── uv
│   ├── code
│   │   └── packaging-tutorial
│   │       ├── tests
│   │       │   ├── __init__.py
│   │       │   └── test_packaging_tutorial.py
│   │       ├── README.md
│   │       ├── src
│   │       │   └── packaging_tutorial
│   │       │       ├── __init__.py
│   │       │       ├── validated_pycon.py
│   │       │       ├── pycon.py
│   │       │       └── validated_pycon_with_config.py
│   │       ├── .pre-commit-config.yaml
│   │       ├── scripts
│   │       │   └── get_charlas.py
│   │       ├── .github
│   │       │   └── workflows
│   │       │       └── publish.yml
│   │       ├── pyproject.toml
│   │       ├── .gitignore
│   │       └── requirements.txt
│   └── tutorial
│       ├── 01-introduction
│       │   ├── 01-workshop-introduction.md
│       │   ├── 02-pip-internals.md
│       │   ├── 0x-bonus-scripts.md
│       │   ├── 03-our-first-package.md
│       │   ├── 05-virtual-environments.md
│       │   └── 04-dependencies.md
│       ├── 04-managing-your-code
│       │   ├── 15-using-github.md
│       │   ├── 18-summary.md
│       │   ├── 13-using-git.md
│       │   ├── 14-pre-commit.md
│       │   ├── 16-github-workflows.md
│       │   └── 17-publishing-with-github.md
│       ├── 03-building-and-publishing-packages
│       │   ├── 12-publishing-packages.md
│       │   └── 11-building-wheels.md
│       └── 02-more-about-dependencies
│           ├── 10-lock-files.md
│           ├── 09-static-code-checkers.md
│           ├── 07-package-extras.md
│           ├── 06-unit-tests.md
│           └── 08-dev-dependencies.md
├── assets
│   ├── pip.png
│   ├── pypi.png
│   ├── pip.afphoto
│   ├── pypi.afphoto
│   ├── post_it_pink.svg
│   └── post_it_yellow.svg
└── README.md
/pdm/code/packaging-tutorial/tests/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/pdm/tutorial/toc.md:
--------------------------------------------------------------------------------
1 | # Table of contents
2 |
3 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/tests/__init__.py:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/README.md:
--------------------------------------------------------------------------------
1 | # the-name-of-the-package
2 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/README.md:
--------------------------------------------------------------------------------
1 | # the-name-of-the-package
2 |
--------------------------------------------------------------------------------
/assets/pip.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yngvem/pycon25-tutorial/HEAD/assets/pip.png
--------------------------------------------------------------------------------
/assets/pypi.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yngvem/pycon25-tutorial/HEAD/assets/pypi.png
--------------------------------------------------------------------------------
/assets/pip.afphoto:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yngvem/pycon25-tutorial/HEAD/assets/pip.afphoto
--------------------------------------------------------------------------------
/assets/pypi.afphoto:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/yngvem/pycon25-tutorial/HEAD/assets/pypi.afphoto
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/src/packaging_tutorial/__init__.py:
--------------------------------------------------------------------------------
1 | import importlib.metadata
2 |
3 | __version__ = importlib.metadata.version(__name__)  # read the version from the installed package's metadata
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/src/packaging_tutorial/__init__.py:
--------------------------------------------------------------------------------
1 | import importlib.metadata
2 |
3 | __version__ = importlib.metadata.version(__name__)  # read the version from the installed package's metadata
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
1 | repos:
2 |   - repo: https://github.com/pre-commit/pre-commit-hooks
3 |     rev: v3.2.0
4 |     hooks:
5 |       - id: trailing-whitespace
6 |       - id: end-of-file-fixer
7 |   - repo: https://github.com/astral-sh/ruff-pre-commit
8 |     rev: v0.11.6
9 |     hooks:
10 |       - id: ruff
11 |         args: [ --fix ]
12 |       - id: ruff-format
13 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
1 | repos:
2 |   - repo: https://github.com/pre-commit/pre-commit-hooks
3 |     rev: v3.2.0
4 |     hooks:
5 |       - id: trailing-whitespace
6 |       - id: end-of-file-fixer
7 |   - repo: https://github.com/astral-sh/ruff-pre-commit
8 |     rev: v0.11.6
9 |     hooks:
10 |       - id: ruff
11 |         args: [ --fix ]
12 |       - id: ruff-format
13 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/scripts/get_charlas.py:
--------------------------------------------------------------------------------
1 | import packaging_tutorial.pycon as pycon
2 |
3 | try:
4 |     from rich import print  # prefer rich's pretty-printing when it's installed
5 | except ImportError:
6 |     pass  # fall back to the builtin print
7 |
8 |
9 | sessions = pycon.get_sessions()
10 | charlas = (session for session in sessions if session.kind == "charla")
11 | print("Charlas:")
12 | print("========\n")
13 | for charla in charlas:
14 |     print(charla.name, charla.start)
15 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/scripts/get_charlas.py:
--------------------------------------------------------------------------------
1 | import packaging_tutorial.pycon as pycon
2 |
3 | try:
4 |     from rich import print  # prefer rich's pretty-printing when it's installed
5 | except ImportError:
6 |     pass  # fall back to the builtin print
7 |
8 |
9 | sessions = pycon.get_sessions()
10 | charlas = (session for session in sessions if session.kind == "charla")
11 | print("Charlas:")
12 | print("========\n")
13 | for charla in charlas:
14 |     print(charla.name, charla.start)
15 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "packaging-tutorial"
3 | description = ""
4 | authors = [
5 | {name = "Yngve Mardal Moe"},
6 | {name = "Marie Roald"},
7 | ]
8 | dependencies = [
9 | "httpx>=0.28.1",
10 | ]
11 | requires-python = ">=3.11"
12 | readme = "README.md"
13 | license = {text = "MIT"}
14 | dynamic = ["version"]
15 |
16 | [project.optional-dependencies]
17 | rich = [
18 | "rich>=14.0.0",
19 | ]
20 |
21 | [dependency-groups]
22 | dev = [
23 | "ruff>=0.11.5",
24 | "pytest>=8.3.5",
25 | "pyright>=1.1.399",
26 | ]
27 |
28 | [build-system]
29 | requires = ["setuptools", "setuptools-scm"]
30 | build-backend = "setuptools.build_meta"
31 |
32 | [tool.ruff.lint]
33 | select = ["I"]
34 |
35 | [tool.setuptools_scm]
36 | root = "../../.."
37 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
1 | name: release
2 |
3 | on:
4 |   push:
5 |     tags:
6 |       - "v*.*.*"
7 |
8 | permissions:
9 |   id-token: write
10 |
11 | jobs:
12 |   build-and-publish:
13 |     runs-on: ubuntu-latest
14 |
15 |     steps:
16 |       - name: Checkout code
17 |         uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
18 |         with:
19 |           persist-credentials: false
20 |       - name: Setup uv
21 |         uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca
22 |         with:
23 |           python-version: 3.13
24 |           enable-cache: true
25 |       - name: build package
26 |         run: uv build
27 |       - name: Publish package distributions to PyPI
28 |         uses: pypa/gh-action-pypi-publish@76f52bc884231f62b9a034ebfe128415bbaabdfc
29 |         with:
30 |           repository-url: https://test.pypi.org/legacy/
31 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "packaging-tutorial"
3 | description = ""
4 | authors = [
5 | {name = "Yngve Mardal Moe"},
6 | {name = "Marie Roald"},
7 | ]
8 | dependencies = [
9 | "httpx>=0.28.1",
10 | "pydantic>=2.11.4",
11 | "pydantic-settings>=2.9.1",
12 | ]
13 | requires-python = ">=3.11"
14 | readme = "README.md"
15 | license = "MIT"
16 | dynamic = ["version"]
17 |
18 | [project.optional-dependencies]
19 | rich = [
20 | # "rich>=14.0.0",
21 | ]
22 | dev = [
23 | "pytest>=8.3.5",
24 | "ruff>=0.11.8",
25 | ]
26 |
27 | [dependency-groups]
28 | dev = [
29 | # "ruff>=0.11.5",
30 | # "pytest>=8.3.5",
31 | # "pyright>=1.1.399",
32 | "pytest>=8.3.5",
33 | ]
34 |
35 | [build-system]
36 | requires = ["setuptools", "setuptools-scm"]
37 | build-backend = "setuptools.build_meta"
38 |
39 | [tool.ruff.lint]
40 | select = ["I"]
41 |
42 | [tool.setuptools_scm]
43 | root = "../../.."
44 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/.github/workflows/publish.yml:
--------------------------------------------------------------------------------
1 | name: release
2 |
3 | on:
4 | push:
5 | tags:
6 | - "v*.*.*"
7 |
8 | permissions:
9 | id-token: write
10 |
11 | jobs:
12 | build-and-publish:
13 | runs-on: ubuntu-latest
14 |
15 | steps:
16 | - name: Checkout code
17 | uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
18 | with:
19 | persist-credentials: false
20 | - name: Setup PDM
21 | uses: pdm-project/setup-pdm@deb8d8a4e2a03aabcef6f2cc981923fc6b29ef99
22 | with:
23 | python-version: 3.13
24 | cache: true
25 | - name: Install project
26 | run: pdm install
27 | - name: build package
28 | run: |
29 | pdm build
30 | - name: Publish package distributions to PyPI
31 | uses: pypa/gh-action-pypi-publish@76f52bc884231f62b9a034ebfe128415bbaabdfc
32 | with:
33 | repository-url: https://test.pypi.org/legacy/
--------------------------------------------------------------------------------
/pdm/tutorial/01-introduction/01-workshop-introduction.md:
--------------------------------------------------------------------------------
1 | # Workshop introduction
2 |
3 | This is a self-paced tutorial on packaging.
4 | Each section starts with a little introduction before you start working on exercises.
5 | However, these exercises are a bit different from what you might be used to: you will be asked to do things, possibly without knowing what they do or understanding why.
6 | Then, when you are finished with a set of exercises, there's a reflection text for you to read, where your questions hopefully are answered.
7 | If you participate in a live session, we also ask you to put a yellow post-it note on the back of your laptop after finishing a set of exercises to signify that you are ready to move on.
8 | Specifically, put a yellow post-it note on your laptop when you get to this symbol:
9 |
10 |
11 |
12 | Once most tutorial participants have finished an exercise and placed yellow post-it notes on their laptops, we will have a short recap together where we demonstrate the exercises and discuss what we have learned before moving on to a new topic.
13 |
14 | If you have any questions, then please attach a pink post-it note on the back of your laptop screen, and we'll do our best to help you.
15 |
16 |
17 |
18 | **PS:** While it's often faster to copy-paste code and commands, we recommend typing most of them in manually unless otherwise stated, as this will make the commands easier to remember later.
19 |
20 | ## Next up
21 | [Pip internals](./02-pip-internals.md)
--------------------------------------------------------------------------------
/uv/tutorial/01-introduction/01-workshop-introduction.md:
--------------------------------------------------------------------------------
1 | # Workshop introduction
2 |
3 | This is a self-paced tutorial on packaging.
4 | Each section starts with a little introduction before you start working on exercises.
5 | However, these exercises are a bit different from what you might be used to: you will be asked to do things, possibly without knowing what they do or understanding why.
6 | Then, when you are finished with a set of exercises, there's a reflection text for you to read, where your questions hopefully are answered.
7 | If you participate in a live session, we also ask you to put a yellow post-it note on the back of your laptop after finishing a set of exercises to signify that you are ready to move on.
8 | Specifically, put a yellow post-it note on your laptop when you get to this symbol:
9 |
10 |
11 |
12 | Once most tutorial participants have finished an exercise and placed yellow post-it notes on their laptops, we will have a short recap together where we demonstrate the exercises and discuss what we have learned before moving on to a new topic.
13 |
14 | If you have any questions, then please attach a pink post-it note on the back of your laptop screen, and we'll do our best to help you.
15 |
16 |
17 |
18 | **PS:** While it's often faster to copy-paste code and commands, we recommend typing most of them in manually unless otherwise stated, as this will make the commands easier to remember later.
19 |
20 | ## Next up
21 | [Pip internals](./02-pip-internals.md)
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/tests/test_packaging_tutorial.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime, timedelta
2 |
3 | import httpx
4 | import pytest
5 |
6 | import packaging_tutorial.pycon
7 |
8 |
9 | @pytest.fixture(autouse=True)
10 | def clear_cached_response() -> None:
11 |     packaging_tutorial.pycon.get_programme.cache_clear()  # reset the cached programme before every test
12 |
13 |
14 | def test_get_future_sessions() -> None:
15 |     now = datetime.fromisoformat("20250512T12:00Z")
16 |     past_sessions = [
17 |         packaging_tutorial.pycon.Session(start=now - timedelta(hours=2)),
18 |         packaging_tutorial.pycon.Session(start=now - timedelta(hours=1)),
19 |         packaging_tutorial.pycon.Session(start=now - timedelta(hours=22)),
20 |     ]
21 |     future_sessions = [
22 |         packaging_tutorial.pycon.Session(start=now + timedelta(hours=1)),
23 |         packaging_tutorial.pycon.Session(start=now + timedelta(hours=21)),
24 |     ]
25 |     sessions = past_sessions + future_sessions
26 |
27 |     assert list(packaging_tutorial.pycon.get_future_sessions(sessions, now=now)) == future_sessions
28 |
29 |
30 | def test_get_sessions_caches_response(monkeypatch: pytest.MonkeyPatch):
31 |     # Setup mock client: record every request and answer with dummy JSON
32 |     def handler(request: httpx.Request) -> httpx.Response:
33 |         handler.requests.append(request)
34 |         return httpx.Response(200, json={"some data": 123})
35 |     handler.requests = []
36 |
37 |     transport = httpx.MockTransport(handler=handler)  # route requests to `handler` instead of the network
38 |     client = httpx.Client(transport=transport)
39 |     monkeypatch.setattr(packaging_tutorial.pycon, "HTTP_CLIENT", client)
40 |
41 |     # Act and assert: the second call should be served from the cache
42 |     packaging_tutorial.pycon.get_programme()
43 |     packaging_tutorial.pycon.get_programme()
44 |     assert len(handler.requests) == 1
45 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/tests/test_packaging_tutorial.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime, timedelta
2 |
3 | import httpx
4 | import pytest
5 |
6 | import packaging_tutorial.pycon
7 |
8 |
9 | @pytest.fixture(autouse=True)
10 | def clear_cached_response() -> None:
11 |     packaging_tutorial.pycon.get_programme.cache_clear()  # reset the cached programme before every test
12 |
13 |
14 | def test_get_future_sessions() -> None:
15 |     now = datetime.fromisoformat("20250512T12:00Z")
16 |     past_sessions = [
17 |         packaging_tutorial.pycon.Session(start=now - timedelta(hours=2)),
18 |         packaging_tutorial.pycon.Session(start=now - timedelta(hours=1)),
19 |         packaging_tutorial.pycon.Session(start=now - timedelta(hours=22)),
20 |     ]
21 |     future_sessions = [
22 |         packaging_tutorial.pycon.Session(start=now + timedelta(hours=1)),
23 |         packaging_tutorial.pycon.Session(start=now + timedelta(hours=21)),
24 |     ]
25 |     sessions = past_sessions + future_sessions
26 |
27 |     assert list(packaging_tutorial.pycon.get_future_sessions(sessions, now=now)) == future_sessions
28 |
29 |
30 | def test_get_sessions_caches_response(monkeypatch: pytest.MonkeyPatch):
31 |     # Setup mock client: record every request and answer with dummy JSON
32 |     def handler(request: httpx.Request) -> httpx.Response:
33 |         handler.requests.append(request)
34 |         return httpx.Response(200, json={"some data": 123})
35 |     handler.requests = []
36 |
37 |     transport = httpx.MockTransport(handler=handler)  # route requests to `handler` instead of the network
38 |     client = httpx.Client(transport=transport)
39 |     monkeypatch.setattr(packaging_tutorial.pycon, "HTTP_CLIENT", client)
40 |
41 |     # Act and assert: the second call should be served from the cache
42 |     packaging_tutorial.pycon.get_programme()
43 |     packaging_tutorial.pycon.get_programme()
44 |     assert len(handler.requests) == 1
45 |
--------------------------------------------------------------------------------
/pdm/tutorial/04-managing-your-code/15-using-github.md:
--------------------------------------------------------------------------------
1 | # Using GitHub
2 |
3 | In this part of the tutorial, we'll look at how to use [GitHub](https://github.com/) to host your code.
4 | While the tutorial is for GitHub, there is a vast number of similar tools you can use ([GitLab](https://gitlab.com/), [Atlassian Bitbucket](https://bitbucket.com), [Codeberg](https://codeberg.org/) and [Gitea](https://about.gitea.com/)).
5 | The reason we use GitHub here is that it's the most popular tool, with ready-built components that make it easier to upload your code to PyPI.
6 |
7 | ## Exercises
8 |
9 | 1. Create a new repository on GitHub, name it {{name-packaging-tutorial}}. Initialise it as an empty repository, and **DO NOT** press the "Include README file" option.
10 | 2. Add the GitHub repository as a *Git remote* by running `git remote add "origin" {remote-url}`, where `{remote-url}` is either the HTTPS or SSH URL. To find this URL, press the green \<\> Code button on your GitHub repository.
11 | 3. Push your changes to GitHub by running `git push --set-upstream origin main`.
12 | 4. Open your repository on GitHub and validate that your updates are there.
13 |
14 |
15 |
16 | ## Reflection
17 |
18 | You have now uploaded your code to GitHub so other people can clone it.
19 | You did this by first creating an empty repository on GitHub.
20 | Then, you added that repository as a *remote* to your local Git repository, which essentially means that Git knows that it can find a copy of the code on GitHub.
21 | However, changes aren't synchronised automatically.
22 | Instead, you must explicitly *push* changes, telling Git to upload them to GitHub.
23 | If there are no new changes on GitHub that you don't have locally, then the code is pushed.
24 | Otherwise, you will need to merge the changes into one coherent repository.
25 | We will not have time to go into how that's done here; see, e.g., Chalmer Lowe's [Introduction to Sprinting tutorial](https://github.com/chalmerlowe/intro_to_sprinting/) for more information about this.
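
Before moving on, you can also verify that everything is wired up correctly from the terminal; `git remote -v` lists each configured remote with its fetch and push URL:

```bash
git remote -v  # should list "origin" with the URL you added
```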
26 |
27 | ## Next up
28 | [Setting up a continuous integration (CI) pipeline](./16-github-workflows.md)
29 |
--------------------------------------------------------------------------------
/uv/tutorial/04-managing-your-code/15-using-github.md:
--------------------------------------------------------------------------------
1 | # Using GitHub
2 |
3 | In this part of the tutorial, we'll look at how to use [GitHub](https://github.com/) to host your code.
4 | While the tutorial is for GitHub, there is a vast number of similar tools you can use ([GitLab](https://gitlab.com/), [Atlassian Bitbucket](https://bitbucket.com), [Codeberg](https://codeberg.org/) and [Gitea](https://about.gitea.com/)).
5 | The reason we use GitHub here is that it's the most popular tool, with ready-built components that make it easier to upload your code to PyPI.
6 |
7 | ## Exercises
8 |
9 | 1. Create a new repository on GitHub, name it {{name-packaging-tutorial}}. Initialise it as an empty repository, and **DO NOT** press the "Include README file" option.
10 | 1. Add the GitHub repository as a *Git remote* by running `git remote add "origin" {remote-url}`, where `{remote-url}` is either the HTTPS or SSH URL. To find this URL, press the green \<\> Code button on your GitHub repository.
11 | 1. Push your changes to GitHub by running `git push --set-upstream origin main`.
12 | 1. Open your repository on GitHub and validate that your updates are there.
13 |
14 |
15 |
16 | ## Reflection
17 |
18 | You have now uploaded your code to GitHub so other people can clone it.
19 | You did this by first creating an empty repository on GitHub.
20 | Then, you added that repository as a *remote* to your local Git repository, which essentially means that Git knows that it can find a copy of the code on GitHub.
21 | However, changes aren't synchronised automatically.
22 | Instead, you must explicitly *push* changes, telling Git to upload them to GitHub.
23 | If there are no new changes on GitHub that you don't have locally, then the code is pushed.
24 | Otherwise, you will need to merge the changes into one coherent repository.
25 | We will not have time to go into how that's done here; see, e.g., Chalmer Lowe's [Introduction to Sprinting tutorial](https://github.com/chalmerlowe/intro_to_sprinting/) for more information about this.
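
Before moving on, you can also verify that everything is wired up correctly from the terminal; `git remote -v` lists each configured remote with its fetch and push URL:

```bash
git remote -v  # should list "origin" with the URL you added
```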
26 |
27 | ## Next up
28 | [Setting up a continuous integration (CI) pipeline](./16-github-workflows.md)
29 |
--------------------------------------------------------------------------------
/pdm/tutorial/01-introduction/02-pip-internals.md:
--------------------------------------------------------------------------------
1 | # How pip installs packages
2 |
3 | When we install a Python package we might use pip and write something along the lines of
4 |
5 | ```bash
6 | pip install urllib3
7 | ```
8 |
9 | But what does that do?
10 |
11 | To answer that, we first need to know what a package is.
12 | A package is essentially a collection of Python files together with instructions on how pip should install it.
13 | Now the question is, of course, how pip installs these packages.
14 |
15 | 
16 |
17 | The screenshot above shows that pip downloads a `.whl`-file, or a *wheel*-file.
18 | To find that file, pip starts by searching [PyPI](https://pypi.org/) for the package; if it finds it, it checks whether there is a wheel file it can download, then downloads and installs it.
19 |
20 | So, an obvious question is: what are these *wheel files*? What's in them?
21 |
22 | ## Exercises
23 |
24 | 1. Navigate to the [PyPI website](https://pypi.org/) and find the project page for urllib3
25 | 2. Download the latest wheel file for urllib3 (the file named `urllib3-{{version}}-py3-none-any.whl`)
26 | 3. Rename the downloaded file so it ends with the file extension `.zip` instead of `.whl`
27 | 4. Unzip the file and inspect the contents. Discuss what you believe they are with your neighbour
28 | 5. Compare the contents of the unzipped wheel file with the [urllib3 source code](https://github.com/urllib3/urllib3). Are there any similarities?
29 |
30 |
31 |
32 | ## Reflection
33 |
34 | We see that the wheel files contain two things: The importable Python files and package metadata in a [very particular format](https://packaging.python.org/en/latest/specifications/core-metadata/).
35 | So this is what we need to provide to enable other people to install and import our code.
36 | While the source code is easy enough to provide in the correct format, the metadata is cumbersome to write by hand.
37 | For example, the RECORD-file includes SHA-hashes for every single importable file, and we don't want to compute those manually.
38 | Instead, we want to use tools that can make wheels with the accompanying metadata-files automatically from our code.
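
Incidentally, since a wheel is just a zip archive, you don't even need to rename it to peek inside -- a minimal sketch using the standard-library `zipfile` module, where the file name is a placeholder for whichever wheel you downloaded:

```python
import zipfile

# List every file inside the wheel, e.g. urllib3/__init__.py
# and urllib3-2.4.0.dist-info/RECORD
with zipfile.ZipFile("urllib3-2.4.0-py3-none-any.whl") as wheel:
    for name in wheel.namelist():
        print(name)
```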
39 |
40 | ## Next up
41 | [Creating our first package with PDM](./03-our-first-package.md).
42 |
--------------------------------------------------------------------------------
/uv/tutorial/01-introduction/02-pip-internals.md:
--------------------------------------------------------------------------------
1 | # How pip installs packages
2 |
3 | When we install a Python package we might use pip and write something along the lines of
4 |
5 | ```bash
6 | pip install urllib3
7 | ```
8 |
9 | But what does that do?
10 |
11 | To answer that, we first need to know what a package is.
12 | A package is essentially a collection of Python files together with instructions on how pip should install it.
13 | Now the question is, of course, how pip installs these packages.
14 |
15 | 
16 |
17 | The screenshot above shows that pip downloads a `.whl`-file, or a *wheel*-file.
18 | To find that file, pip starts by searching [PyPI](https://pypi.org/) for the package; if it finds it, it checks whether there is a wheel file it can download, then downloads and installs it.
19 |
20 | So, an obvious question is: what are these *wheel files*? What's in them?
21 |
22 | ## Exercises
23 |
24 | 1. Navigate to the [PyPI website](https://pypi.org/) and find the project page for urllib3
25 | 2. Download the latest wheel file for urllib3 (the file named `urllib3-{{version}}-py3-none-any.whl`)
26 | 3. Rename the downloaded file so it ends with the file extension `.zip` instead of `.whl`
27 | 4. Unzip the file and inspect the contents. Discuss what you believe they are with your neighbour
28 | 5. Compare the contents of the unzipped wheel file with the [urllib3 source code](https://github.com/urllib3/urllib3). Are there any similarities?
29 |
30 |
31 |
32 | ## Reflection
33 |
34 | We see that the wheel files contain two things: The importable Python files and package metadata in a [very particular format](https://packaging.python.org/en/latest/specifications/core-metadata/).
35 | So this is what we need to provide to enable other people to install and import our code.
36 | While the source code is easy enough to provide in the correct format, the metadata is cumbersome to write by hand.
37 | For example, the RECORD-file includes SHA-hashes for every single importable file, and we don't want to compute those manually.
38 | Instead, we want to use tools that can make wheels with the accompanying metadata-files automatically from our code.
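
Incidentally, since a wheel is just a zip archive, you don't even need to rename it to peek inside -- a minimal sketch using the standard-library `zipfile` module, where the file name is a placeholder for whichever wheel you downloaded:

```python
import zipfile

# List every file inside the wheel, e.g. urllib3/__init__.py
# and urllib3-2.4.0.dist-info/RECORD
with zipfile.ZipFile("urllib3-2.4.0-py3-none-any.whl") as wheel:
    for name in wheel.namelist():
        print(name)
```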
39 |
40 | ## Next up
41 | [Creating our first package with uv](./03-our-first-package.md).
42 |
--------------------------------------------------------------------------------
/pdm/tutorial/03-building-and-publishing-packages/12-publishing-packages.md:
--------------------------------------------------------------------------------
1 | # Publishing your package to PyPI
2 |
3 | Once we've created installable wheels and source distributions, we need to share them.
4 | This is typically done by uploading them to the Python Packaging Index (PyPI).
5 | Technically, PyPI is a *package repository*, and while PyPI is the most common package repository, many others exist.
6 | For example, your employer might have a copy of PyPI internally that you can upload internal packages to.
7 | In this part of the tutorial, we'll upload our package to a package repository named Test PyPI, which is a clone of PyPI where we can mess around without cluttering the official PyPI.
8 |
9 | ## Exercises
10 | 1. If you haven't already, register a new user for Test-PyPI at https://test.pypi.org/
11 | 1. Navigate to your account settings on Test PyPI and create an API-token with full access to your account. Store this token somewhere safe (preferably in a password or credential manager).
12 | 1. Publish your package by running `pdm publish -r https://test.pypi.org/legacy`. Use the username `__token__` and the token you created earlier as password.
13 | 1. Navigate to your account settings on Test PyPI and delete the API token you created.
14 | 1. Navigate to https://test.pypi.org/ and search for your package.
15 | 1. Navigate to https://test.pypi.org/simple and see if you find your package. What do you think this "simple view" is used for?
16 | 1. Navigate to https://test.pypi.org/simple/{your-package-name}/. What do you see here?
17 |
18 |
19 |
20 | ## Reflection
21 | By now, you have created a user at a package registry and pushed code there that anyone can install.
22 | Moreover, you might have gotten a deeper understanding of how pip works: it finds projects by parsing a simple HTML file and downloads and installs wheels by following the links.
23 | This is VERY similar to how we could do it manually, but luckily, we don't need to.
24 |
25 | If you're interested in the specifics of PyPI and how it's built up, then you can check out the [PyPI documentation](https://docs.pypi.org/api/index-api/), or the HTML repository specs in [PEP 503](https://peps.python.org/pep-0503/) (there is also a JSON-spec that you can use if you want to create your own package repository: [PEP 691](https://peps.python.org/pep-0691/)).
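
You can even mimic pip's first step yourself -- a small sketch using httpx (already a dependency of our package) to fetch the simple-index page for an example project:

```python
import httpx

# Fetch the "simple" index page for a project, just like pip does;
# urllib3 is used as an example project name here.
response = httpx.get("https://pypi.org/simple/urllib3/")
response.raise_for_status()
print(response.text[:1000])  # an HTML page full of links to wheels and sdists
```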
26 |
27 | ## Exercise (Optional)
28 | 1. Use what you've learned so far to set up one of your own projects to use PDM.
29 |
30 | ## Next up
31 | [Using git to keep track of our code](../04-managing-your-code/13-using-git.md)
32 |
--------------------------------------------------------------------------------
/uv/tutorial/04-managing-your-code/18-summary.md:
--------------------------------------------------------------------------------
1 | # Summary
2 |
3 | Now, you are pretty much up-to-date on the modern packaging standards in Python.
4 | You should know how to organise your code in a good way, and that the `pyproject.toml` file is organised in four parts: the `project` table with package metadata; the `dependency-groups` table with development dependencies; the `build-system` table, which specifies how to build the package; and the `tool` table, which configures other utilities you use while developing your package.
5 | Moreover, you've learned to use the modern project management tool uv, and how uv plays together with the Python packaging ecosystem.
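
As a reminder, a minimal skeleton of those four parts could look like this (all names and version pins below are placeholders, not the tutorial project's actual configuration):

```toml
[project]                 # package metadata
name = "my-package"
version = "0.1.0"
dependencies = ["httpx>=0.28"]

[dependency-groups]       # development-only dependencies
dev = ["pytest>=8", "ruff>=0.11"]

[build-system]            # how to build the package
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[tool.ruff.lint]          # configuration for other development tools
select = ["I"]
```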
6 |
7 | You have also learned about tools that make it easier to build high-quality Python packages, like pytest (with a couple of plugins) and Ruff; how to use pre-commit hooks to run Ruff automatically when you commit; and how to set up a CI pipeline that runs your unit tests automatically when you push to GitHub.
8 |
9 | Finally, you've learned about building your package, the difference between a wheel and a source distribution, and how to publish your packages to PyPI, both manually and automatically through a CD-pipeline (that you secured with Zizmor).
10 |
11 | What we haven't really looked at is how to build Python extension modules (i.e. modules written in other languages, like C, C++, Fortran or Rust).
12 | While this can be very useful in certain circumstances, building such packages also requires knowledge about the build systems commonly used with these extension modules.
13 | However, if you are interested in making extension modules, then you can still use uv -- you just need to set the correct build backend.
14 | Here are some resources for further reading about building extension modules for Python:
15 |
16 | - [The Python Packaging User Guide's recommendations on build systems](https://packaging.python.org/en/latest/guides/tool-recommendations/#build-backends)
17 | - [The Meson build system for C, C++, Fortran, etc.](https://mesonbuild.com/meson-python/)
18 | - Meson is, for example, used to build [NumPy](https://numpy.org) and [SciPy](https://scipy.org).
19 | - [Scikit-build-core (wrapping CMake projects for Python)](https://scikit-build-core.readthedocs.io/en/latest/)
20 | - [Maturin](https://www.maturin.rs) and [PyO3](https://github.com/PyO3/pyo3) for extension modules written in Rust.
21 | - Maturin is, for example, used to build [Pydantic](https://docs.pydantic.dev/latest/).
22 | - [PyOpenSci's quick introduction to extension modules](https://www.pyopensci.org/python-package-guide/package-structure-code/complex-python-package-builds.html)
23 |
--------------------------------------------------------------------------------
/pdm/tutorial/04-managing-your-code/18-summary.md:
--------------------------------------------------------------------------------
1 | # Summary
2 |
3 | Now, you are pretty much up-to-date on the modern packaging standards in Python.
4 | You should know how to organise your code in a good way, and that the `pyproject.toml` file is organised in four parts: the `project` table with package metadata; the `dependency-groups` table with development dependencies; the `build-system` table, which specifies how to build the package; and the `tool` table, which configures other utilities you use while developing your package.
5 | Moreover, you've learned to use the modern project management tool PDM, and how PDM plays together with the Python packaging ecosystem.
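
As a reminder, a minimal skeleton of those four parts could look like this (all names and version pins below are placeholders, not the tutorial project's actual configuration):

```toml
[project]                 # package metadata
name = "my-package"
version = "0.1.0"
dependencies = ["httpx>=0.28"]

[dependency-groups]       # development-only dependencies
dev = ["pytest>=8", "ruff>=0.11"]

[build-system]            # how to build the package
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[tool.ruff.lint]          # configuration for other development tools
select = ["I"]
```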
6 |
7 | You have also learned about tools that make it easier to build high-quality Python packages, like pytest (with a couple of plugins) and Ruff; how to use pre-commit hooks to run Ruff automatically when you commit; and how to set up a CI pipeline that runs your unit tests automatically when you push to GitHub.
8 |
9 | Finally, you've learned about building your package, the difference between a wheel and a source distribution, and how to publish your packages to PyPI, both manually and automatically through a CD-pipeline (that you secured with Zizmor).
10 |
11 | What we haven't really looked at is how to build Python extension modules (i.e. modules written in other languages, like C, C++, Fortran or Rust).
12 | While this can be very useful in certain circumstances, building such packages also requires knowledge about the build systems commonly used with these extension modules.
13 | However, if you are interested in making extension modules, then you can still use PDM -- you just need to set the correct build backend.
14 | Here are some resources for further reading about building extension modules for Python:
15 |
16 | - [The Python Packaging User Guide's recommendations on build systems](https://packaging.python.org/en/latest/guides/tool-recommendations/#build-backends)
17 | - [The Meson build system for C, C++, Fortran, etc.](https://mesonbuild.com/meson-python/)
18 | - Meson is, for example, used to build [NumPy](https://numpy.org) and [SciPy](https://scipy.org).
19 | - [Scikit-build-core (wrapping CMake projects for Python)](https://scikit-build-core.readthedocs.io/en/latest/)
20 | - [Maturin](https://www.maturin.rs) and [PyO3](https://github.com/PyO3/pyo3) for extension modules written in Rust.
21 | - Maturin is, for example, used to build [Pydantic](https://docs.pydantic.dev/latest/).
22 | - [PyOpenSci's quick introduction to extension modules](https://www.pyopensci.org/python-package-guide/package-structure-code/complex-python-package-builds.html)
23 |
--------------------------------------------------------------------------------
/uv/tutorial/03-building-and-publishing-packages/12-publishing-packages.md:
--------------------------------------------------------------------------------
1 | # Publishing your package to PyPI
2 |
3 | Once we've created installable wheels and source distributions, we need to share them.
4 | This is typically done by uploading them to the Python Packaging Index (PyPI).
5 | Technically, PyPI is a *package repository*, and while PyPI is the most common package repository, many others exist.
6 | For example, your employer might have a copy of PyPI internally that you can upload internal packages to.
7 | In this part of the tutorial, we'll upload our package to a package repository named Test PyPI, which is a clone of PyPI where we can mess around without cluttering the official PyPI.
8 |
9 | ## Exercises
10 | 1. If you haven't already, register a new user for Test-PyPI at https://test.pypi.org/
11 | 1. Navigate to your account settings on Test PyPI and create an API-token with full access to your account. Store this token somewhere safe (preferably in a password or credential manager).
12 | 1. Publish your package by running `uv publish --publish-url https://test.pypi.org/legacy/`. Use the username `__token__` and the token you created earlier as password.
13 | 1. Navigate to your account settings on Test PyPI and delete the API token you created.
14 | 1. Navigate to https://test.pypi.org/ and search for your package.
15 | 1. Navigate to https://test.pypi.org/simple and see if you find your package. What do you think this "simple view" is used for?
16 | 1. Navigate to https://test.pypi.org/simple/{your-package-name}/. What do you see here?
17 |
18 |
19 |
20 | ## Reflection
21 | By now, you have created a user at a package registry and pushed code there that anyone can install.
22 | Moreover, you might have gotten a deeper understanding of how pip works: it finds projects by parsing a simple HTML file and downloads and installs wheels by following the links.
23 | This is VERY similar to how we could do it manually, but luckily, we don't need to.
24 |
25 | If you're interested in the specifics of PyPI and how it's built up, then you can check out the [PyPI documentation](https://docs.pypi.org/api/index-api/), or the HTML repository specs in [PEP 503](https://peps.python.org/pep-0503/) (there is also a JSON-spec that you can use if you want to create your own package repository: [PEP 691](https://peps.python.org/pep-0691/)).
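
You can even mimic pip's first step yourself -- a small sketch using httpx (already a dependency of our package) to fetch the simple-index page for an example project:

```python
import httpx

# Fetch the "simple" index page for a project, just like pip does;
# urllib3 is used as an example project name here.
response = httpx.get("https://pypi.org/simple/urllib3/")
response.raise_for_status()
print(response.text[:1000])  # an HTML page full of links to wheels and sdists
```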
26 |
27 | ## Exercise (Optional)
28 | 1. Use what you've learned so far to set up one of your own projects to use uv.
29 |
30 |
31 |
32 | ## Next up
33 | [Using git to keep track of our code](../04-managing-your-code/13-using-git.md)
34 |
--------------------------------------------------------------------------------
/uv/tutorial/02-more-about-dependencies/10-lock-files.md:
--------------------------------------------------------------------------------
1 | # Lock files
2 |
3 | An integral part of developing a Python package is keeping track of its dependencies, and, as we've already seen, making sure they are compatible.
4 | The task of finding compatible set of dependencies, given the project requirements is called *locking*, and most major Python project management tools have a locking concept.
5 |
6 | ## Exercises
7 | 1. Open the `uv.lock`-file. What does it contain? Can you find your dependencies here?
8 | 2. The `uv.lock`-file contains many dependencies that you haven't explicitly added with `uv add`. Why do you think this is?
9 | 3. Run `uv export -o requirements.txt` and inspect the contents of the `requirements.txt`. What do you think it contains?
10 | 4. Delete the `requirements.txt`-file
11 | 5. Run `uv tree` and inspect the terminal output. What does this represent?
12 |
13 |
14 |
15 | ## Reflection
16 | When you add dependencies to your project, you also need to install your dependencies' dependencies.
17 | This is called indirect dependencies, and can be a source of much pain.
18 | Whenever you add a dependency, you may add dozens of other dependencies you don't know about.
19 | The lock file contains a full list of all these dependencies with information about what versions they require, and if uv can successfully lock your project, you know that there exists a compatible set of dependencies -- even among all the indirect ones.
20 |
21 | In fact, a very common type of lock file used to be the `requirements.txt`-file obtained from running `pip freeze` (or `uv export`, etc.).
22 | While this was common in the past, we have more modern standards these days, so we don't really need it anymore.
23 | Still, it's useful to be aware that a `requirements.txt`-file can function as an absolutely minimal lock file.
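
For illustration, an exported `requirements.txt` is just a flat list of pinned requirements, one per line -- something like this sketch (package names and versions are illustrative; real exports may also pin file hashes):

```
anyio==4.9.0
certifi==2025.1.31
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
sniffio==1.3.1
```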
24 |
25 | Finally, we saw that while the lock file is useful for keeping a record of the dependencies, it is not meant for human consumption.
26 | Rather, if you want to inspect the dependencies of a project, then you should probably look at a visual representation of the dependency tree.
27 |
28 | ## Note: Upcoming standard
29 | Lock files were recently standardised in [PEP 751](https://peps.python.org/pep-0751/).
30 | This is great news, as lock files are one of the final hurdles for truly being able to work with different packaging frontends (like uv, PDM, Hatch and Poetry) on the same project.
31 | However, there is still a way to go before the different tools switch to this new format (and some may never switch completely).
32 | Still, it's great to see that the standard has arrived after many years of work.
33 | So, in the future, you may see `pylock.toml` as a common file in Python projects as well, together with the `pyproject.toml`!
34 |
35 | ## Next up
36 | [Building your project](../03-building-and-publishing-packages/11-building-wheels.md)
37 |
--------------------------------------------------------------------------------
/pdm/tutorial/02-more-about-dependencies/10-lock-files.md:
--------------------------------------------------------------------------------
1 | # Lock files
2 |
3 | An integral part of developing a Python package is keeping track of its dependencies, and, as we've already seen, making sure they are compatible.
4 | The task of finding compatible set of dependencies, given the project requirements is called *locking*, and most major Python project management tools have a locking concept.
5 |
6 | ## Exercises
7 | 1. Open the `pdm.lock`-file. What does it contain? Can you find your dependencies here?
8 | 2. The `pdm.lock`-file contains many dependencies that you haven't explicitly added with `pdm add`. Why do you think this is?
9 | 3. Run `pdm export -o requirements.txt` and inspect the contents of the `requirements.txt`. What do you think it contains?
10 | 4. Delete the `requirements.txt`-file
11 | 5. Run `pdm list --tree` and inspect the terminal output. What does this represent?
12 |
13 |
14 |
15 | ## Reflection
16 | When you add dependencies to your project, you also need to install your dependencies' dependencies.
17 | This is called indirect dependencies, and can be a source of much pain.
18 | Whenever you add a dependency, you may add dozens of other dependencies you don't know about.
19 | The lock file contains a full list of all these dependencies with information about what versions they require, and if PDM can successfully lock your project, you know that there exists a compatible set of dependencies -- even among all the indirect ones.
20 |
21 | In fact, a very common type of lock file used to be the `requirements.txt`-file obtained from running `pip freeze` (or `pdm export`, etc.).
22 | While this was common in the past, we have more modern standards these days, so we don't really need it anymore.
23 | Still, it's useful to be aware that a `requirements.txt`-file can function as an absolutely minimal lock file.
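
For illustration, an exported `requirements.txt` is just a flat list of pinned requirements, one per line -- something like this sketch (package names and versions are illustrative; real exports may also pin file hashes):

```
anyio==4.9.0
certifi==2025.1.31
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
sniffio==1.3.1
```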
24 |
25 | Finally, we saw that while the lock file is useful for keeping a record of the dependencies, it is not meant for human consumption.
26 | Rather, if you want to inspect the dependencies of a project, then you should probably look at a visual representation of the dependency tree.
27 |
28 | ## Note: Upcoming standard
29 | Lock files were recently standardised in [PEP 751](https://peps.python.org/pep-0751/).
30 | This is great news, as lock files are one of the final hurdles for truly being able to work with different packaging frontends (like uv, PDM, Hatch and Poetry) on the same project.
31 | However, there is still a way to go before the different tools switch to this new format (and some may never switch completely).
32 | Still, it's great to see that the standard has arrived after many years of work.
33 | So, in the future, you may see `pylock.toml` as a common file in Python projects as well, together with the `pyproject.toml`!
34 |
35 | ## Next up
36 | [Building your project](../03-building-and-publishing-packages/11-building-wheels.md)
37 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Python project packaging: from zero to hero
2 |
3 | Packaging your code is an essential skill that empowers you to share your Python projects with the world.
4 | However, the packaging process can appear complex and overwhelming, and it can be difficult to know where to begin.
5 |
6 | This is meant as a hands-on tutorial to clarify what packaging is and how good packaging tools can help you share your code with others.
7 | We’ll cover how to structure projects and follow best practices for project layouts, specify project metadata, make your package installable, and publish it to PyPI. While the materials are meant for a hands-on tutorial, they can also be used for self-study.
8 |
9 | We'll use uv in this tutorial, but there are also materials for PDM if you'd prefer that.
10 | Most of the topics covered will also apply to other packaging tools like Hatch or Poetry.
11 | Moreover, while the correct technical term for what we'll look at here is [distribution packages](https://packaging.python.org/en/latest/glossary/#term-Distribution-Package), we'll just use the word package (despite its [more general meaning](https://docs.python.org/3/glossary.html#term-package)) as it's commonly used as a shorthand for distribution package (see e.g. PEP [458](https://peps.python.org/pep-0458/), [480](https://peps.python.org/pep-0480/) and [668](https://peps.python.org/pep-0668/)).
12 |
13 | ## Overview
14 |
15 | This workshop will be a hands-on way to explore the following concepts:
16 |
17 | * What a (distribution) package is
18 | * How pip installs packages
19 | * What a wheel file is
20 | * How you can create packages
21 | * What the preferred way to specify project metadata is
22 |
23 | However, the workshop will **not** cover *application packaging*, which is usually done with a container system like Docker, Podman or Kubernetes.
24 | Still, the concepts will be useful for mastering application packaging, as most of the concepts covered here are necessary for application packaging as well.
25 | In fact, a common way to deploy an application is to first package it as a library and then install that library in a container image.
26 |
27 | ## Prerequisites
28 |
29 | These materials are meant both for beginner coders who are just getting ready to share their code and experienced developers who haven't looked at the Python packaging ecosystem for a couple of years and want a refresher.
30 | They assume that you are sort of familiar with Python and that you are somewhat familiar with using the terminal (CMD/Bash/Zsh/PowerShell/etc). It will also be beneficial to know the basics of Git for the later parts of the tutorial, but it's not a requirement. If you're attending the live workshop, we can help you along!
31 |
32 | Also you should know how to use a code editor like VSCode, PyCharm or Sublime Text.
33 |
34 | > [!NOTE]
35 | > If you cannot install uv on your machine, then you can fork our [base project repo](https://github.com/yngvem/pycon25-tutorial-project), which has a devcontainer with uv installed.
36 |
37 | ## Next
38 | [Introduction (uv)](uv/tutorial/01-introduction/01-workshop-introduction.md) or [Introduction (pdm)](pdm/tutorial/01-introduction/01-workshop-introduction.md)
39 |
40 | ## Copyright
41 | © Yngve Mardal Moe & Marie Roald (2025-)
42 |
--------------------------------------------------------------------------------
/pdm/tutorial/02-more-about-dependencies/09-static-code-checkers.md:
--------------------------------------------------------------------------------
1 | # Static code checkers
2 |
3 | In addition to tests, we also have tools that can analyse our code without running it.
4 | Specifically, we have code formatters, which deal with the code layout but not its content (the code is equivalent before and after formatting); linters, which detect issues with the code itself; and type checkers, which find type issues in statically typed Python code.
5 | Such tools are commonly used to ensure a high and consistent code quality in projects, and we highly recommend using them -- particularly linters and code formatters.
6 |
7 | Previously, there were many linters and code formatters, like [flake8](https://flake8.pycqa.org/en/latest/), [autopep8](https://pypi.org/project/autopep8/), [Black](https://github.com/psf/black), [pylint](https://www.pylint.org) and more.
8 | However, these days, the main tool to use is [Ruff](https://astral.sh/ruff), which is lightning fast and does pretty much everything flake8, isort, Black and many other tools did before.
9 |
10 | ## Exercises
11 |
12 | 1. Add `ruff` as a PDM development dependency and run it in your terminal by running `pdm run ruff check .`. What happened?
13 | 2. You can customise what Ruff looks for in your `pyproject.toml`-file. Add the following lines to the bottom of your `pyproject.toml` file and run the ruff checks again. What changed?
14 | ```toml
15 | [tool.ruff.lint]
16 | select = ["I"]
17 | ```
18 | 3. Run `pdm run ruff check --fix .` and look at your `tests/test_pycon.py` file. What changed? What do you think will happen if you run `pdm run ruff check` again?
19 | 4. Run `pdm run ruff format .` and look at your `tests/test_pycon.py` file. How is the `format` argument different from the `check` argument?
20 |
21 |
22 |
23 | ## Reflection
24 |
25 | By running `ruff check`, we used the linter functionality of Ruff to see if there are potential bugs or [code smells](https://en.wikipedia.org/wiki/Code_smell) that we should fix.
26 | We also added more [linting rules](https://docs.astral.sh/ruff/rules/), and made it consider the [order of our imports](https://docs.astral.sh/ruff/rules/#isort-i).
27 | This made our code fail the linter checks.
28 | However, some of the linter checks can be solved safely automatically, like the import sorting, so by running `ruff check --fix .`, we ask Ruff to automatically resolve the issues it can resolve.
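
As a concrete sketch of what the import-sorting (`I`) rules enforce (the module names here are illustrative): imports are split into groups -- standard library, then third-party, then first-party -- with each group alphabetised:

```python
# Before `ruff check --fix .`: all imports mixed together
import httpx
import os
import packaging_tutorial.pycon
import sys

# After: grouped and alphabetised, with blank lines between groups
import os
import sys

import httpx

import packaging_tutorial.pycon
```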
29 |
30 | Ruff can also work as a code formatter, which is why we could run `ruff format .`.
31 | When we ran that code it mostly added and removed whitespace, producing (nearly) equivalent code that should behave exactly like the original code, but that also follows the [PEP-8](https://peps.python.org/pep-0008/) style guide.
32 |
33 | However, what Ruff doesn't do is static type checking, for which you will need either [MyPy](https://mypy-lang.org) or [Pyright](https://github.com/microsoft/pyright).
34 | Still, Ruff's impressive speed, wide range of linting rules and almost perfect agreement with [Black](https://github.com/psf/black) formatted code has made it very popular in the Python ecosystem.
35 |
36 | ## Next up
37 | [Locking and lock files](./10-lock-files.md)
38 |
--------------------------------------------------------------------------------
/uv/tutorial/02-more-about-dependencies/09-static-code-checkers.md:
--------------------------------------------------------------------------------
1 | # Static code checkers
2 |
3 | In addition to tests, we also have tools that can analyse our code without running it.
4 | Specifically, we have code formatters, which deal with the code layout but not its content (the code is equivalent before and after formatting); linters, which detect issues with the code itself; and type checkers, which find type issues in statically typed Python code.
5 | Such tools are commonly used to ensure a high and consistent code quality in projects, and we highly recommend using them -- particularly linters and code formatters.
6 |
7 | Previously, there were many linters and code formatters, like [flake8](https://flake8.pycqa.org/en/latest/), [autopep8](https://pypi.org/project/autopep8/), [Black](https://github.com/psf/black), [pylint](https://www.pylint.org) and more.
8 | However, these days, the main tool to use is [Ruff](https://astral.sh/ruff), which is lightning fast and does pretty much everything flake8, isort, Black and many other tools did before.
9 |
10 | ## Exercises
11 |
12 | 1. Add `ruff` as a uv development dependency and run it in your terminal by running `uv run ruff check .`. What happened?
13 | 2. You can customise what Ruff looks for in your `pyproject.toml`-file. Add the following lines to the bottom of your `pyproject.toml` file and run the ruff checks again. What changed? Did any files have issues, and if so, which?
14 | ```toml
15 | [tool.ruff.lint]
16 | select = ["I"]
17 | ```
18 | 3. Run `uv run ruff check --fix .` and look at your `tests/test_pycon.py` file. What changed? What do you think will happen if you run `uv run ruff check` again?
19 | 4. Run `uv run ruff format .` and look at your `tests/test_pycon.py` file. How is the `format` argument different from the `check` argument?
20 |
21 |
22 |
23 | ## Reflection
24 | By running `ruff check`, we used the linter functionality of Ruff to see if there are potential bugs or [code smells](https://en.wikipedia.org/wiki/Code_smell) that we should fix.
25 | We also added more [linting rules](https://docs.astral.sh/ruff/rules/), and made it consider the [order of our imports](https://docs.astral.sh/ruff/rules/#isort-i).
26 | This made our code fail the linter checks.
27 | However, some of the linter checks can be solved safely automatically, like the import sorting, so by running `ruff check --fix .`, we ask Ruff to automatically resolve the issues it can resolve.
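
As a concrete sketch of what the import-sorting (`I`) rules enforce (the module names here are illustrative): imports are split into groups -- standard library, then third-party, then first-party -- with each group alphabetised:

```python
# Before `ruff check --fix .`: all imports mixed together
import httpx
import os
import packaging_tutorial.pycon
import sys

# After: grouped and alphabetised, with blank lines between groups
import os
import sys

import httpx

import packaging_tutorial.pycon
```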
28 |
29 | Ruff can also work as a code formatter, which is why we could run `ruff format .`.
30 | When we ran that code it mostly added and removed whitespace, producing (nearly) equivalent code that should behave exactly like the original code, but that also follows the [PEP-8](https://peps.python.org/pep-0008/) style guide.
31 |
32 | However, what Ruff doesn't do is static type checking, for which you will need either [MyPy](https://mypy-lang.org) or [Pyright](https://github.com/microsoft/pyright).
33 | Still, Ruff's impressive speed, wide range of linting rules and almost perfect agreement with [Black](https://github.com/psf/black) formatted code has made it very popular in the Python ecosystem.
34 |
35 | ## Next up
36 | [Locking and lock files](./10-lock-files.md)
37 |
--------------------------------------------------------------------------------
/pdm/tutorial/01-introduction/0x-bonus-scripts.md:
--------------------------------------------------------------------------------
1 | # Bonus content: Package scripts
2 |
3 | You may have noticed that some Python projects also create executables.
4 | For example, you can run Pytest by just running the command `pytest`.
5 | This is done through something called *[executable scripts](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#creating-executable-scripts)* or entry points.
6 |
7 | ## Exercises
8 |
9 | 1. Open the `pyproject.toml` file and update it so it has a block that says
10 |
11 | ```toml
12 | [project.scripts]
13 | {PACKAGE_NAME} = "{PACKAGE_NAME}.pycon:main"
14 | ```
15 |
16 | 2. Run `pdm install`. What do you think will happen when you run `pdm run {PACKAGE_NAME}` now?
17 | 3. Update the `{PACKAGE_NAME} = "{PACKAGE_NAME}.pycon:main"` line so it says `pycon_next_session = "{PACKAGE_NAME}.pycon:main"` and run `pdm install`. What do you think will happen when you run `pdm run pycon_next_session`?
18 |
19 | ## Reflection
20 |
21 | We have now seen how we can create executables with Python, and this is how tools like `pytest` and `pre-commit` register executables on your system.
22 | Specifically, you let `pip`, `pipx`, `pdm`, etc. create the executable when you install the tool, and the executable will just run the function specified in the `project.scripts` table.
23 |
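For reference, after exercise 3 the scripts table in your `pyproject.toml` should read:

```toml
[project.scripts]
pycon_next_session = "{PACKAGE_NAME}.pycon:main"
```
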
24 | Thus, after solving exercise 3, running `pdm run pycon_next_session` will be the same as running the following Python script:
25 | ```python
26 | from {PACKAGE_NAME}.pycon import main
27 | main()
28 | ```
29 |
30 | However, this is not the only way to make your code runnable!
31 | We can also make executable modules.
32 |
33 | ## Exercises
34 |
35 | 1. Try to run `pdm run python -m {PACKAGE_NAME}.pycon`. What do you think happened?
36 | 2. Try to run `pdm run python -m {PACKAGE_NAME}`. What do you think will happen now?
37 | 3. Add the file `__main__.py` into your `src/{PACKAGE_NAME}` directory. The file should have the following contents
38 | ```python
39 | from {PACKAGE_NAME}.pycon import main
40 | main()
41 | ```
42 | 4. Try to run `pdm run python -m {PACKAGE_NAME}` again. Discuss with your neighbour: Why do you think this worked now?
43 | 5. Discuss with your neighbour: What do you think the benefit of using `python -m {module}` is compared to just running `python {path_to_script}`?
44 |
45 |
46 | ## Reflection
47 |
48 | We have now seen the other way of making executable Python modules: The `python -m`-method.
49 | This works a little bit differently from the `project.scripts`-method.
50 | With the executable scripts, we specify a function we want to run, while with the `python -m`-method we specify a module.
51 | This means that running `pdm run python -m {PACKAGE_NAME}.pycon` will be the same as running `pdm run python src/{PACKAGE_NAME}/pycon.py`.
52 |
53 | The benefit of using the `python -m` method compared to just running Python files with `python {path_to_script}` is that we can run any importable module, even if you don't know its path.
54 | You can test this yourself by e.g. running `pdm run python -m calendar`.
55 | This will be the same as running the built-in calendar module in Python!
56 |
57 | If you want to see a full list of all runnable modules that come built-in with Python, then Trey Hunner has written about it [here](https://www.pythonmorsels.com/cli-tools/).
58 |
59 | ## Next up
60 | [Virtual environments](./05-virtual-environments.md)
61 |
--------------------------------------------------------------------------------
/uv/tutorial/01-introduction/0x-bonus-scripts.md:
--------------------------------------------------------------------------------
1 | # Bonus content: Package scripts
2 |
3 | You may have noticed that some Python projects also create executables.
4 | For example, you can run Pytest by just running the command `pytest`.
5 | This is done through something called *[executable scripts](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#creating-executable-scripts)* or entry points.
6 |
7 | ## Exercises
8 |
9 | 1. Inspect the `project.scripts` table in the `pyproject.toml` file. What do you think will happen if you run `uv run {PACKAGE_NAME}`? Try and see if you were right.
10 | 2. Update the `{PACKAGE_NAME} = "{PACKAGE_NAME}:main"` entry so it says `{PACKAGE_NAME} = "{PACKAGE_NAME}.pycon:main"`. What do you think will happen when you run `uv run {PACKAGE_NAME}` now?
11 | 3. Update the `{PACKAGE_NAME} = "{PACKAGE_NAME}.pycon:main"` entry so it says `pycon_next_session = "{PACKAGE_NAME}.pycon:main"`. What do you think will happen when you run `uv run pycon_next_session`?
12 |
13 | ## Reflection
14 |
15 | We have now seen how we can create executables with Python, and this is how tools like `pytest` and `pre-commit` register executables on your system.
16 | Specifically, you let `pip`, `pipx`, `uv`, etc. create the executable when you install the tool, and they will just run the function specified in the `project.scripts` table.
17 |
18 | Thus, after solving exercise 3, running `uv run pycon_next_session` will be the same as running the following Python script:
19 | ```python
20 | from {PACKAGE_NAME}.pycon import main
21 | main()
22 | ```
23 |
24 | However, this is not the only way to make your code runnable!
25 | We can also make executable modules.
26 |
27 | ## Exercises
28 |
29 | 1. Try to run `uv run python -m {PACKAGE_NAME}.pycon`. What do you think happened?
30 | 2. Try to run `uv run python -m {PACKAGE_NAME}`. What do you think will happen now?
31 | 3. Add the file `__main__.py` into your `src/{PACKAGE_NAME}` directory. The file should have the following contents
32 | ```python
33 | from {PACKAGE_NAME}.pycon import main
34 | main()
35 | ```
36 | 4. Try to run `uv run python -m {PACKAGE_NAME}` again. Discuss with your neighbour: Why do you think this worked now?
37 | 5. Discuss with your neighbour: What do you think the benefit of using `python -m {module}` is compared to just running `python {path_to_script}`?
38 |
39 |
40 | ## Reflection
41 |
42 | We have now seen the other way of making executable Python modules: The `python -m`-method.
43 | This works a little bit differently from the `project.scripts`-method.
44 | With the executable scripts, we specify a function we want to run, while with the `python -m`-method we specify a module.
45 | This means that running `uv run python -m {PACKAGE_NAME}.pycon` will be the same as running `uv run python src/{PACKAGE_NAME}/pycon.py`.
46 |
47 | The benefit of using the `python -m` method compared to just running Python files with `python {path_to_script}` is that we can run any importable module, even if you don't know its path.
48 | You can test this yourself by e.g. running `uv run python -m calendar`.
49 | This will be the same as running the built-in calendar module in Python!
50 |
51 | If you want to see a full list of all runnable modules that come built-in with Python, then Trey Hunner has written about it [here](https://www.pythonmorsels.com/cli-tools/).
52 |
53 | ## Next up
54 | [Virtual environments](./05-virtual-environments.md)
55 |
--------------------------------------------------------------------------------
/uv/tutorial/01-introduction/03-our-first-package.md:
--------------------------------------------------------------------------------
1 | # Creating an installable Python package
2 | We are now ready to make our first package.
3 | Luckily, we don't need to do this manually -- many amazing people in the Python community spend their time maintaining tools that do the heavy lifting for us.
4 | We will use a modern, feature-packed and standards-compliant packaging tool: [uv](https://docs.astral.sh/uv/), but there are many other great tools too, like [PDM](https://pdm-project.org/latest/), [Hatch](https://hatch.pypa.io/latest/), [Poetry](https://python-poetry.org/) and [Setuptools](https://setuptools.pypa.io/en/latest/).
5 |
6 | What we learn in this tutorial will easily transfer across different packaging tools, as they all solve more or less the same problem: how to distribute Python packages.
7 | If you want a comparison of the different tools for packaging in Python, then we recommend [this excellent writeup by pyOpenSci](https://www.pyopensci.org/python-package-guide/package-structure-code/python-package-build-tools.html).
8 |
9 | ## Exercises
10 |
11 | 1. Install uv (e.g. by following the official [installation instructions](https://docs.astral.sh/uv/getting-started/installation/)).
12 | 2. We will now create a new empty Python project with uv. Create a folder with the name you want your project to get, e.g. `packaging-tutorial-{myname}`, navigate your terminal to that directory, and run `uv init --package`.
13 | 3. Where do you think you should put the source code for your package?
14 | 4. Open the pyproject.toml file. Can you find where the following pieces of information are stored?
15 |
16 | * The author's name
17 | * The author's e-mail
18 | * The version number
19 | * The minimal Python version required to use your package
20 |
21 | 5. What are the dependencies of your package?
22 | 6. Discuss with your neighbour: what do you think the purpose of the `__init__.py`-file in the `src/{PACKAGE_NAME}`-directory is?
23 |
24 |
25 |
26 | ## Reflection
27 |
28 | We have now learned how we can create a new package with uv.
29 | By default, uv uses a [src-layout](https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/) for libraries, which has some benefits compared to using a flat layout (see e.g. [this blog post](https://hynek.me/articles/testing-packaging/#src), [this blog post](https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure) or [this discussion](https://github.com/pypa/packaging.python.org/pull/1150)).
30 | Additionally, uv stores the package metadata in an easy-to-read pyproject.toml file that follows the [official PEP-621 specification](https://packaging.python.org/en/latest/specifications/pyproject-toml/).
31 |
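For reference, here is a sketch of what `uv init --package` typically generates (the exact contents, Python version and build backend depend on your uv version):

```toml
# A sketch only -- the name, e-mail, Python version and build backend
# below are placeholders and will differ on your machine.
[project]
name = "packaging-tutorial-myname"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
authors = [
    { name = "Your Name", email = "you@example.com" },
]
requires-python = ">=3.12"
dependencies = []

[project.scripts]
packaging-tutorial-myname = "packaging_tutorial_myname:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
```
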
32 | We have also looked at the `__init__.py`-file.
33 | This is a special file that tells Python that the `src/{PACKAGE_NAME}`-directory should be importable.
34 | Strictly speaking, it turns the directory into a [package](https://docs.python.org/3/tutorial/modules.html#packages), but that package is confusingly a bit different from what people normally talk about when they talk about packaging.
35 | The file does several things, but you can conceptualize it as doing two: it makes it possible to do "[relative imports](https://docs.python.org/3/reference/import.html#package-relative-imports)", and it's the file that gets imported when you write `import {PACKAGE_NAME}`.
36 |
37 | ## Next up
38 | [Dependencies and creating your first package](./04-dependencies.md)
39 |
--------------------------------------------------------------------------------
/pdm/tutorial/04-managing-your-code/13-using-git.md:
--------------------------------------------------------------------------------
1 | # Using Git
2 |
3 | This part functions as an extremely quick introduction to creating a [Git](https://git-scm.com/) repository.
4 |
5 | We've all been there: we have made some changes to some files, but we don't know if we prefer them to the previous versions. So what do we do?
6 | We save the file as `{filename}_v2.py`, `{filename}_v2-2.py`, `{filename}_v3.py`, `{filename}_final.py` and `{filename}_old.py.bak`.
7 | But what's the difference between `{filename}_v2-2.py` and `{filename}_v3.py`? And do we really need all this clutter in our codebase?
8 | The answer is of course no -- and the solution: a version control system (VCS)/source control management tool (SCM-tool) named Git.
9 |
10 | Simply put, Git is a tool that makes it easy to have multiple versions of the same code.
11 | It facilitates sharing and makes it easy (or, let's face it, easier) to merge changes from multiple parties.
12 |
13 | We won't cover much here, just the absolute basics (and maybe not even that).
14 | A complete introduction to Git is, unfortunately, outside the scope of this tutorial.
15 | For a more thorough introduction to git, we refer to the excellent [Introduction to Sprinting](https://github.com/chalmerlowe/intro_to_sprinting/) tutorial by Chalmer Lowe, the [How Git Works](https://wizardzines.com/zines/git/) Wizard Zine by Julia Evans, and the [Pro Git book](https://git-scm.com/book/en/v2).
16 |
17 | ## Exercises
18 |
19 | 1. Run `git init` in your project root to initialize a Git repository.
20 | 1. *Stage* all files in your project root and all subdirectories by running `git add .`
21 | 1. *Commit* the changes by running `git commit -m "Initial commit"`
22 | 1. Check that the files are committed by running `git log` followed by `git status`. What do you think these commands do?
23 |
24 |
25 |
26 | ## Reflection
27 |
28 | You have now initialized a Git repository and committed some files.
29 | Essentially, this means that Git knows about the files and knows to track their changes.
30 | If you make a change to one or more files, then you can *commit* them, telling Git "this is the latest version of these files".
31 | Together with the commit, you write a message, which typically is a short description of what you did, and maybe why you did it.
32 | Then, if you ever want to see old versions of your files, you can write `git log` to go through all changes and `git checkout` to "go back in time" and see the state of your repository at a given commit.
33 |
34 | ## Exercises
35 |
36 | 1. Modify a file (e.g. by adding a comment). Run `git status` again. What do you see?
37 | 1. What do you think the command `git diff` does? Run it and see if you were right (PS. If you're stuck in the diff view, then you can exit it by pressing q).
38 | 1. *Stage* the file you changed by running `git add {path/to/file}`.
39 | 1. *Commit* the file by running `git commit` and write a descriptive message.
40 | 1. Have a look at the updated git log by running `git log`.
41 |
42 |
43 |
44 | ## Reflection
45 | You have now updated your file and committed the changes.
46 | In doing so, you first had to stage the file, which is akin to telling Git "These are the changes I want to store."
47 | Then, you committed the changes, essentially telling Git: "Store a snapshot of the files as they were when I staged them."
48 |
49 | ## Next up
50 | [Using pre-commit hooks to automatically check the code before committing](./14-pre-commit.md)
51 |
--------------------------------------------------------------------------------
/uv/tutorial/04-managing-your-code/13-using-git.md:
--------------------------------------------------------------------------------
1 | # Using Git
2 |
3 | This part functions as an extremely quick introduction to creating a [Git](https://git-scm.com/) repository.
4 |
5 | We've all been there: we have made some changes to some files, but we don't know if we prefer them to the previous versions. So what do we do?
6 | We save the file as `{filename}_v2.py`, `{filename}_v2-2.py`, `{filename}_v3.py`, `{filename}_final.py` and `{filename}_old.py.bak`.
7 | But what's the difference between `{filename}_v2-2.py` and `{filename}_v3.py`? And do we really need all this clutter in our codebase?
8 | The answer is of course no -- and the solution: a version control system (VCS)/source control management tool (SCM-tool) named Git.
9 |
10 | Simply put, Git is a tool that makes it easy to have multiple versions of the same code.
11 | It facilitates sharing and makes it easy (or, let's face it, easier) to merge changes from multiple parties.
12 |
13 | We won't cover much here, just the absolute basics (and maybe not even that).
14 | A complete introduction to Git is, unfortunately, outside the scope of this tutorial.
15 | For a more thorough introduction to git, we refer to the excellent [Introduction to Sprinting](https://github.com/chalmerlowe/intro_to_sprinting/) tutorial by Chalmer Lowe, the [How Git Works](https://wizardzines.com/zines/git/) Wizard Zine by Julia Evans, and the [Pro Git book](https://git-scm.com/book/en/v2).
16 |
17 | ## Exercises
18 |
19 | 1. Run `git init` in your project root to initialize a Git repository.
20 | 1. *Stage* all files in your project root and all subdirectories by running `git add .`
21 | 1. *Commit* the changes by running `git commit -m "Initial commit"`
22 | 1. Check that the files are committed by running `git log` followed by `git status`. What do you think these commands do?
23 |
24 |
25 |
26 | ## Reflection
27 |
28 | You have now initialized a Git repository and committed some files.
29 | Essentially, this means that Git knows about the files and knows to track their changes.
30 | If you make a change to one or more files, then you can *commit* them, telling Git "this is the latest version of these files".
31 | Together with the commit, you write a message, which typically is a short description of what you did, and maybe why you did it.
32 | Then, if you ever want to see old versions of your files, you can write `git log` to go through all changes and `git checkout` to "go back in time" and see the state of your repository at a given commit.
33 |
34 | ## Exercises
35 |
36 | 1. Modify a file (e.g. by adding a comment). Run `git status` again. What do you see?
37 | 1. What do you think the command `git diff` does? Run it and see if you were right (PS. If you're stuck in the diff view, then you can exit it by pressing q).
38 | 1. *Stage* the file you changed by running `git add {path/to/file}`.
39 | 1. *Commit* the file by running `git commit` and write a descriptive message.
40 | 1. Have a look at the updated git log by running `git log`.
41 |
42 |
43 |
44 | ## Reflection
45 | You have now updated your file and committed the changes.
46 | In doing so, you first had to stage the file, which is akin to telling Git "These are the changes I want to store."
47 | Then, you committed the changes, essentially telling Git: "Store a snapshot of the files as they were when I staged them."
48 |
49 | ## Next up
50 | [Using pre-commit hooks to automatically check the code before committing](./14-pre-commit.md)
51 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/src/packaging_tutorial/validated_pycon.py:
--------------------------------------------------------------------------------
1 | from collections.abc import Generator, Iterable
2 | from datetime import UTC, datetime, timedelta
3 | from functools import lru_cache
4 | from typing import Any, Self
5 |
6 | import httpx
7 | import pydantic
8 |
9 | try:
10 | from rich import print
11 | except ImportError:
12 | pass
13 |
14 |
15 | class Session(pydantic.BaseModel):
16 | room: str | None = None
17 | rooms: list[str] | None = None
18 | start: datetime | None = None
19 | end: datetime | None = None
20 | duration: timedelta | None = None
21 | kind: str | None = None
22 | section: str | None = None
23 | conf_key: int | None = None
24 | list_render: bool | None = None
25 | license: str | None = None
26 | tags: str | None = None
27 | released: bool | None = None
28 | contact: list[str] | None = None
29 | name: str | None = None
30 | description: str | None = None
31 | authors: list[str] | None = None
32 | speakers: list[dict[str, int | str]] | None = None
33 | conf_url: str | None = None
34 | cancelled: bool | None = None
35 |
36 | @classmethod
37 | def from_record(cls, record: dict[str, Any]) -> Self:
38 | return cls(
39 | room=record.get("room"),
40 | rooms=record.get("rooms"),
41 | start=(datetime.fromisoformat(t) if (t := record.get("start")) else None),
42 | end=(datetime.fromisoformat(t) if (t := record.get("end")) else None),
43 | duration=(timedelta(minutes=dt) if (dt := record.get("duration")) else None),
44 | kind=record.get("kind"),
45 | section=record.get("section"),
46 | conf_key=record.get("conf_key"),
47 | list_render=record.get("list_render"),
48 | license=record.get("license"),
49 | tags=record.get("tags"),
50 | released=record.get("released"),
51 | contact=record.get("contact"),
52 | name=record.get("name"),
53 | description=record.get("description"),
54 | authors=record.get("authors"),
55 | speakers=record.get("speakers"),
56 | conf_url=record.get("conf_url"),
57 | cancelled=record.get("cancelled"),
58 | )
59 |
60 |
61 | HTTP_CLIENT = httpx.Client()
62 |
63 |
64 | @lru_cache()
65 | def get_programme() -> dict:
66 | response = HTTP_CLIENT.get("https://us.pycon.org/2025/schedule/conference.json")
67 | response.raise_for_status()
68 | return response.json()
69 |
70 |
71 | def get_sessions() -> Generator[Session, None, None]:
72 | programme = get_programme()
73 | return (Session.from_record(record) for record in programme["schedule"])
74 |
75 |
76 | def get_future_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
77 | if now is None:
78 | now = datetime.now(tz=UTC)
79 | return (session for session in sessions if session.start and session.start > now)
80 |
81 |
82 | def get_immediate_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
83 | sessions = list(get_future_sessions(sessions, now=now))
84 | start_time = min(session.start for session in sessions if session.start)
85 | return (session for session in sessions if session.start == start_time)
86 |
87 |
88 | def main() -> None:
89 | sessions = get_sessions()
90 |
91 | print("Next sessions:")
92 | print("==============\n")
93 | for session in get_immediate_sessions(sessions):
94 |         print(session.name, session.start)
95 |
96 |
97 | if __name__ == "__main__":
98 | main()
99 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/src/packaging_tutorial/validated_pycon.py:
--------------------------------------------------------------------------------
1 | from collections.abc import Generator, Iterable
2 | from datetime import UTC, datetime, timedelta
3 | from functools import lru_cache
4 | from typing import Any, Self
5 |
6 | import httpx
7 | import pydantic
8 |
9 | try:
10 | from rich import print
11 | except ImportError:
12 | pass
13 |
14 |
15 | class Session(pydantic.BaseModel):
16 | room: str | None = None
17 | rooms: list[str] | None = None
18 | start: datetime | None = None
19 | end: datetime | None = None
20 | duration: timedelta | None = None
21 | kind: str | None = None
22 | section: str | None = None
23 | conf_key: int | None = None
24 | list_render: bool | None = None
25 | license: str | None = None
26 | tags: str | None = None
27 | released: bool | None = None
28 | contact: list[str] | None = None
29 | name: str | None = None
30 | description: str | None = None
31 | authors: list[str] | None = None
32 | speakers: list[dict[str, int | str]] | None = None
33 | conf_url: str | None = None
34 | cancelled: bool | None = None
35 |
36 | @classmethod
37 | def from_record(cls, record: dict[str, Any]) -> Self:
38 | return cls(
39 | room=record.get("room"),
40 | rooms=record.get("rooms"),
41 | start=(datetime.fromisoformat(t) if (t := record.get("start")) else None),
42 | end=(datetime.fromisoformat(t) if (t := record.get("end")) else None),
43 | duration=(timedelta(minutes=dt) if (dt := record.get("duration")) else None),
44 | kind=record.get("kind"),
45 | section=record.get("section"),
46 | conf_key=record.get("conf_key"),
47 | list_render=record.get("list_render"),
48 | license=record.get("license"),
49 | tags=record.get("tags"),
50 | released=record.get("released"),
51 | contact=record.get("contact"),
52 | name=record.get("name"),
53 | description=record.get("description"),
54 | authors=record.get("authors"),
55 | speakers=record.get("speakers"),
56 | conf_url=record.get("conf_url"),
57 | cancelled=record.get("cancelled"),
58 | )
59 |
60 |
61 | HTTP_CLIENT = httpx.Client()
62 |
63 |
64 | @lru_cache()
65 | def get_programme() -> dict:
66 | response = HTTP_CLIENT.get("https://us.pycon.org/2025/schedule/conference.json")
67 | response.raise_for_status()
68 | return response.json()
69 |
70 |
71 | def get_sessions() -> Generator[Session, None, None]:
72 | programme = get_programme()
73 | return (Session.from_record(record) for record in programme["schedule"])
74 |
75 |
76 | def get_future_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
77 | if now is None:
78 | now = datetime.now(tz=UTC)
79 | return (session for session in sessions if session.start and session.start > now)
80 |
81 |
82 | def get_immediate_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
83 | sessions = list(get_future_sessions(sessions, now=now))
84 | start_time = min(session.start for session in sessions if session.start)
85 | return (session for session in sessions if session.start == start_time)
86 |
87 |
88 | def main() -> None:
89 | sessions = get_sessions()
90 |
91 | print("Next sessions:")
92 | print("==============\n")
93 | for session in get_immediate_sessions(sessions):
94 |         print(session.name, session.start)
95 |
96 |
97 | if __name__ == "__main__":
98 | main()
99 |
--------------------------------------------------------------------------------
/pdm/tutorial/01-introduction/03-our-first-package.md:
--------------------------------------------------------------------------------
1 | # Creating an installable Python package
2 | We are now ready to make our first package.
3 | Luckily, we don't need to do this manually -- many amazing people in the Python community spend their time maintaining tools that do the heavy lifting for us.
4 | We will use a modern, feature-packed and standards-compliant packaging tool: [PDM](https://pdm-project.org/latest/), but there are many other great tools too, like [uv](https://docs.astral.sh/uv/), [Hatch](https://hatch.pypa.io/latest/), [Poetry](https://python-poetry.org/) and [Setuptools](https://setuptools.pypa.io/en/latest/).
5 |
6 | What we learn in this tutorial will easily transfer across different packaging tools, as they all solve more or less the same problem: how to distribute Python packages.
7 | If you want a comparison of the different tools for packaging in Python, then we recommend [this excellent writeup by pyOpenSci](https://www.pyopensci.org/python-package-guide/package-structure-code/python-package-build-tools.html).
8 |
9 | ## Exercises
10 |
11 | 1. Install PDM (e.g. by following the official [installation instructions](https://pdm-project.org/en/latest/#recommended-installation-method)).
12 | 2. We will now create a new empty Python project with PDM. Create the folder you want to store the project in, run `pdm init`, and answer all the questions the setup wizard asks. Answer yes to the "Is the project a library that is installable?" question. If there are any other questions that you do not understand yet, then just press enter to select the default option.
13 | 3. In what sub-directory should you put the source code for your package?
14 | 4. Open the pyproject.toml file. You should find the name you gave the package on line 2. Can you find where the following pieces of information are stored?
15 |
16 | * The author's name
17 | * The author's e-mail
18 | * The version number
19 | * The minimal Python version required to use your package
20 |
21 | 5. What are the dependencies of your package?
22 | 6. Discuss with your neighbour: what do you think the purpose of the `__init__.py`-file in the `src/{PACKAGE_NAME}`-directory is?
23 |
24 |
25 |
26 | ## Reflection
27 |
28 | We have now learned how we can create a new package with PDM.
29 | By default, PDM uses a [src-layout](https://packaging.python.org/en/latest/discussions/src-layout-vs-flat-layout/), which has some benefits compared to using a flat layout (see e.g. [this blog post](https://hynek.me/articles/testing-packaging/#src), [this blog post](https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure) or [this discussion](https://github.com/pypa/packaging.python.org/pull/1150)).
30 | Additionally, PDM stores the package metadata in an easy-to-read pyproject.toml file that follows the [official PEP-621 specification](https://packaging.python.org/en/latest/specifications/pyproject-toml/).
31 |
32 | We have also looked at the `__init__.py`-file.
33 | This is a special file that tells Python that the `src/{PACKAGE_NAME}`-directory should be importable.
34 | Strictly speaking, it turns the directory into a [package](https://docs.python.org/3/tutorial/modules.html#packages), but that package is confusingly a bit different from what people normally talk about when they talk about packaging.
35 | The file does several things, but you can conceptualize it as doing two: it makes it possible to do "[relative imports](https://docs.python.org/3/reference/import.html#package-relative-imports)", and it's the file that gets imported when you write `import {PACKAGE_NAME}`.
36 |
37 | ## Next up
38 | [Dependencies and creating your first package](./04-dependencies.md)
39 |
--------------------------------------------------------------------------------
/pdm/tutorial/02-more-about-dependencies/07-package-extras.md:
--------------------------------------------------------------------------------
1 | # Package extras
2 | Package extras, also called optional dependencies and previously often called dependency groups (though dependency groups now refers to a specific concept we'll discuss later), are a way to group the dependencies of our project.
3 | They are commonly used if you have optional features that not every user of your package needs.
4 | For example, if you use the artificial intelligence library *Huggingface Transformers*, then you might also want to use the parts of the Transformers library that require *PyTorch*.
5 | One way to install both Transformers and a compatible version of PyTorch simultaneously is by installing `transformers[torch]`, which specifies that you want the `torch`-package extras when you install Transformers.
6 | While package extras are meant to add optional dependencies to your projects, they can also be used to define development dependencies, e.g. running the unit tests or creating the documentation.
7 |
8 | ## Exercises
9 | 1. Add `rich` as an optional dependency in the group `pretty` by running `pdm add rich --group pretty`. Can you find the `rich`-dependency in the `pyproject.toml`-file?
10 | 2. Try to run the `get_charlas.py` script again. Did anything change?
11 | 3. Sync the virtual environment again with `pdm install --without pretty`.
12 | 4. Run the `get_charlas.py` script again. Did you still get the nice colours?
13 | 5. Install the package with the `pretty`-extras again by writing `pdm install --with pretty`.
14 | 6. Run the `get_charlas.py` script again to check that you got the nice colours back.
15 | 7. When do you think it makes sense to use optional dependencies?
16 |
17 |
18 |
19 | ## Reflection
20 |
21 | We have now added `rich` as an optional dependency to our package.
22 | This means that when we later publish our code to PyPI, people can install it both with and without the `rich`-dependency.
23 | This is very useful when our code has some functionality that not all users of our code are interested in.
24 | By making the dependencies for that code optional, we reduce the number of indirect dependencies for users of our package, which again makes their lives easier.
25 |
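For reference, after exercise 1 the optional dependencies in your `pyproject.toml` should look something like this (the exact version specifier will vary):

```toml
[project.optional-dependencies]
pretty = [
    "rich>=14.0.0",
]
```
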
26 | A prime example is if we make a CLI application.
27 | Then, we may want to include a specific version of [Typer](https://typer.tiangolo.com) to get a nice CLI interface.
28 | However, if we pin the Typer version, and someone else wants to use our code as a component in their own CLI, then they would be locked to the same Typer version as we are using.
29 | Instead, if we make the CLI-dependencies into optional dependencies, then this is unproblematic.
30 | Users who want to install the CLI can install `{PACKAGE_NAME}[cli]`, while those who only want the library features can simply install `{PACKAGE_NAME}`.
31 |
32 | Moreover, since you'd like your app to work with all optional dependencies during development, PDM will install them by default.
33 | Specifically, PDM will install the optional dependencies of **your** project, but not the optional dependencies of your dependencies (unless you specify the extras by running `pdm add {library}[{optional_group}]`).
34 | However, you can still set it up so PDM skips optional groups of your library as well, and it's a good idea to run tests both with and without optional dependencies (and if you have CI-pipelines, then they should run without optional groups as well).
35 |
36 | So we have optional dependencies, but what about PyTest? Should that be a project dependency?
37 | No, we use it for development, so it should probably be listed as such, which is why the next part of this tutorial is about development dependencies.
38 |
39 | ## Next up
40 | [Development dependencies](./08-dev-dependencies.md)
41 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/src/packaging_tutorial/pycon.py:
--------------------------------------------------------------------------------
1 | import dataclasses
2 | import itertools
3 | from collections.abc import Generator, Iterable
4 | from datetime import UTC, datetime, timedelta
5 | from functools import lru_cache
6 | from typing import Any, Self
7 |
8 | import httpx
9 |
10 | try:
11 | from rich import print
12 | except ImportError:
13 | pass
14 |
15 | @dataclasses.dataclass
16 | class Session:
17 | room: str | None = None
18 | rooms: list[str] | None = None
19 | start: datetime | None = None
20 | end: datetime | None = None
21 | duration: timedelta | None = None
22 | kind: str | None = None
23 | section: str | None = None
24 | conf_key: int | None = None
25 | list_render: bool | None = None
26 | license: str | None = None
27 | tags: str | None = None
28 | released: bool | None = None
29 | contact: list[str] | None = None
30 | name: str | None = None
31 | description: str | None = None
32 | authors: list[str] | None = None
33 | speakers: list[dict[str, int | str]] | None = None
34 | conf_url: str | None = None
35 | cancelled: bool | None = None
36 |
37 | @classmethod
38 | def from_record(cls, record: dict[str, Any]) -> Self:
39 | return cls(
40 | room=record.get("room"),
41 | rooms=record.get("rooms"),
42 | start=(datetime.fromisoformat(t) if (t := record.get("start")) else None),
43 | end=(datetime.fromisoformat(t) if (t := record.get("end")) else None),
44 | duration=(timedelta(minutes=dt) if (dt := record.get("duration")) else None),
45 | kind=record.get("kind"),
46 | section=record.get("section"),
47 | conf_key=record.get("conf_key"),
48 | list_render=record.get("list_render"),
49 | license=record.get("license"),
50 | tags=record.get("tags"),
51 | released=record.get("released"),
52 | contact=record.get("contact"),
53 | name=record.get("name"),
54 | description=record.get("description"),
55 | authors=record.get("authors"),
56 | speakers=record.get("speakers"),
57 | conf_url=record.get("conf_url"),
58 | cancelled=record.get("cancelled"),
59 | )
60 |
61 |
62 | HTTP_CLIENT = httpx.Client()
63 |
64 |
65 | @lru_cache()
66 | def get_programme() -> dict:
67 | response = HTTP_CLIENT.get("https://us.pycon.org/2025/schedule/conference.json")
68 | response.raise_for_status()
69 | return response.json()
70 |
71 |
72 | def get_sessions() -> Generator[Session, None, None]:
73 | programme = get_programme()
74 | return (Session.from_record(record) for record in programme["schedule"])
75 |
76 |
77 | def get_future_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
78 | if now is None:
79 | now = datetime.now(tz=UTC)
80 | return (session for session in sessions if session.start and session.start > now)
81 |
82 |
83 | def get_immediate_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
84 | sessions = list(get_future_sessions(sessions, now=now))
85 | start_time = min(session.start for session in sessions if session.start)
86 | return (session for session in sessions if session.start == start_time)
87 |
88 |
89 | def main() -> None:
90 | sessions = get_sessions()
91 |
92 | print("Next sessions:")
93 | print("==============\n")
94 | for session in get_immediate_sessions(sessions):
95 | print(session.name, session.start)
96 |
97 |
98 | if __name__ == "__main__":
99 | main()
100 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/src/packaging_tutorial/pycon.py:
--------------------------------------------------------------------------------
1 | import dataclasses
2 | import itertools
3 | from collections.abc import Generator, Iterable
4 | from datetime import UTC, datetime, timedelta
5 | from functools import lru_cache
6 | from typing import Any, Self
7 |
8 | import httpx
9 |
10 | try:
11 | from rich import print
12 | except ImportError:
13 | pass
14 |
15 | @dataclasses.dataclass
16 | class Session:
17 | room: str | None = None
18 | rooms: list[str] | None = None
19 | start: datetime | None = None
20 | end: datetime | None = None
21 | duration: timedelta | None = None
22 | kind: str | None = None
23 | section: str | None = None
24 | conf_key: int | None = None
25 | list_render: bool | None = None
26 | license: str | None = None
27 | tags: str | None = None
28 | released: bool | None = None
29 | contact: list[str] | None = None
30 | name: str | None = None
31 | description: str | None = None
32 | authors: list[str] | None = None
33 | speakers: list[dict[str, int | str]] | None = None
34 | conf_url: str | None = None
35 | cancelled: bool | None = None
36 |
37 | @classmethod
38 | def from_record(cls, record: dict[str, Any]) -> Self:
39 | return cls(
40 | room=record.get("room"),
41 | rooms=record.get("rooms"),
42 | start=(datetime.fromisoformat(t) if (t := record.get("start")) else None),
43 | end=(datetime.fromisoformat(t) if (t := record.get("end")) else None),
44 | duration=(timedelta(minutes=dt) if (dt := record.get("duration")) else None),
45 | kind=record.get("kind"),
46 | section=record.get("section"),
47 | conf_key=record.get("conf_key"),
48 | list_render=record.get("list_render"),
49 | license=record.get("license"),
50 | tags=record.get("tags"),
51 | released=record.get("released"),
52 | contact=record.get("contact"),
53 | name=record.get("name"),
54 | description=record.get("description"),
55 | authors=record.get("authors"),
56 | speakers=record.get("speakers"),
57 | conf_url=record.get("conf_url"),
58 | cancelled=record.get("cancelled"),
59 | )
60 |
61 |
62 | HTTP_CLIENT = httpx.Client()
63 |
64 |
65 | @lru_cache()
66 | def get_programme() -> dict:
67 | response = HTTP_CLIENT.get("https://us.pycon.org/2025/schedule/conference.json")
68 | response.raise_for_status()
69 | return response.json()
70 |
71 |
72 | def get_sessions() -> Generator[Session, None, None]:
73 | programme = get_programme()
74 | return (Session.from_record(record) for record in programme["schedule"])
75 |
76 |
77 | def get_future_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
78 | if now is None:
79 | now = datetime.now(tz=UTC)
80 | return (session for session in sessions if session.start and session.start > now)
81 |
82 |
83 | def get_immediate_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
84 | sessions = list(get_future_sessions(sessions, now=now))
85 | start_time = min(session.start for session in sessions if session.start)
86 | return (session for session in sessions if session.start == start_time)
87 |
88 |
89 | def main() -> None:
90 | sessions = get_sessions()
91 |
92 | print("Next sessions:")
93 | print("==============\n")
94 | for session in get_immediate_sessions(sessions):
95 | print(session.name, session.start)
96 |
97 |
98 | if __name__ == "__main__":
99 | main()
100 |
--------------------------------------------------------------------------------
/uv/tutorial/02-more-about-dependencies/07-package-extras.md:
--------------------------------------------------------------------------------
1 | # Package extras
2 | Package extras, also called optional dependencies and previously often called dependency groups (though dependency groups now refers to a specific concept we'll discuss later), are a way to group the dependencies of our project.
3 | They are commonly used if you have optional features that not every user of your package needs.
4 | For example, if you use the artificial intelligence library *Huggingface Transformers*, then you might also want to use the parts of the Transformers library that require *PyTorch*.
5 | One way to install both Transformers and a compatible version of PyTorch simultaneously is by installing `transformers[torch]`, which specifies that you want the `torch`-package extras when you install Transformers.
6 | While package extras are meant to add optional dependencies to your projects, they can also be used to define development dependencies, e.g. running the unit tests or creating the documentation.
7 |
8 | ## Exercises
9 | 1. Add `rich` as an optional dependency in the group `pretty` by running `uv add rich --optional pretty`. Can you find the `rich`-dependency in the `pyproject.toml`-file?
10 | 2. Try to run the `get_charlas.py` script again. Did anything change?
11 | 3. Sync the uv environment again with `uv sync`.
12 | 4. Run the `get_charlas.py` script again. Did you still get the nice colours?
13 | 5. Install the package with the `pretty`-extras again by writing `uv sync --extra pretty`.
14 | 6. Run the `get_charlas.py` script again to check that you got the nice colours back.
15 | 7. When do you think it makes sense to use optional dependencies?
16 |
17 |
18 |
19 | ## Reflection
20 | We have now added `rich` as an optional dependency to our package.
21 | This means that when we later publish our code to PyPI, people can install it both with and without the `rich`-dependency.
22 | This is very useful when our code has some functionality that not all users of our code are interested in.
23 | By making the dependencies for that code optional, we reduce the number of indirect dependencies for users of our package, which again makes their lives easier.
24 |
25 | A prime example is if we make a CLI application.
26 | Then, we may want to include a specific version of [Typer](https://typer.tiangolo.com) to get a nice CLI interface.
27 | However, if we pin the Typer version, and someone else wants to use our code as a component in their own CLI, then they would be locked to the same Typer version as we are using.
28 | Instead, if we make the CLI-dependencies into optional dependencies, then this is unproblematic.
29 | Users who want to install the CLI can install `{PACKAGE_NAME}[cli]`, while those who only want the library features can simply install `{PACKAGE_NAME}`.
30 |
31 | Moreover, since optional dependencies aren't installed by default for users, uv will also not include them when you sync your project.
32 | However, if you add an optional dependency to a group with `uv add`, then uv will include that group in your environment until the next time you run `uv sync` (unless you also pass the `--extra` argument to explicitly include optional groups).
33 | An important thing to realise here is that there is therefore a big difference between running `uv add {package} --optional {group-name}` and manually updating the `pyproject.toml`-file and running `uv sync`.
34 | In fact, `uv add {package} --optional {group-name}` is equivalent to manually updating the `pyproject.toml`-file and running `uv sync --extra {group-name}`.
35 |
36 | So we have optional dependencies, but what about PyTest? Should that be a project dependency?
37 | No, we use it for development, so it should probably be listed as such, which is why the next part of this tutorial is about development dependencies.
38 |
39 | ## Next up
40 | [Development dependencies](./08-dev-dependencies.md)
41 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/#use-with-ide
110 | .pdm.toml
111 | .pdm-python
112 | .pdm-build/
113 |
114 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
115 | __pypackages__/
116 |
117 | # Celery stuff
118 | celerybeat-schedule
119 | celerybeat.pid
120 |
121 | # SageMath parsed files
122 | *.sage.py
123 |
124 | # Environments
125 | .env
126 | .venv
127 | env/
128 | venv/
129 | ENV/
130 | env.bak/
131 | venv.bak/
132 |
133 | # Spyder project settings
134 | .spyderproject
135 | .spyproject
136 |
137 | # Rope project settings
138 | .ropeproject
139 |
140 | # mkdocs documentation
141 | /site
142 |
143 | # mypy
144 | .mypy_cache/
145 | .dmypy.json
146 | dmypy.json
147 |
148 | # Pyre type checker
149 | .pyre/
150 |
151 | # pytype static type analyzer
152 | .pytype/
153 |
154 | # Cython debug symbols
155 | cython_debug/
156 |
157 | # PyCharm
158 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
159 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
160 | # and can be added to the global gitignore or merged into this file. For a more nuclear
161 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
162 | #.idea/
163 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | share/python-wheels/
24 | *.egg-info/
25 | .installed.cfg
26 | *.egg
27 | MANIFEST
28 |
29 | # PyInstaller
30 | # Usually these files are written by a python script from a template
31 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
32 | *.manifest
33 | *.spec
34 |
35 | # Installer logs
36 | pip-log.txt
37 | pip-delete-this-directory.txt
38 |
39 | # Unit test / coverage reports
40 | htmlcov/
41 | .tox/
42 | .nox/
43 | .coverage
44 | .coverage.*
45 | .cache
46 | nosetests.xml
47 | coverage.xml
48 | *.cover
49 | *.py,cover
50 | .hypothesis/
51 | .pytest_cache/
52 | cover/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | .pybuilder/
76 | target/
77 |
78 | # Jupyter Notebook
79 | .ipynb_checkpoints
80 |
81 | # IPython
82 | profile_default/
83 | ipython_config.py
84 |
85 | # pyenv
86 | # For a library or package, you might want to ignore these files since the code is
87 | # intended to run in multiple environments; otherwise, check them in:
88 | # .python-version
89 |
90 | # pipenv
91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
94 | # install all needed dependencies.
95 | #Pipfile.lock
96 |
97 | # poetry
98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
99 | # This is especially recommended for binary packages to ensure reproducibility, and is more
100 | # commonly ignored for libraries.
101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
102 | #poetry.lock
103 |
104 | # pdm
105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
106 | #pdm.lock
107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
108 | # in version control.
109 | # https://pdm.fming.dev/#use-with-ide
110 | .pdm.toml
111 | .pdm-python
112 | .pdm-build/
113 |
114 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
115 | __pypackages__/
116 |
117 | # Celery stuff
118 | celerybeat-schedule
119 | celerybeat.pid
120 |
121 | # SageMath parsed files
122 | *.sage.py
123 |
124 | # Environments
125 | .env
126 | .venv
127 | env/
128 | venv/
129 | ENV/
130 | env.bak/
131 | venv.bak/
132 |
133 | # Spyder project settings
134 | .spyderproject
135 | .spyproject
136 |
137 | # Rope project settings
138 | .ropeproject
139 |
140 | # mkdocs documentation
141 | /site
142 |
143 | # mypy
144 | .mypy_cache/
145 | .dmypy.json
146 | dmypy.json
147 |
148 | # Pyre type checker
149 | .pyre/
150 |
151 | # pytype static type analyzer
152 | .pytype/
153 |
154 | # Cython debug symbols
155 | cython_debug/
156 |
157 | # PyCharm
158 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
159 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
160 | # and can be added to the global gitignore or merged into this file. For a more nuclear
161 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
162 | #.idea/
163 |
--------------------------------------------------------------------------------
/uv/tutorial/02-more-about-dependencies/06-unit-tests.md:
--------------------------------------------------------------------------------
1 | # Unit tests
2 |
3 | An essential part of writing robust software is testing it, preferably with an automated test suite.
4 | The typical way of doing this in Python these days is with [pytest](https://docs.pytest.org/en/stable/).
5 | With pytest, we can create simple tests that use assertions to check if the code behaves as it should.
6 | For example, we created a test for the API wrapper we are making.
7 | It checks that `get_future_sessions` gets all future sessions:
8 |
9 | ```python
10 | import packaging_tutorial.pycon as pycon
11 | from datetime import datetime, timedelta
12 |
13 |
14 | def test_get_future_sessions() -> None:
15 | now = datetime.fromisoformat("20250512T12:00Z")
16 | past_sessions = [
17 | pycon.Session(start=now - timedelta(hours=2)),
18 | pycon.Session(start=now - timedelta(hours=1)),
19 | pycon.Session(start=now - timedelta(hours=22)),
20 | ]
21 | future_sessions = [
22 | pycon.Session(start=now + timedelta(hours=1)),
23 | pycon.Session(start=now + timedelta(hours=21)),
24 | ]
25 | sessions = past_sessions + future_sessions
26 |
27 | assert list(pycon.get_future_sessions(sessions, now=now)) == future_sessions
28 | ```
29 |
30 | We will not go into details on testing in this tutorial, but if you're interested, then Ned Batchelder (who, among other things, maintains [coverage.py](https://coverage.readthedocs.io/)) has given a presentation about testing with the [materials online](https://nedbatchelder.com/text/test3.html), and we've heard good things about Brian Okken's [tutorials](https://courses.pythontest.com).
31 | Still, we will need some tests to get through the entire packaging process.
32 |
33 | ## Exercises
34 | 1. Make a folder, `tests/`, in your project root (i.e. not in the `src` or `src/{package_name}` folder) and add an empty `__init__.py`-file to it. Discuss with your neighbour: What do you think this folder should contain? And why did we not include it in the `src/{package_name}` directory?
35 | 2. Add a `test_pycon.py`-file to the `tests/`-directory and populate it with the test above (you may need to update the `import packaging_tutorial.pycon as pycon`-import).
36 | 3. Add pytest as a dependency to your project using whichever method you prefer.
37 | 4. Run pytest by running `uv run pytest`.
38 |
39 |
40 |
41 | ## Reflection
42 | We have now added our first test to our package.
43 | These are added outside the package, and there are several reasons for this.
44 | First, we don't want the tests to be importable from our module.
45 | They are not runnable -- they just contain simple assertions and require pytest to run, so it would be kind of strange to make the tests importable (but it's not unheard of to bundle them with the code either, especially if you're using other test frameworks).
46 |
47 | Also, the tests may require dependencies that the package doesn't need, such as pytest.
48 | We need pytest to run the tests, but not to use our library.
49 | Isn't it then kind of odd to add pytest as a dependency, when it's only required for the tests?
50 | It is, and instead of using a normal dependency, we should use *optional dependencies* or *development dependencies*.
51 |
52 | Finally, notice that we also included a `__init__.py`-file. While not strictly necessary, it can be nice.
53 | The reason is that it enables *relative imports* within the tests directory (despite relative imports' [weaknesses](https://softwareengineering.stackexchange.com/a/159505)).
54 | If you, for example, have some test utilities that you import with `import tests.utils`, then you are dependent on having the correct working directory when you invoke pytest.
55 | If you instead use `from .utils import ...`, then it will work irrespective of where you run the tests from.
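
As a hypothetical sketch (assuming a `tests/utils.py` module with a `make_session` helper, which is not part of our project):

```python
# tests/test_pycon.py -- hypothetical sketch

# Absolute import: only resolves when pytest is run with the project
# root on sys.path (i.e. from the right working directory):
# import tests.utils

# Relative import: resolved via the tests package itself (this is what
# tests/__init__.py enables), regardless of the working directory:
from .utils import make_session
```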
56 |
57 | ## Next up
58 | [Optional dependencies and package extras](./07-package-extras.md)
59 |
--------------------------------------------------------------------------------
/pdm/code/packaging-tutorial/src/packaging_tutorial/validated_pycon_with_config.py:
--------------------------------------------------------------------------------
1 | from collections.abc import Generator, Iterable
2 | from datetime import UTC, datetime, timedelta
3 | from functools import lru_cache
4 | from typing import Any, Self
5 |
6 | import httpx
7 | import pydantic
8 | import pydantic_settings
9 |
10 | try:
11 | from rich import print
12 | except ImportError:
13 | pass
14 |
15 |
16 | class Config(pydantic_settings.BaseSettings):
17 | model_config = pydantic.ConfigDict(frozen=True)
18 |
19 | USER_AGENT: str = "python-packaging-tutorial"
20 |
21 |     def make_client(self) -> httpx.Client:
22 | return httpx.Client(headers={"User-Agent": self.USER_AGENT})
23 |
24 |
25 | class Session(pydantic.BaseModel):
26 | room: str | None = None
27 | rooms: list[str] | None = None
28 | start: datetime | None = None
29 | end: datetime | None = None
30 | duration: timedelta | None = None
31 | kind: str | None = None
32 | section: str | None = None
33 | conf_key: int | None = None
34 | list_render: bool | None = None
35 | license: str | None = None
36 | tags: str | None = None
37 | released: bool | None = None
38 | contact: list[str] | None = None
39 | name: str | None = None
40 | description: str | None = None
41 | authors: list[str] | None = None
42 | speakers: list[dict[str, int | str]] | None = None
43 | conf_url: str | None = None
44 | cancelled: bool | None = None
45 |
46 | @classmethod
47 | def from_record(cls, record: dict[str, Any]) -> Self:
48 | return cls(
49 | room=record.get("room"),
50 | rooms=record.get("rooms"),
51 | start=(datetime.fromisoformat(t) if (t := record.get("start")) else None),
52 | end=(datetime.fromisoformat(t) if (t := record.get("end")) else None),
53 | duration=(timedelta(minutes=dt) if (dt := record.get("duration")) else None),
54 | kind=record.get("kind"),
55 | section=record.get("section"),
56 | conf_key=record.get("conf_key"),
57 | list_render=record.get("list_render"),
58 | license=record.get("license"),
59 | tags=record.get("tags"),
60 | released=record.get("released"),
61 | contact=record.get("contact"),
62 | name=record.get("name"),
63 | description=record.get("description"),
64 | authors=record.get("authors"),
65 | speakers=record.get("speakers"),
66 | conf_url=record.get("conf_url"),
67 | cancelled=record.get("cancelled"),
68 | )
69 |
70 |
71 | HTTP_CLIENT = httpx.Client()
72 |
73 |
74 | @lru_cache()  # caching is safe here: Config is frozen, hence hashable
75 | def get_programme(config: Config = Config()) -> dict:
76 | response = config.make_client().get("https://us.pycon.org/2025/schedule/conference.json")
77 | response.raise_for_status()
78 | return response.json()
79 |
80 |
81 | def get_sessions(config: Config = Config()) -> Generator[Session, None, None]:
82 | programme = get_programme(config=config)
83 | return (Session.from_record(record) for record in programme["schedule"])
84 |
85 |
86 | def get_future_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
87 | if now is None:
88 | now = datetime.now(tz=UTC)
89 | return (session for session in sessions if session.start and session.start > now)
90 |
91 |
92 | def get_immediate_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
93 | sessions = list(get_future_sessions(sessions, now=now))
94 |     start_time = min(session.start for session in sessions if session.start)  # assumes at least one future session
95 | return (session for session in sessions if session.start == start_time)
96 |
97 |
98 | def main() -> None:
99 | sessions = get_sessions()
100 |
101 | print("Next sessions:")
102 | print("==============\n")
103 | for session in get_immediate_sessions(sessions):
104 | print(session.name, session.start)
105 |
106 |
107 | if __name__ == "__main__":
108 | main()
109 |
--------------------------------------------------------------------------------
/pdm/tutorial/02-more-about-dependencies/08-dev-dependencies.md:
--------------------------------------------------------------------------------
1 | # Development dependencies
2 | Development dependency groups are a recently standardised feature for dependencies that shouldn't be published to PyPI as optional dependencies.
3 | When you run `pdm install`, all development dependencies are installed by default, but you can choose to skip them (e.g. with `pdm install --prod`).
4 |
5 | **Note:** the standardisation of dependency groups happened in [PEP 735](https://peps.python.org/pep-0735/), which was accepted during the autumn of 2024.
6 | If you haven't updated PDM since then, then you can still use development dependencies, but they will not be specified according to the standard.
7 |
8 | ## Exercises
9 | 1. Use `pdm add --dev pytest` to add pytest as a development dependency.
10 | 2. Open the `pyproject.toml`-file and locate the lines regarding the `dev` optional dependencies and the development dependencies. Why is one of the sections in the `pyproject.toml` named `[dependency-groups]` while the other parts start with `[project]` or `[project.optional-dependencies]`?
11 | 3. Delete the lines related to the `dev` optional dependencies, but NOT the development dependency group, and run `pdm install` again. Do you think pytest is still installed? Why or why not?
12 | 4. Discuss with your neighbour: When do you think you should use development dependencies, and when do you think you should use optional dependencies?
13 |
14 |
15 |
16 | ## Reflection
17 | We have now added development dependencies to our code.
18 | We need these dependencies for development, but we don't want consumers of our code to install them.
19 | We store these dependencies in the `dependency-groups`-table to ensure that PDM knows to install the dependencies during development, while people installing our code will not accidentally end up with all of our development dependencies.
20 | After all, our development dependencies may clash with their dependencies.
21 | For example, we may want a particular version of pytest for our code, but we don't want libraries that depend on our code to have to use the same version of pytest.
22 |
23 | This table summarises when to use the different types of dependencies.
24 |
25 | | Dependency type | Placement | Description |
26 | |-----------------|----------------------------------------------|---------------------------------------------|
27 | | Mandatory | `project.dependencies` | Minimal dependencies for using your code |
28 | | Optional | `project.optional-dependencies.{group_name}` | Special features/platforms |
29 | | Development | `dependency-groups.{group_name}` | Developing the code, but not for running it |
30 |
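31 | As a sketch of how the three tables look in `pyproject.toml` (the names and versions here are placeholders -- yours will depend on your project):
32 |
33 | ```toml
34 | [project]
35 | dependencies = ["httpx"]
36 |
37 | [project.optional-dependencies]
38 | pretty = ["rich"]
39 |
40 | [dependency-groups]
41 | dev = ["pytest"]
42 | ```
43 |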
31 | ## Exercises
32 | 1. Use `pdm add --dev pytest-cov pytest-randomly` to add [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) and [pytest-randomly](https://pypi.org/project/pytest-randomly/).
33 | 2. Run pytest again, did you notice any difference?
34 | 3. Run pytest again, this time with the command `pdm run pytest --cov src`. What happened?
35 | 4. What do you think happens if you run `pdm run pytest --randomly-seed=last`?
36 |
37 |
38 |
39 | ## Reflection
40 | Now that we know how to specify development dependencies, we have added a couple of extra, very useful pytest plugins as development dependencies: `pytest-cov` and `pytest-randomly`.
41 | Pytest-randomly does two extremely useful things for testing: it sets random seeds for `random`, `numpy`, `faker` and more, ensuring reproducible tests, and it randomises the order of our tests to help us detect state leakage in our test suite.
42 | Pytest-cov, on the other hand, is a thin wrapper around [`Coverage.py`](https://coverage.readthedocs.io/), which records which lines of code in a given directory run during our tests -- extremely useful for finding parts of our code that haven't been tested.
43 |
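44 | To see the kind of bug test randomisation catches, here is a sketch of a hypothetical test module (not part of our package):
45 |
46 | ```python
47 | # test_state_leakage.py (hypothetical)
48 | registry: list[str] = []
49 |
50 |
51 | def test_register() -> None:
52 |     registry.append("pycon")
53 |     assert registry == ["pycon"]
54 |
55 |
56 | def test_registry_empty() -> None:
57 |     # Passes only when it runs before test_register; pytest-randomly will
58 |     # eventually order it after, exposing the hidden dependency between tests.
59 |     assert registry == []
60 | ```
61 |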
44 | ## Next up
45 | [Static code checkers](./09-static-code-checkers.md)
46 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/src/packaging_tutorial/validated_pycon_with_config.py:
--------------------------------------------------------------------------------
1 | from collections.abc import Generator, Iterable
2 | from datetime import UTC, datetime, timedelta
3 | from functools import lru_cache
4 | from typing import Any, Self
5 |
6 | import httpx
7 | import pydantic
8 | import pydantic_settings
9 |
10 | try:
11 | from rich import print
12 | except ImportError:
13 |     pass  # rich is an optional dependency; fall back to the builtin print
14 |
15 |
16 | class Config(pydantic_settings.BaseSettings):
17 | model_config = pydantic.ConfigDict(frozen=True)
18 |
19 | USER_AGENT: str = "python-packaging-tutorial"
20 |
21 |     def make_client(self) -> httpx.Client:
22 | return httpx.Client(headers={"User-Agent": self.USER_AGENT})
23 |
24 |
25 | class Session(pydantic.BaseModel):
26 | room: str | None = None
27 | rooms: list[str] | None = None
28 | start: datetime | None = None
29 | end: datetime | None = None
30 | duration: timedelta | None = None
31 | kind: str | None = None
32 | section: str | None = None
33 | conf_key: int | None = None
34 | list_render: bool | None = None
35 | license: str | None = None
36 | tags: str | None = None
37 | released: bool | None = None
38 | contact: list[str] | None = None
39 | name: str | None = None
40 | description: str | None = None
41 | authors: list[str] | None = None
42 | speakers: list[dict[str, int | str]] | None = None
43 | conf_url: str | None = None
44 | cancelled: bool | None = None
45 |
46 | @classmethod
47 | def from_record(cls, record: dict[str, Any]) -> Self:
48 | return cls(
49 | room=record.get("room"),
50 | rooms=record.get("rooms"),
51 | start=(datetime.fromisoformat(t) if (t := record.get("start")) else None),
52 | end=(datetime.fromisoformat(t) if (t := record.get("end")) else None),
53 | duration=(timedelta(minutes=dt) if (dt := record.get("duration")) else None),
54 | kind=record.get("kind"),
55 | section=record.get("section"),
56 | conf_key=record.get("conf_key"),
57 | list_render=record.get("list_render"),
58 | license=record.get("license"),
59 | tags=record.get("tags"),
60 | released=record.get("released"),
61 | contact=record.get("contact"),
62 | name=record.get("name"),
63 | description=record.get("description"),
64 | authors=record.get("authors"),
65 | speakers=record.get("speakers"),
66 | conf_url=record.get("conf_url"),
67 | cancelled=record.get("cancelled"),
68 | )
69 |
70 |
71 | HTTP_CLIENT = httpx.Client()
72 |
73 |
74 | @lru_cache()  # caching is safe here: Config is frozen, hence hashable
75 | def get_programme(config: Config = Config()) -> dict:
76 | response = config.make_client().get("https://us.pycon.org/2025/schedule/conference.json")
77 | response.raise_for_status()
78 | return response.json()
79 |
80 |
81 | def get_sessions(config: Config = Config()) -> Generator[Session, None, None]:
82 | programme = get_programme(config=config)
83 | return (Session.from_record(record) for record in programme["schedule"])
84 |
85 |
86 | def get_future_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
87 | if now is None:
88 | now = datetime.now(tz=UTC)
89 | return (session for session in sessions if session.start and session.start > now)
90 |
91 |
92 | def get_immediate_sessions(sessions: Iterable[Session], now: datetime | None = None) -> Generator[Session, None, None]:
93 | sessions = list(get_future_sessions(sessions, now=now))
94 |     start_time = min(session.start for session in sessions if session.start)  # assumes at least one future session
95 | return (session for session in sessions if session.start == start_time)
96 |
97 |
98 | def main() -> None:
99 | sessions = get_sessions()
100 |
101 | print("Next sessions:")
102 | print("==============\n")
103 | for session in get_immediate_sessions(sessions):
104 | print(session.name, session.start)
105 |
106 |
107 | if __name__ == "__main__":
108 | main()
109 |
--------------------------------------------------------------------------------
/uv/tutorial/04-managing-your-code/14-pre-commit.md:
--------------------------------------------------------------------------------
1 | # Pre-commit hooks
2 |
3 | While linters are great for making a codebase easier to read, they are also easy to forget.
4 | This is why you often find commits with the message "Run ruff", "Run black", "Run flake8", etc.
5 | Instead, we should have Git automatically lint our code before every commit.
6 | If there is anything the linters want to change, then the commit should not succeed, and we should be asked to update the code.
7 |
8 | Luckily, this is possible, and the main tool for running commands that verify your code before committing is called "pre-commit".
9 |
10 | ## Exercises
11 |
12 | 1. Check if you have pre-commit installed on your system by running `pre-commit -h`. If you get the help message, then pre-commit is installed. If not, then you can install it by running `uv tool install pre-commit` (this will install pre-commit as an application globally on your system, not just for this project).
13 | 1. Run the command `pre-commit sample-config` to get a sample pre-commit file. Copy the output into a new file in your project root that you name `.pre-commit-config.yaml`.
14 | 1. Install your pre-commit hooks by running `pre-commit install`
15 | 1. Run your pre-commit hooks by running `pre-commit run --all`. What do you think just happened (look at your `.pre-commit-config.yaml`-file)?
16 | 1. If any files were changed (you can e.g. check this by running `git status`), then stage and commit them with the commit message "Run pre-commit"
17 | 1. Open a file, and add a new line with a comment and a blank space at the end of it. Save the file, stage it, and try to commit it. What happened?
18 | 1. Stage the file again and commit it. Discuss with your neighbour: what do you think just happened?
19 |
20 |
21 |
22 | ## Reflection
23 | Pre-commit is an amazing tool.
24 | In the `.pre-commit-config.yaml`-file, you specify a set of *hooks* that run automatically every time you try to commit a change.
25 | By default, these hooks run only on the staged files, and any unstaged files will be ignored.
26 | If the hooks exit successfully, then you can commit the changes.
27 | If not, then the hooks will try to fix the errors automatically, updating all your staged files (but they will not stage the changes).
28 | You can then inspect the changes, stage them and try to commit again.
29 |
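30 | For reference, the output of `pre-commit sample-config` looks roughly like this (the exact `rev` will depend on when you run it):
31 |
32 | ```yaml
33 | repos:
34 |   - repo: https://github.com/pre-commit/pre-commit-hooks
35 |     rev: v5.0.0
36 |     hooks:
37 |       - id: trailing-whitespace
38 |       - id: end-of-file-fixer
39 |       - id: check-yaml
40 |       - id: check-added-large-files
41 | ```
42 |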
30 | Also, we used `uv tool` to install `pre-commit`, which is a good way to install Python applications globally on our systems (i.e. it's not for project-specific tools).
31 | If you're familiar with apt or homebrew, then you can think of `uv tool` kind of like those programs, except `uv tool` is for applications written in Python.
32 |
33 | ## Exercises
34 |
35 | 1. Why do you think pre-commit doesn't automatically stage fixes as well?
36 | 2. Open the `.pre-commit-config.yaml`-file and add the following lines:
37 | ```yaml
38 | - repo: https://github.com/astral-sh/ruff-pre-commit
39 | rev: v0.11.6
40 | hooks:
41 | - id: ruff
42 | args: [ --fix ]
43 | - id: ruff-format
44 | ```
45 | 3. Stage and commit the updated `.pre-commit-config.yaml`-file.
46 | 4. Discuss with your neighbour: What do you think the lines you added in the previous exercise do?
47 | 5. Add the following code to `src/{package_name}/pycon.py` (be exact and keep the weird formatting):
48 | ```python
49 |
50 | def print_future_sessions() -> None:
51 | sessions = get_sessions(
52 | )
53 | for session in sessions:
54 | print( session, session.start )
55 | ```
56 | 6. Stage and commit the updated `pycon.py`-file. Did it pass the pre-commit hooks?
57 | 7. Stage the fixed file and commit again.
58 |
59 |
60 |
61 |
62 | ## Reflection
63 |
64 | We have now added pre-commit configuration so that every time we commit a file, it's passed through Ruff (which we learned about [earlier](../02-more-about-dependencies/09-static-code-checkers.md)), and only if it passes the linter checks do we allow the commit to go through.
65 |
66 | ## Next up
67 | [Sharing our code on GitHub](./15-using-github.md)
68 |
--------------------------------------------------------------------------------
/pdm/tutorial/02-more-about-dependencies/06-unit-tests.md:
--------------------------------------------------------------------------------
1 | # Unit tests
2 |
3 | An essential part of writing robust software is testing it, preferably with an automated test suite.
4 | The typical way of doing this in Python these days is with [pytest](https://docs.pytest.org/en/stable/).
5 | With pytest, we can create simple tests that use assertions to check if the code behaves as it should.
6 | For example, we created a test for the API wrapper that you can copy.
7 | It checks that `get_future_sessions` gets all future sessions:
8 |
9 | ```python
10 | import packaging_tutorial.pycon as pycon
11 | from datetime import datetime, timedelta
12 |
13 |
14 | def test_get_future_sessions() -> None:
15 | now = datetime.fromisoformat("20250512T12:00Z")
16 | past_sessions = [
17 | pycon.Session(start=now - timedelta(hours=2)),
18 | pycon.Session(start=now - timedelta(hours=1)),
19 | pycon.Session(start=now - timedelta(hours=22)),
20 | ]
21 | future_sessions = [
22 | pycon.Session(start=now + timedelta(hours=1)),
23 | pycon.Session(start=now + timedelta(hours=21)),
24 | ]
25 | sessions = past_sessions + future_sessions
26 |
27 | assert list(pycon.get_future_sessions(sessions, now=now)) == future_sessions
28 | ```
29 |
30 | We will not go into details on testing in this tutorial, but if you're interested, Ned Batchelder (who, among other things, maintains [coverage.py](https://coverage.readthedocs.io/)) has a presentation about testing with the [materials available online](https://nedbatchelder.com/text/test3.html), and we've heard good things about Brian Okken's [tutorials](https://courses.pythontest.com).
31 | Still, we will need some tests to get through the entire packaging process.
32 |
33 | ## Exercises
34 | 1. There should be a folder `tests/` in your project root (i.e. not in the `src` or `src/{package_name}` folder) that contains an `__init__.py`-file. Discuss with your neighbour: What do you think this folder should contain? And why is it not inside the `src/{package_name}` directory?
35 | 2. Add a `test_pycon.py`-file to the `tests/`-directory and populate it with the test above (you may need to update the import).
36 | 3. Add pytest as a dependency to your project using whichever method you prefer.
37 | 4. Run pytest by running `pdm run pytest`.
38 |
39 |
40 |
41 | ## Reflection
42 | We have now added our first test to our package.
43 | To do that, we first created a `tests`-folder with an `__init__.py`-file in it.
44 | By including this file, we make it possible to do relative imports in the tests (we're not doing that here, but it's good practice to include it).
45 | Also, we kept the tests separated from the rest of our source code for several reasons.
46 | First, we don't want the tests to be importable from our module.
47 | They are not runnable on their own -- they just contain simple assertions and need pytest to run them -- so it would be kind of strange to make the tests importable (though it's not unheard of to bundle them with the code, especially if you're using other test frameworks).
48 |
49 | Also, the tests may require dependencies that the package doesn't need, such as pytest.
50 | We need pytest to run the tests, but not to use our library.
51 | Isn't it then kind of odd to add pytest as a dependency, when it's only required for the tests?
52 | It is, and instead of using a normal dependency, we should use *optional dependencies* or *development dependencies*.
53 |
54 | Finally, a bit more on that `__init__.py`-file: while not strictly necessary, it can be nice.
55 | The reason is that it enables *relative imports* within the tests directory, which are often preferable there (despite relative imports' [weaknesses](https://softwareengineering.stackexchange.com/a/159505)).
56 | If you, for example, have some test utilities that you import through `import tests.utils`, then you depend on having the correct working directory when you invoke pytest.
57 | If you instead use `from .utils`, then it will work irrespective of where you run the tests from.
58 |
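59 | As a minimal sketch -- assuming a hypothetical `tests/utils.py` helper module -- the difference looks like this:
60 |
61 | ```python
62 | # tests/utils.py (hypothetical helper module)
63 | from packaging_tutorial.pycon import Session
64 |
65 |
66 | def make_session(**kwargs) -> Session:
67 |     return Session(**kwargs)
68 | ```
69 |
70 | In `tests/test_pycon.py`, `from .utils import make_session` then works wherever you invoke pytest from, while `import tests.utils` only works when the project root is on `sys.path`.
71 |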
59 | ## Next up
60 | [Optional dependencies and package extras](./07-package-extras.md)
61 |
--------------------------------------------------------------------------------
/uv/tutorial/02-more-about-dependencies/08-dev-dependencies.md:
--------------------------------------------------------------------------------
1 | # Development dependencies
2 | Development dependency groups are a recently standardised feature for dependencies that shouldn't be published to PyPI as optional dependencies.
3 | When you run `uv sync`, all development dependencies are installed by default, but you can choose to skip them (e.g. with `uv sync --no-dev`).
4 |
5 | **Note:** the standardisation of dependency groups happened in [PEP 735](https://peps.python.org/pep-0735/), which was accepted during the autumn of 2024.
6 | If you haven't updated uv since then, then you can still use development dependencies, but they will not be specified according to the standard.
7 |
8 | ## Exercises
9 | 1. Use `uv add --dev pytest` to add pytest as a development dependency.
10 | 2. Open the `pyproject.toml`-file and locate the lines regarding the normal dependencies, optional dependencies and the development dependencies. Why is one of the sections in the `pyproject.toml` named `[dependency-groups]` while the other parts start with `[project]` or `[project.optional-dependencies]`?
11 | 3. Delete the lines related to `pytest` under the normal dependencies, but NOT the development dependency group, and run `uv sync --extra pretty` (or, if you skipped the package extras exercises, `uv sync`) again. Do you think pytest is still installed? Why or why not?
12 | 4. Discuss with your neighbour: When do you think you should use development dependencies, and when do you think you should use optional dependencies?
13 |
14 |
15 |
16 | ## Reflection
17 | We have now added development dependencies to our code.
18 | We need these dependencies for development, but we don't want consumers of our code to install them.
19 | We store these dependencies in the `dependency-groups`-table to ensure that uv knows to install the dependencies during development, while people installing our code will not accidentally end up with all of our development dependencies.
20 | After all, our development dependencies may clash with their dependencies.
21 | For example, we may want a particular version of pytest for our code, but we don't want libraries that depend on our code to have to use the same version of pytest.
22 |
23 | This table summarises when to use the different types of dependencies.
24 |
25 | | Dependency type | Placement | Description |
26 | |-----------------|----------------------------------------------|---------------------------------------------|
27 | | Mandatory | `project.dependencies` | Minimal dependencies for using your code |
28 | | Optional | `project.optional-dependencies.{group_name}` | Special features/platforms |
29 | | Development | `dependency-groups.{group_name}` | Developing the code, but not for running it |
30 |
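31 | As a sketch of how the three tables look in `pyproject.toml` (the names and versions here are placeholders -- yours will depend on your project):
32 |
33 | ```toml
34 | [project]
35 | dependencies = ["httpx"]
36 |
37 | [project.optional-dependencies]
38 | pretty = ["rich"]
39 |
40 | [dependency-groups]
41 | dev = ["pytest"]
42 | ```
43 |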
31 | ## Next up
32 | [Static code checkers](./09-static-code-checkers.md)
33 |
34 | ## Bonus (useful dev dependencies): Exercises
35 | 1. Use `uv add --dev pytest-cov pytest-randomly` to add [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) and [pytest-randomly](https://pypi.org/project/pytest-randomly/).
36 | 2. Run pytest again, did you notice any difference?
37 | 3. Run pytest again, this time with the command `uv run pytest --cov src`. What happened?
38 | 4. What do you think happens if you run `uv run pytest --randomly-seed=last`?
39 |
40 |
41 |
42 | ## Bonus (useful dev dependencies): Reflection
43 | Now that we know how to specify development dependencies, we have added a couple of extra, very useful pytest plugins as development dependencies: `pytest-cov` and `pytest-randomly`.
44 | Pytest-randomly does two extremely useful things for testing: it sets random seeds for `random`, `numpy`, `faker` and more, ensuring reproducible tests, and it randomises the order of our tests to help us detect state leakage in our test suite.
45 | Pytest-cov, on the other hand, is a thin wrapper around [`Coverage.py`](https://coverage.readthedocs.io/), which records which lines of code in a given directory run during our tests -- extremely useful for finding parts of our code that haven't been tested.
46 |
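47 | To see the kind of bug test randomisation catches, here is a sketch of a hypothetical test module (not part of our package):
48 |
49 | ```python
50 | # test_state_leakage.py (hypothetical)
51 | registry: list[str] = []
52 |
53 |
54 | def test_register() -> None:
55 |     registry.append("pycon")
56 |     assert registry == ["pycon"]
57 |
58 |
59 | def test_registry_empty() -> None:
60 |     # Passes only when it runs before test_register; pytest-randomly will
61 |     # eventually order it after, exposing the hidden dependency between tests.
62 |     assert registry == []
63 | ```
64 |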
--------------------------------------------------------------------------------
/pdm/tutorial/04-managing-your-code/14-pre-commit.md:
--------------------------------------------------------------------------------
1 | # Pre-commit hooks
2 |
3 | While linters are great for making a codebase easier to read, they are also easy to forget.
4 | This is why you often find commits with the message "Run ruff", "Run black", "Run flake8", etc.
5 | Instead, we should have Git automatically lint our code before every commit.
6 | If there is anything the linters want to change, then the commit should not succeed, and we should be asked to update the code.
7 |
8 | Luckily, this is possible, and the main tool for running commands that verify your code before committing is called "pre-commit".
9 |
10 | ## Exercises
11 |
12 | 1. Check if you have pre-commit installed on your system by running `pre-commit -h`. If you get the help message, then pre-commit is installed. If not, then you can install it by running `pipx install pre-commit` (if you don't already have pipx installed, see the installation instructions [here](https://pipx.pypa.io/stable/)).
13 | 1. Run the command `pre-commit sample-config` to get a sample pre-commit file. Copy the output into a new file in your project root that you name `.pre-commit-config.yaml`.
14 | 1. Install your pre-commit hooks by running `pre-commit install`
15 | 1. Run your pre-commit hooks by running `pre-commit run --all`. What do you think just happened (look at your `.pre-commit-config.yaml`-file)?
16 | 1. If any files were changed (you can e.g. check this by running `git status`), then stage and commit them with the commit message "Run pre-commit"
17 | 1. Open a file, and add a new line with a comment and a blank space at the end of it. Save the file, stage it, and try to commit it. What happened?
18 | 1. Stage the file again and commit it. Discuss with your neighbour: what do you think just happened?
19 |
20 |
21 |
22 | ## Reflection
23 | Pre-commit is an amazing tool.
24 | In the `.pre-commit-config.yaml`-file, you specify a set of *hooks* that run automatically every time you try to commit a change.
25 | By default, these hooks run only on the staged files, and any unstaged files will be ignored.
26 | If the hooks exit successfully, then you can commit the changes.
27 | If not, then the hooks will try to fix the errors automatically, updating all your staged files (but they will not stage the changes).
28 | You can then inspect the changes, stage them and try to commit again.
29 |
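30 | For reference, the output of `pre-commit sample-config` looks roughly like this (the exact `rev` will depend on when you run it):
31 |
32 | ```yaml
33 | repos:
34 |   - repo: https://github.com/pre-commit/pre-commit-hooks
35 |     rev: v5.0.0
36 |     hooks:
37 |       - id: trailing-whitespace
38 |       - id: end-of-file-fixer
39 |       - id: check-yaml
40 |       - id: check-added-large-files
41 | ```
42 |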
30 | Also, we used pipx to install `pre-commit`, which is the preferred way to install Python applications globally on our systems (i.e. it's not for project-specific tools).
31 | If you're familiar with apt or homebrew, then you can think of pipx kind of like those programs, except pipx is for applications written in Python.
32 | Pipx is an open source tool for installing Python executables.
33 | It's similar to pip, but it's specific to executables and makes sure each application you install runs in an isolated environment.
34 |
35 | ## Exercises
36 |
37 | 1. Why do you think pre-commit doesn't automatically stage fixes as well?
38 | 2. Open the `.pre-commit-config.yaml`-file and add the following lines:
39 | ```yaml
40 | - repo: https://github.com/astral-sh/ruff-pre-commit
41 | rev: v0.11.6
42 | hooks:
43 | - id: ruff
44 | args: [ --fix ]
45 | - id: ruff-format
46 | ```
47 | 3. Stage and commit the updated `.pre-commit-config.yaml`-file.
48 | 4. Discuss with your neighbour: What do you think the lines you added in the previous exercise do?
49 | 5. Add the following code to `src/{package_name}/pycon.py` (be exact and keep the weird formatting):
50 | ```python
51 |
52 | def print_future_sessions() -> None:
53 | sessions = get_sessions(
54 | )
55 | for session in sessions:
56 | print( session, session.start )
57 | ```
58 | 6. Stage and commit the updated `pycon.py`-file. Did it pass the pre-commit hooks?
59 | 7. Stage the fixed file and commit again.
60 |
61 |
62 |
63 |
64 | ## Reflection
65 |
66 | We have now added pre-commit configuration so that every time we commit a file, it's passed through Ruff (which we learned about [earlier](../02-more-about-dependencies/09-static-code-checkers.md)), and only if it passes the linter checks do we allow the commit to go through.
67 |
68 | ## Next up
69 | [Sharing our code on GitHub](./15-using-github.md)
70 |
--------------------------------------------------------------------------------
/pdm/tutorial/03-building-and-publishing-packages/11-building-wheels.md:
--------------------------------------------------------------------------------
1 | # Building your package
2 |
3 | Recall that [at the beginning of this tutorial](../01-introduction/02-pip-internals.md) we looked at how pip installs packages: by downloading and extracting a wheel.
4 | In reality, pip can work with both wheels and source distributions, and we'll look a bit closer at these now that we are ready to build our package.
5 |
6 | In other words, we are ready to gather all the parts we need to import our library and place them into a nice bundle that other people can install and use.
7 | You could do this yourself, but realistically, what you want to do is to use a *build backend* that can do this automatically.
8 | A build backend is essentially a piece of code that follows a specification called [PEP-517](https://peps.python.org/pep-0517/), which specifies how tools should convert code into installable bundles.
9 |
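10 | To make this concrete, here is a minimal sketch of the two mandatory PEP 517 hooks a build backend exposes (the signatures come from the PEP; the bodies are placeholders):
11 |
12 | ```python
13 | # Sketch of the mandatory PEP 517 hooks (bodies are placeholders).
14 | def build_wheel(wheel_directory, config_settings=None, metadata_directory=None) -> str:
15 |     """Build a wheel into wheel_directory and return its file name."""
16 |     ...
17 |
18 |
19 | def build_sdist(sdist_directory, config_settings=None) -> str:
20 |     """Build a source distribution into sdist_directory and return its file name."""
21 |     ...
22 | ```
23 |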
10 | ## Exercises
11 |
12 | 1. Build your package by running `pdm build`. What do you think happened now?
13 | 2. Look inside the `dist`-directory. How many files do you see there?
14 | 3. Extract the source distribution into a directory called `dist/sdist`, e.g. by running `pdm run python -m tarfile -e dist/{sdist_name}.tar.gz dist/sdist`.
15 | 4. Unzip the wheel file into a directory called `dist/wheel`, e.g. by running `pdm run python -m zipfile -e dist/{wheel_name}.whl dist/wheel`.
16 | 5. Inspect the `dist/sdist` and `dist/wheel` directories. Are there any differences? If so, what are they?
17 | 6. Open the `.venv/lib/python3.12/site-packages` (Linux and Mac) or the `.venv/Lib/site-packages` (Windows) directory. Compare the folder names in this directory to the folder names in the extracted wheel and source distribution. Can you see any similarities?
18 |
19 |
20 |
21 | ## Reflection
22 |
23 | By running `pdm build`, we trigger the build backend to make a *source distribution* and a *wheel* - the file types we looked at earlier in this tutorial.
24 | The difference between wheels and source distributions is subtle, especially when we only have Python code, but it's worth knowing.
25 | A source distribution is the source code, combined with instructions on how to build it, stored in a *tarball* (you can think of a tarball as a zip file; while not 100% accurate, it's close enough) so users can build and install it themselves.
26 | It's essentially a snapshot of our code repository.
27 | Essentially nothing happens when you make the source distribution: the build backend just selects a collection of files and puts them in an archive.
28 | A wheel, on the other hand, is an installable bundle that can be extracted directly into the site-packages directory (a place Python looks for imports).
29 | Since we are working on a pure Python project, the wheel is very similar to the source distribution, but the `pyproject.toml`-file is parsed and the metadata is extracted into the `{package_name}-{version}.dist-info`-folder.
30 |
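31 | Since a wheel is just a zip archive, you can also peek inside it from Python (a sketch -- the file name below is hypothetical, so adjust it to match what's in your `dist/`):
32 |
33 | ```python
34 | # List the files inside the built wheel; wheels are ordinary zip archives.
35 | import zipfile
36 |
37 | with zipfile.ZipFile("dist/packaging_tutorial-0.1.0-py3-none-any.whl") as wheel:
38 |     print("\n".join(wheel.namelist()))
39 | ```
40 |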
31 | Where wheels really shine is for extension modules written in other languages (like C or Rust).
32 | In that case, the source distribution will contain the C or Rust files, and the user would need to compile it themselves, while the wheel would contain pre-compiled binaries for specific platforms.
33 | This means that it's much easier to install wheels than source distributions, and you should always provide them if possible.
34 |
35 | ## Exercises
36 |
37 | 1. Let's change the build backend. Open the `pyproject.toml`-file and modify the `[build-system]` to use `setuptools` instead of `PDM`. Specifically, modify it so it says `requires = ["setuptools"]` and `build-backend = "setuptools.build_meta"` (Note: Setuptools isn't better than PDM, just different. So unless you have some specific advanced requirements, we recommend just sticking to the defaults here, but you should still know how to do it).
38 | 2. Build the project again. What do you think was different this time around?
39 |
40 |
41 |
42 | ## Reflection
43 |
44 | The build backend is the tool that actually creates the wheel and source distribution.
45 | What we just did was switch out the build backend to use [Setuptools](https://setuptools.pypa.io/en/latest/setuptools.html) instead of PDM.
46 | This means that instead of calling `pdm.backend.build_wheel('dist/')` and `pdm.backend.build_sdist('dist/')` to create the wheel and source distribution, PDM called `setuptools.build_meta.build_wheel('dist/')` and `setuptools.build_meta.build_sdist('dist/')` when you ran `pdm build`.
47 | This might seem complicated, but luckily, unless you have good reasons, it doesn't really matter if you choose Hatchling, Setuptools or PDM.
48 | Still, there are differences, but you can switch the build backend when you find a reason to.
49 | Until then, just stick with the defaults.
50 |
51 | ## Next up
52 | [Publishing your package to PyPI](./12-publishing-packages.md)
53 |
--------------------------------------------------------------------------------
/uv/tutorial/03-building-and-publishing-packages/11-building-wheels.md:
--------------------------------------------------------------------------------
1 | # Building your package
2 |
3 | Recall that [at the beginning of this tutorial](../01-introduction/02-pip-internals.md) we looked at how pip installs packages: by downloading and extracting a wheel.
4 | In reality, pip can work with both wheels and source distributions, and we'll look a bit closer at these now that we are ready to build our package.
5 |
6 | In other words, we are ready to gather all the parts we need to import our library and place them into a nice bundle that other people can install and use.
7 | You could do this yourself, but realistically, what you want to do is to use a *build backend* that can do this automatically.
8 | A build backend is essentially a piece of code that follows a specification called [PEP-517](https://peps.python.org/pep-0517/), which specifies how tools should convert code into installable bundles.
9 |
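10 | To make this concrete, here is a minimal sketch of the two mandatory PEP 517 hooks a build backend exposes (the signatures come from the PEP; the bodies are placeholders):
11 |
12 | ```python
13 | # Sketch of the mandatory PEP 517 hooks (bodies are placeholders).
14 | def build_wheel(wheel_directory, config_settings=None, metadata_directory=None) -> str:
15 |     """Build a wheel into wheel_directory and return its file name."""
16 |     ...
17 |
18 |
19 | def build_sdist(sdist_directory, config_settings=None) -> str:
20 |     """Build a source distribution into sdist_directory and return its file name."""
21 |     ...
22 | ```
23 |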
10 | ## Exercises
11 |
12 | 1. Build your package by running `uv build`. What do you think happened now?
13 | 2. Look inside the `dist`-directory. How many files do you see there?
14 | 3. Extract the source distribution into a directory called `dist/sdist`, e.g. by running `uv run python -m tarfile -e dist/{sdist_name}.tar.gz dist/sdist`.
15 | 4. Unzip the wheel file into a directory called `dist/wheel`, e.g. by running `uv run python -m zipfile -e dist/{wheel_name}.whl dist/wheel`.
16 | 5. Inspect the `dist/sdist` and `dist/wheel` directories. Are there any differences? If so, what are they?
17 | 6. Open the `.venv/lib/python3.12/site-packages` (Linux and Mac) or the `.venv/Lib/site-packages` (Windows) directory. Compare the folder names in this directory to the folder names in the extracted wheel and source distribution. Can you see any similarities?
18 |
19 |
20 |
21 | ## Reflection
22 |
23 | By running `uv build`, we trigger the build backend to make a *source distribution* and a *wheel* - the file types we looked at earlier in this tutorial.
24 | The difference between wheels and source distributions is subtle, especially when we only have Python code, but it's worth knowing.
25 | A source distribution is the source code, combined with instructions on how to build it, stored in a *tarball* (you can think of a tarball as a zip file; while not 100% accurate, it's close enough) so users can build and install it themselves.
26 | It's essentially a snapshot of our code repository.
27 | Essentially nothing happens when you make the source distribution: the build backend just selects a collection of files and puts them in an archive.
28 | A wheel, on the other hand, is an installable bundle that can be extracted directly into the site-packages directory (a place Python looks for imports).
29 | Since we are working on a pure Python project, the wheel is very similar to the source distribution, but the `pyproject.toml`-file is parsed and the metadata is extracted into the `{package_name}-{version}.dist-info`-folder.
30 |
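31 | Since a wheel is just a zip archive, you can also peek inside it from Python (a sketch -- the file name below is hypothetical, so adjust it to match what's in your `dist/`):
32 |
33 | ```python
34 | # List the files inside the built wheel; wheels are ordinary zip archives.
35 | import zipfile
36 |
37 | with zipfile.ZipFile("dist/packaging_tutorial-0.1.0-py3-none-any.whl") as wheel:
38 |     print("\n".join(wheel.namelist()))
39 | ```
40 |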
31 | Where wheels really shine is for extension modules written in other languages (like C or Rust).
32 | In that case, the source distribution will contain the C or Rust files, and the user would need to compile it themselves, while the wheel would contain pre-compiled binaries for specific platforms.
33 | This means that it's much easier to install wheels than source distributions, and you should always provide them if possible.
34 |
35 | ## Exercises
36 |
37 | 1. Let's change the build backend. Open the `pyproject.toml`-file and modify the `[build-system]` to use `setuptools` instead of Hatchling. Specifically, modify it so it says `requires = ["setuptools"]` and `build-backend = "setuptools.build_meta"` (Note: Setuptools isn't better than Hatchling, just different. So unless you have some specific advanced requirements, we recommend just sticking to the defaults here, but you should still know how to do it).
38 | 2. Build the project again. What do you think was different this time around?
39 |
40 |
41 |
42 | ## Reflection
43 |
44 | The build backend is the tool that actually creates the wheel and source distribution.
45 | What we just did was switch out the build backend to use [Setuptools](https://setuptools.pypa.io/en/latest/setuptools.html) instead of Hatchling, the build backend made for [Hatch](https://hatch.pypa.io/).
46 | This means that instead of calling `hatchling.build.build_wheel('dist/')` and `hatchling.build.build_sdist('dist/')` to create the wheel and source distribution, uv called `setuptools.build_meta.build_wheel('dist/')` and `setuptools.build_meta.build_sdist('dist/')` when you ran `uv build`.
47 | This might seem complicated, but luckily, unless you have good reasons, it doesn't really matter if you choose Hatchling, Setuptools or PDM as the build backend.
48 | Still, there are differences, but you can switch the build backend when you find a reason to.
49 | Until then, just stick with the defaults.
50 |
51 | ## Next up
52 | [Publishing your package to PyPI](./12-publishing-packages.md)
53 |
--------------------------------------------------------------------------------
/pdm/tutorial/01-introduction/05-virtual-environments.md:
--------------------------------------------------------------------------------
1 | # Virtual environments with PDM
2 |
3 | In the previous section, we saw how two dependencies can be incompatible.
4 | But what if we have two different projects we are developing, and their dependencies are incompatible?
5 | That shouldn't be a problem: each app could work by itself.
6 | However, if we just install the dependencies in the global Python installation, we cannot work on both applications at the same time!
7 |
8 | Virtual environments come to the rescue: virtual environments are more or less separate Python installations isolated from the rest of your system.
9 | These environments are incredibly useful as they let us develop our libraries and applications without worrying about messing up any other applications or libraries we have installed on our computer.
10 |
11 | There are many different tools that can create virtual environments, such as the builtin `venv` module, the slightly more powerful [`virtualenv`](https://virtualenv.pypa.io/en/latest/) module or `conda`.
12 | However, you don't need to worry about that at all, because PDM takes care of creating and managing virtual environments for us!
13 | (It uses `virtualenv` to create the environments by default, but you probably don't need to worry about those details.)
14 |
15 | ## Exercises
16 |
17 | > [!NOTE]
18 | > You may need to replace python here with either py (Windows) or python3 (Linux and Mac)
19 |
20 | > [!NOTE]
21 | > If you're using the integrated terminal in VSCode, PyCharm or another "smart" code editor, then you may not see the system Python executable in exercise 1. and 6.
22 |
23 | 1. Run `python -c "import sys; print(sys.executable)"` in a terminal emulator (cmd or powershell) to see which Python executable your system uses by default (the first result).
24 | 2. Run `pdm run python -c "import sys; print(sys.executable)"` to see which Python executable PDM uses by default. Is there any difference?
25 | 3. Activate the virtual environment by running `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac OS)
26 | 4. Run `python -c "import sys; print(sys.executable)"` to see which Python executable your system now uses by default
27 | 5. Deactivate the virtual environment by running `deactivate`
28 | 6. Run `python -c "import sys; print(sys.executable)"` to see which Python executable your system now uses by default
29 | 7. Discuss with your neighbour: What are the benefits of using a different virtual environment for each project compared to using just one large environment for everything?
30 |
31 |
32 |
33 | ## Reflection
34 |
35 | When we run Python in a virtual environment, we use a Python executable that is different from when we run the system Python executable.
36 | Moreover, if we want to run Python in the virtual environment, then we can either run `pdm run python` or activate it by first running `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac OS) and then `python`.
37 |
38 | ## Specifying virtual environments in your IDE
39 |
40 | Some IDEs (integrated development environments -- a fancy name for a code editor) let you specify a virtual environment for your projects, and by doing so, the IDE can provide better code completion and syntax highlighting.
41 |
42 | ### Specifying virtual environment in VSCode
43 |
44 | To specify the virtual environment in VSCode, press Ctrl+Shift+P (Cmd+Shift+P on Mac) to open the command palette and write `Python: Select Interpreter`.
45 |
46 | ### Specifying virtual environment in PyCharm
47 | Press Ctrl+Alt+S to open settings and select **Project: \<project name\> | Python Interpreter**. Click **Add Interpreter** → **Add Local Interpreter** → **Virtualenv Environment** (PDM uses Virtualenv by default) and select the environment from the list (or browse for the correct environment if it doesn't appear).
48 | For more info see [the official PyCharm documentation](https://www.jetbrains.com/help/pycharm/creating-virtual-environment.html#python_create_virtual_env).
49 |
50 | ## Optional reading: Installing new Python versions
51 | If you work on many Python projects at once, then you may need to install multiple Python versions as well.
52 | Your previous project might have used Python 3.12, but you want to use the latest features of Python 3.13!
53 | Luckily, there's an easy way to get around this problem: the `pdm python install`-command.
54 |
55 | You can, for example, use PDM to install the latest version of Python 3.13 by running `pdm python install 3.13`, which will download the Python 3.13 version of *[Python build standalone](https://github.com/astral-sh/python-build-standalone)* (originally developed by [Gregory Szorc (indygreg)](https://github.com/indygreg), but now maintained by Astral).
56 | Python build standalone is a patched standalone version of CPython that can be copied around on your file system.
57 | It has a couple of [behaviour quirks](https://gregoryszorc.com/docs/python-build-standalone/main/quirks.html) compared to the stock CPython, but likely nothing you will notice in practice.
58 |
59 | ## Next up
60 | [Development dependencies and unit tests](../02-more-about-dependencies/06-unit-tests.md)
61 |
--------------------------------------------------------------------------------
/assets/post_it_pink.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
33 |
--------------------------------------------------------------------------------
/assets/post_it_yellow.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
33 |
--------------------------------------------------------------------------------
/uv/tutorial/04-managing-your-code/16-github-workflows.md:
--------------------------------------------------------------------------------
1 | # Automatic code quality control on GitHub
2 |
3 | While pre-commit hooks are a great way to prevent us from committing code with style errors, they are insufficient to prevent bugs.
4 | We cannot run our full test suite before every commit.
5 | If we did, we could lose all momentum while coding and end up never committing at all.
6 | Instead, we set up a continuous integration (CI) pipeline that runs the test suite and linter checks every time we push code to GitHub.
7 |
8 | The nice thing about these pipelines is that you can set them up so all pull requests must pass the CI pipeline before they can be merged.
9 | There are many systems we can use to set up such pipelines, but perhaps the most common is GitHub's builtin system: GitHub Actions.
10 |
11 | ## Exercises
12 |
13 | 1. Create a directory in your project root named `.github`. Create a new directory called `workflows` inside it, and a file named `ci.yml` inside the `workflows` directory. Copy the below configuration into that file.
14 |
15 | ```yaml
16 | name: Run test
17 |
18 | on: [push, pull_request]
19 |
20 | jobs:
21 | run-tests:
22 | runs-on: ubuntu-latest
23 | steps:
24 | - name: Checkout code
25 | uses: actions/checkout@v4
26 | - name: Setup uv
27 | uses: astral-sh/setup-uv@v6
28 | with:
29 | python-version: 3.13
30 | enable-cache: true
31 | - name: Run unit tests
32 | run: uv run pytest
33 | ```
34 |
35 | 1. What do you think the `on: [push, pull_request]` line does?
36 | 1. What is the difference between `run`-steps and `uses`-steps?
37 | 1. Commit the file and push to GitHub. Open the GitHub repository in your browser and have a look at the `Actions` tab. Do you see a job? If so, press it and have a look at its log. Can you tell what happens?
38 |
39 |
40 |
41 | ## Reflection
42 | We have now set up a very simple CI job where the unit tests are run automatically every time someone pushes to GitHub and every time someone opens or makes changes to a PR.
43 | You can even [create badges](https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/monitoring-workflows/adding-a-workflow-status-badge) that show if the CI-pipeline fails or succeeds so anyone can see the CI-pipeline status from your README file!
44 | However, pay attention to how we set up these pipelines: we use `uses`-steps and `run`-steps.
45 | The `run`-steps simply run terminal commands, while the `uses`-steps are a bit "scarier": They run jobs defined in external GitHub repositories.
46 | We'll look a bit more at that later, but first, let's improve our CI pipeline a bit.
47 |
48 | ## Exercises
49 |
50 | 1. Add the following configuration right under the `run-tests:` line and update the `runs-on: ubuntu-latest` line so it says `runs-on: ${{ matrix.os }}` instead. What do you think this does?
51 | ```yaml
52 | strategy:
53 | matrix:
54 | os: [ubuntu-latest, macos-latest, windows-latest]
55 | ```
56 | 2. Commit and push your changes to see how the new pipeline runs. Are there any changes compared to last time the pipeline ran?
57 | 3. Update the pipeline so it also runs with multiple Python versions (e.g. '3.11', '3.12', and '3.13'). If you're stuck, then you can check out the [GitHub Actions docs](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/running-variations-of-jobs-in-a-workflow). Commit and push the code to see if you succeeded.
58 | 4. Add a new step before the unit tests are run where you run the command `uv run ruff check . && uv run ruff format --check .`. Commit and push the code to see if you succeeded. What do you think this command does?
59 |
60 |
61 |
62 | ## Reflection
63 |
64 | We now have a CI-pipeline that runs our tests on several Python versions AND several operating systems.
65 | This gives us confidence that the code we write is robust and will work for many users.
66 | However, remember what we said earlier?
67 | We're running arbitrary code from other repositories in our CI-CD pipeline!
68 |
69 | ## Exercises
70 |
71 | 1. Visit https://github.com/astral-sh/setup-uv and discuss what you think this repository contains with your neighbour.
72 | 1. Discuss with your neighbour: what do you think can happen if a nefarious actor gains access to the `astral-sh/setup-uv` repository? Is there any way for us to protect ourselves from such a scenario?
73 | 1. Visit https://github.com/astral-sh/setup-uv/tree/v6 and press the commit hash link (it should be a string of seven numbers and letters (a-f) below the \<\> Code button). Copy the hash from the URL you just visited (the URL should have the following format: https://github.com/{user/org}/{repo}/commit/{hash}). It should be a long string of seemingly random letters and numbers (e.g. `6b9c6063abd6010835644d4c2e1bef4cf5cd0fca`).
74 | 1. Update the `uses: astral-sh/setup-uv@v6`-line so it says `uses: astral-sh/setup-uv@6b9c6063abd6010835644d4c2e1bef4cf5cd0fca`. Discuss with your neighbour: Why should you do this with your CI-pipelines? And are there any considerations regarding when you should pin based on the Git hash?
75 |
76 |
77 |
78 |
79 | ## Reflection
80 |
81 | We have now updated our pipeline to use a specific commit of the `setup-uv` step.
82 | This is an immutable reference, so we can feel pretty confident that the contents of the action won't suddenly change and inject malware in our code.
83 | It is, in general, good practice to [pin your CI/CD pipelines based on Git hashes](https://julienrenaux.fr/2019/12/20/github-actions-security-risk/), and you can set up [Dependabot](https://docs.github.com/en/code-security/dependabot) to keep your dependencies up-to-date (but we do not have experience with that).
84 | Still, it might be OK to use non-static references, like tags, on certain workflows where you 100% trust the author (e.g. company-internal workflows).
85 | So, when you use GitHub Actions, you should be aware that [they have been used as threat vectors](https://blog.pypi.org/posts/2024-12-11-ultralytics-attack-analysis/), but in general, the value they provide in letting you run a CI pipeline outweighs the potential risk they bring.
86 | Furthermore, if you want to protect yourself, then you can use [zizmor](https://woodruffw.github.io/zizmor/) to scan your GitHub actions for potential security flaws (which we will look at later).
87 |
88 | ## Next up
89 | [Setting up continuous delivery (CD) to automatically push changes to PyPI](./17-publishing-with-github.md)
90 |
--------------------------------------------------------------------------------
/pdm/tutorial/04-managing-your-code/16-github-workflows.md:
--------------------------------------------------------------------------------
1 | # Automatic code quality control on GitHub
2 |
3 | While pre-commit hooks are a great way to prevent us from committing code with style errors, they are insufficient to prevent bugs.
4 | We cannot run our full test suite before every commit.
5 | If we did, we could lose all momentum while coding and end up never committing at all.
6 | Instead, we set up a continuous integration (CI) pipeline that runs the test suite and linter checks every time we push code to GitHub.
7 |
8 | The nice thing about these pipelines is that you can set them up so all pull requests must pass the CI pipeline before they can be merged.
9 | There are many systems we can use to set up such pipelines, but perhaps the most common is GitHub's builtin system: GitHub Actions.
10 |
11 | ## Exercises
12 |
13 | 1. Create a directory in your project root named `.github`. Create a new directory called `workflows` inside it, and a file named `ci.yml` inside the `workflows` directory. Copy the below configuration into that file.
14 |
15 | ```yaml
16 | name: Run test
17 |
18 | on: [push, pull_request]
19 |
20 | jobs:
21 | run-tests:
22 | runs-on: ubuntu-latest
23 | steps:
24 | - name: Checkout code
25 | uses: actions/checkout@v4
26 | - name: Setup PDM
27 | uses: pdm-project/setup-pdm@v4
28 | with:
29 | python-version: 3.13
30 | cache: true
31 | - name: Install project
32 | run: pdm install
33 | - name: Run unit tests
34 | run: pdm run pytest
35 | ```
36 |
37 | 1. What do you think the `on: [push, pull_request]` line does?
38 | 1. What is the difference between `run`-steps and `uses`-steps?
39 | 1. Commit the file and push to GitHub. Open the GitHub repository in your browser and have a look at the `Actions` tab. Do you see a job? If so, press it and have a look at its log. Can you tell what happens?
40 |
41 |
42 |
43 | ## Reflection
44 | We have now set up a very simple CI job where the unit tests are run automatically every time someone pushes to GitHub and every time someone opens or makes changes to a PR.
45 | You can even [create badges](https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/monitoring-workflows/adding-a-workflow-status-badge) that show if the CI-pipeline fails or succeeds so anyone can see the CI-pipeline status from your README file!
46 | However, pay attention to how we set up these pipelines: we use `uses`-steps and `run`-steps.
47 | The `run`-steps simply run terminal commands, while the `uses`-steps are a bit "scarier": They run jobs defined in external GitHub repositories.
48 | We'll look a bit more at that later, but first, let's improve our CI pipeline a bit.
49 |
50 | ## Exercises
51 |
52 | 1. Add the following configuration right under the `run-tests:` line and update the `runs-on: ubuntu-latest` line so it says `runs-on: ${{ matrix.os }}` instead. What do you think this does?
53 | ```yaml
54 | strategy:
55 | matrix:
56 | os: [ubuntu-latest, macos-latest, windows-latest]
57 | ```
58 | 2. Commit and push your changes to see how the new pipeline runs. Are there any changes compared to last time the pipeline ran?
59 | 3. Update the pipeline so it also runs with multiple Python versions (e.g. '3.11', '3.12', and '3.13'). If you're stuck, then you can check out the [GitHub Actions docs](https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/running-variations-of-jobs-in-a-workflow). Commit and push the code to see if you succeeded.
60 | 4. Add a new step before the unit tests are run where you run the command `pdm run ruff check . && pdm run ruff format --check .`. Commit and push the code to see if you succeeded. What do you think this command does?
61 |
62 |
63 |
64 | ## Reflection
65 |
66 | We now have a CI-pipeline that runs our tests on several Python versions AND several operating systems.
67 | This gives us confidence that the code we write is robust and will work for many users.
68 | However, remember what we said earlier?
69 | We're running arbitrary code from other repositories in our CI-CD pipeline!
70 |
71 | ## Exercises
72 |
73 | 1. Visit https://github.com/pdm-project/setup-pdm and discuss what you think this repository contains with your neighbour.
74 | 1. Discuss with your neighbour: what do you think can happen if a nefarious actor gains access to the `pdm-project/setup-pdm` repository? Is there any way for us to protect ourselves from such a scenario?
75 | 1. Visit https://github.com/pdm-project/setup-pdm/tree/v4 and press the commit hash link (it should be a string of seven numbers and letters (a-f) below the \<\> Code button). Copy the hash from the URL you just visited (the URL should have the following format: https://github.com/{user/org}/{repo}/commit/{hash}). It should be a long string of seemingly random letters and numbers (e.g. `deb8d8a4e2a03aabcef6f2cc981923fc6b29ef99`).
76 | 1. Update the `uses: pdm-project/setup-pdm@v4`-line so it says `uses: pdm-project/setup-pdm@deb8d8a4e2a03aabcef6f2cc981923fc6b29ef99`. Discuss with your neighbour: Why should you do this with your CI-pipelines? And are there any considerations regarding when you should pin based on the Git hash?
77 |
78 |
79 |
80 | ## Reflection
81 |
82 | We have now updated our pipeline to use a specific commit of the `setup-pdm` step.
83 | This is an immutable reference, so we can feel pretty confident that the contents of the action won't suddenly change and inject malware in our code.
84 | It is, in general, good practice to [pin your CI/CD pipelines based on Git hashes](https://julienrenaux.fr/2019/12/20/github-actions-security-risk/), and you can set up [Dependabot](https://docs.github.com/en/code-security/dependabot) to keep your dependencies up-to-date (but we do not have experience with that).
85 | Still, it might be OK to use non-static references, like tags, on certain workflows where you 100% trust the author (e.g. company-internal workflows).
86 | So, when you use GitHub Actions, you should be aware that [they have been used as threat vectors](https://blog.pypi.org/posts/2024-12-11-ultralytics-attack-analysis/), but in general, the value they provide in letting you run a CI pipeline outweighs the potential risk they bring.
87 | Furthermore, if you want to protect yourself, then you can use [zizmor](https://woodruffw.github.io/zizmor/) to scan your GitHub actions for potential security flaws (which we will look at later).
88 |
89 | ## Next up
90 | [Setting up continuous delivery (CD) to automatically push changes to PyPI](./17-publishing-with-github.md)
91 |
--------------------------------------------------------------------------------
/uv/tutorial/01-introduction/05-virtual-environments.md:
--------------------------------------------------------------------------------
1 | # Virtual environments with uv
2 |
3 | In the previous section, we saw how two dependencies can be incompatible.
4 | But what if we have two different projects we are developing, and their dependencies are incompatible?
5 | That shouldn't be a problem: each app could work by itself.
6 | However, if we just install the dependencies in the global Python installation, we cannot work on both applications at the same time!
7 |
8 | Virtual environments come to the rescue: virtual environments are more or less separate Python installations isolated from the rest of your system.
9 | These environments are incredibly useful as they let us develop our libraries and applications without worrying about messing up any other applications or libraries we have installed on our computer.
10 |
11 | There are many different tools that can create virtual environments, such as the built-in `venv` module, the slightly more powerful [`virtualenv`](https://virtualenv.pypa.io/en/latest/) package, or `conda`.
12 | However, you don't need to worry about that at all, because uv takes care of creating and managing virtual environments for us (with its own virtual environment implementation)!
13 |
14 | ## Exercises
15 |
16 | > [!NOTE]
17 | > You may need to replace `python` here with either `py` (Windows) or `python3` (Linux and Mac)
18 |
19 | > [!NOTE]
20 | > If you're using the integrated terminal in VSCode, PyCharm or another "smart" code editor, then you may not see the system Python executable in exercises 1 and 6.
21 |
22 | 1. Run `python -c "import sys; print(sys.executable)"` in a terminal emulator (cmd or powershell) to see which Python executable your system uses by default (the first result).
23 | 2. Run `uv run python -c "import sys; print(sys.executable)"` to see which Python executable uv uses by default in your project. Is there any difference?
24 | 3. Activate the virtual environment by running `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac OS)
25 | 4. Run `python -c "import sys; print(sys.executable)"` to see which Python executable your system now uses by default
26 | 5. Deactivate the virtual environment by running `deactivate`
27 | 6. Run `python -c "import sys; print(sys.executable)"` to see which Python executable your system now uses by default
28 | 7. Discuss with your neighbour: What are the benefits of using a different virtual environment for each project compared to using just one large environment for everything?
29 |
30 |
31 |
32 | ## Reflection
33 |
34 | When we run Python in a virtual environment, we use a Python executable that is different from when we run the system Python executable.
35 | Moreover, if we want to run Python in the virtual environment, then we can either run `uv run python` or activate it by first running `.venv\Scripts\activate` (Windows) or `source .venv/bin/activate` (Linux/Mac OS) and then `python`.
36 | We recommend using `uv run`, as that will sync your environment with the `pyproject.toml`-file as well, but use whichever method you prefer.
37 |
38 | > [!IMPORTANT]
39 | > uv has a couple of very strange behaviours, which are there for speed.
40 | > First and foremost, it installs libraries by hardlinking them to files in a local installation cache that is shared across all your uv environments.
41 | > That was a lot of fancy words, but essentially, it means that if you open a library installed in a uv virtual environment and edit it, then you will edit that library for ALL uv environments that have installed it!
42 | > This will usually not be a problem for beginner programmers, but it's not unusual for more experienced programmers to make small edits to libraries while debugging, and if you do that with uv, then you should be aware that you are modifying the global installation cache.
43 | > In other words: uv environments are not completely isolated!
44 | >
45 | > If you want to avoid this behaviour, you can set the environment variable `UV_LINK_MODE=copy`.
46 |
47 | > [!IMPORTANT]
48 | > [uv will not compile `.py`-files to `.pyc`-files](https://pythonspeed.com/articles/faster-pip-installs/) (those in the `__pycache__`-directories).
49 | > These are files that Python generates whenever it runs a script or imports a module, and creating them is a bit time-consuming.
50 | > Python will therefore helpfully cache these `.pyc`-files in the `__pycache__`-directories, and only update them when the corresponding `.py`-files change.
51 | > Furthermore, pip autogenerates these `.pyc`-files for every module in every package you install.
52 | > uv, on the other hand, skips this step, as it makes installing packages slower, and a key feature of uv is lightning-fast environment setup ([Poetry also skips this step](https://github.com/python-poetry/poetry/pull/6205)).
53 | > Conversely, you can disable bytecode compilation in pip by running `pip install --no-compile {package}`.
54 | >
55 | > When you're developing, this doesn't matter much; however, it means that if you use uv to build containers that you deploy to cloud services, then you should probably include the `--compile-bytecode`-flag, which makes sure that the `.pyc`-files are generated upon installation, not at runtime.
56 | > Otherwise, you'd have to pay the bytecode compilation price every time you start a container, instead of when you build the container image.
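57 | >
58 | > For example, in a container image build you might install your dependencies with something like this (a sketch; the flag also works with `uv pip install`):
59 | >
60 | > ```raw
61 | > uv sync --compile-bytecode
62 | > ```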
57 |
58 | ## Specifying virtual environments in your IDE
59 |
60 | Some IDEs (integrated development environments -- a fancy name for a code editor) let you specify a virtual environment for your projects, and by doing so, the IDE can provide better code completion and syntax highlighting.
61 |
62 | ### Specifying virtual environment in VSCode
63 |
64 | To specify the virtual environment in VSCode, press Ctrl+Shift+P (Cmd+Shift+P on Mac) to get the command palette and write `Python: Select interpreter`.
65 |
66 | ### Specifying virtual environment in PyCharm
67 | Press Ctrl+Alt+S to open settings and select **Project: {project name} | Python Interpreter**. Click **Add Interpreter** → **Add Local Interpreter** → **Virtualenv Environment** and select the environment from the list (or browse for the correct environment if it doesn't appear).
68 | For more info see [the official PyCharm documentation](https://www.jetbrains.com/help/pycharm/creating-virtual-environment.html#python_create_virtual_env).
69 |
70 | ## Optional reading: Installing new Python versions
71 | If you work on many Python projects at once, then you may need to install multiple Python versions as well.
72 | Your previous project might have used Python 3.12, but you want to use the latest features of Python 3.13!
73 | Luckily, there's an easy way to get around this problem: the `uv python install`-command.
74 |
75 | You can, for example, use uv to install the latest version of Python 3.13 by running `uv python install 3.13`, which will download the Python 3.13 version of *[Python build standalone](https://github.com/astral-sh/python-build-standalone)* (originally developed by [Gregory Szorc (indygreg)](https://github.com/indygreg), but now maintained by Astral, the makers of uv).
76 | Python build standalone is a patched standalone version of CPython that can be copied around on your file system.
77 | It has a couple of [behaviour quirks](https://gregoryszorc.com/docs/python-build-standalone/main/quirks.html) compared to the stock CPython, but likely nothing you will notice in practice.
78 |
79 | ## Next up
80 | [Development dependencies and unit tests](../02-more-about-dependencies/06-unit-tests.md)
81 |
--------------------------------------------------------------------------------
/uv/tutorial/04-managing-your-code/17-publishing-with-github.md:
--------------------------------------------------------------------------------
1 | # Continuous delivery with GitHub Actions
2 |
3 | We now have a continuous integration pipeline that automatically checks if new code can be included in the next release.
4 | However, we still don't have a good way of doing a release!
5 | We need to manually clone the repository, build wheels and source distributions and push them to PyPI (remember your token!), which is an annoying workflow.
6 |
7 | What we want instead is for the release to happen automatically any time we add a tag of the format `v*.*.*` to our GitHub repository.
8 | This is a form of continuous delivery (CD), and luckily, there's a way to do just that with GitHub actions!
9 | And what's more, we don't even need to set up tokens.
10 |
11 | ## Exercises
12 |
13 | 1. Set up your GitHub repository as a trusted publisher on Test PyPI. To do so, navigate to the project management page on Test PyPI (https://test.pypi.org/manage/project/{project-name}/releases/), press the publishing button on the left and add a GitHub trusted publisher.
14 | 2. Create a new GitHub Actions file named `.github/workflows/publish.yml` with the following content
15 | ```yaml
16 | name: release
17 |
18 | on:
19 | push:
20 | tags:
21 | - "v*.*.*"
22 |
23 | permissions:
24 | id-token: write
25 |
26 | jobs:
27 | build-and-publish:
28 | runs-on: ubuntu-latest
29 | steps:
30 | - name: Checkout code
31 | uses: actions/checkout@v4
32 | - name: Setup uv
33 | uses: astral-sh/setup-uv@v6
34 | with:
35 | python-version: 3.13
36 | enable-cache: true
37 | - name: build package
38 | run: uv build
39 | - name: Publish package distributions to PyPI
40 | uses: pypa/gh-action-pypi-publish@release/v1
41 | with:
42 | repository-url: https://test.pypi.org/legacy/
43 | ```
44 | 3. Modify the pipeline so the unit tests and linting checks are run before the release (a possible sketch follows this list). What do you think will happen if you add a tag to a commit that doesn't pass the linter checks or unit tests?
45 | 4. Commit and push the new pipeline. In your browser, navigate to your GitHub repository and press the "Releases" button on the right.
46 | 5. Open `src/{package_name}/__init__.py` and add the line `__version__ = "0.2.0"`. Then, open the `pyproject.toml`-file and update the version number to "0.2.0" as well. Commit and push these changes.
47 | 6. In the GitHub repository: press the "Make a release" button (or "Draft a release" if you've already made one) on the right-hand side of your project's front page and make a release with the tag `v0.2.0`. Check if your CD-pipeline runs.
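48 |
49 | Here is one possible way to gate the release on checks (a sketch; the `checks` job name is ours, and your lint and test commands may differ from what you set up earlier):
50 | ```yaml
51 | jobs:
52 |   checks:
53 |     runs-on: ubuntu-latest
54 |     steps:
55 |       - uses: actions/checkout@v4
56 |       - uses: astral-sh/setup-uv@v6
57 |         with:
58 |           python-version: 3.13
59 |       - run: uv sync
60 |       - run: uv run ruff check . && uv run ruff format --check .
61 |       - run: uv run pytest
62 |   build-and-publish:
63 |     needs: checks  # the release job only runs if all checks pass
64 |     runs-on: ubuntu-latest
65 |     # ...the remaining steps as defined above
66 | ```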
48 |
49 |
50 |
51 | ## Reflection
52 |
53 | You have now set up CI-CD pipelines for your project with CI-pipelines that check if your code follows your coding standards and CD-pipelines that automatically publish your code to PyPI whenever you make a new version!
54 |
55 | However, the downside of this setup is that it still requires us to remember many steps whenever we want to publish a new version of our package: Update the `__version__` variable in `__init__.py`, update the `version`-field in `pyproject.toml`, commit, push and make a release.
56 | A much easier way would be if Python, somehow, could automatically set the version based on the tag alone.
57 | While no build backends natively support this, most still have plugins that enable *dynamic versioning*.
58 |
59 | ## Exercises
60 |
61 | 1. Update `__init__.py` so it contains the following
62 | ```python
63 | import importlib.metadata
64 |
65 | __version__ = importlib.metadata.version(__name__)
66 | ```
67 | 2. Modify the `[build-system]` table of `pyproject.toml` so it contains the following (what do you think this change does?):
68 | ```toml
69 | [build-system]
70 | requires = ["setuptools", "setuptools-scm"]
71 | build-backend = "setuptools.build_meta"
72 |
73 | [tool.setuptools_scm]
74 | ```
75 | 3. Delete the `version` field in the `[project]` table of `pyproject.toml` and add the field `dynamic = ["version"]`. What do you think this signifies?
76 | 4. Commit and push your code
77 | 5. Run `git fetch --tags` to get all tags from GitHub.
78 | 6. Install the project again with `uv sync`. Which version are you currently on?
79 | 7. Add the tag `v0.3.0` by running `git tag v0.3.0`. Which version of your library do you think will be installed when you run `uv sync` now? Run `uv sync` and check if you were right.
80 | 8. Push the tag to GitHub with `git push origin tag v0.3.0`. Go to GitHub and create a release with this tag.
81 | 9. Check if the CD-pipeline runs. Once it has succeeded, check `test.pypi.org` to see if you can find the latest version of your code.
82 |
83 |
84 |
85 | ## Reflection
86 |
87 | You have now set up dynamic versioning for the package, with the version number automatically updating based on the git tag.
88 | To do this, we first updated the `__init__.py`-file so it fetched the version from the metadata of the package (which is created when the package is built).
89 |
90 | Then, we updated the build system to use setuptools with the `setuptools-scm` plugin.
91 | This plugin automatically fetches the version from Git tags.
92 | The final thing we had to do to enable dynamic versioning was to remove the version field from `pyproject.toml` and add the `dynamic = ["version"]` field.
93 | This tells the build system that the version is dynamic and should be generated at build time.
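94 |
95 | As a sketch of what setuptools-scm produces: on the commit tagged `v0.3.0`, the version is simply `0.3.0`, while commits made after the tag get a development version, roughly like this (replace `packaging_tutorial` with your package name; the distance and hash suffix depend on your repository):
96 | ```raw
97 | $ uv run python -c "import packaging_tutorial; print(packaging_tutorial.__version__)"
98 | 0.3.1.dev2+g1a2b3c4
99 | ```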
94 |
95 | While we can create new releases very easily, there are still things to consider, specifically regarding security.
96 | With CD, we must be extra careful that no one can hijack our CD-pipeline, as that can lead to our name being used to spread malware.
97 | To avoid this, we can use a GitHub Actions security scanner named "[Zizmor](https://woodruffw.github.io/zizmor/)", which became popular after a clever pipeline injection attack on the widely used Ultralytics package for computer vision (more details are available [here](https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-injection)).
98 |
99 | ## Exercises
100 | 1. Add `zizmor` as a development dependency and run it on your workflows (try both normal mode and pedantic mode with `zizmor -p`). Read about the warnings and fixes on the [audit rules page of Zizmor's documentation](https://woodruffw.github.io/zizmor/audits/)
101 | 2. Resolve all Zizmor findings (decide if you want pedantic mode or normal mode).
102 | 3. Add the [Zizmor pre-commit hook](https://github.com/woodruffw/zizmor-pre-commit) to `.pre-commit-config.yaml` (a sketch follows this list)
103 | 4. Add Zizmor to your CI-workflow.
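104 |
105 | For exercise 3, the entry in `.pre-commit-config.yaml` might look roughly like this (a sketch; the `rev` is illustrative, so pick the latest release tag yourself):
106 | ```yaml
107 | - repo: https://github.com/woodruffw/zizmor-pre-commit
108 |   rev: v1.5.2  # illustrative tag, use the latest release
109 |   hooks:
110 |     - id: zizmor
111 | ```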
104 |
105 |
106 |
107 | ## Reflection
108 |
109 | We have now secured our CI-workflows with Zizmor as well, so we can feel confident that we are following best practices regarding workflow security.
110 |
111 | ## Optional: build native wheels for many platforms with `cibuildwheel`
112 |
113 | If you are publishing an extension module (i.e. a library with compiled code from languages like C, C++ or Rust), then you'd ideally want to provide pre-compiled wheels for as many platforms and Python versions as possible.
114 | Luckily, PyPA provides a tool that helps us do exactly that: [`cibuildwheel`](https://github.com/pypa/cibuildwheel).
115 | So it might be worthwhile to check that out if you are publishing an extension module!
116 |
117 | ## Next up
118 | [A summary of what we've learned](./18-summary.md)
119 |
--------------------------------------------------------------------------------
/pdm/tutorial/04-managing-your-code/17-publishing-with-github.md:
--------------------------------------------------------------------------------
1 | # Continuous delivery with GitHub Actions
2 |
3 | We now have a continuous integration pipeline that automatically checks if new code can be included in the next release.
4 | However, we still don't have a good way of doing a release!
5 | We need to manually clone the repository, build wheels and source distributions and push them to PyPI (remember your token!), which is an annoying workflow.
6 |
7 | What we want instead is for the release to happen automatically any time we add a tag of the format `v*.*.*` to our GitHub repository.
8 | This is a form of continuous delivery (CD), and luckily, there's a way to do just that with GitHub actions!
9 | And what's more, we don't even need to set up tokens.
10 |
11 | ## Exercises
12 |
13 | 1. Set up your GitHub repository as a trusted publisher on Test PyPI. To do so, navigate to the project management page on Test PyPI (https://test.pypi.org/manage/project/{project-name}/releases/), press the publishing button on the left and add a GitHub trusted publisher.
14 | 2. Create a new GitHub Actions file named `.github/workflows/publish.yml` with the following content
15 | ```yaml
16 | name: release
17 |
18 | on:
19 | push:
20 | tags:
21 | - "v*.*.*"
22 |
23 | permissions:
24 | id-token: write
25 |
26 | jobs:
27 | build-and-publish:
28 | runs-on: ubuntu-latest
29 | steps:
30 | - name: Checkout code
31 | uses: actions/checkout@v4
32 | - name: Setup PDM
33 | uses: pdm-project/setup-pdm@v4
34 | with:
35 | python-version: 3.13
36 | cache: true
37 | - name: Install project
38 | run: pdm install
39 | - name: build package
40 | run: pdm build
41 | - name: Publish package distributions to PyPI
42 | uses: pypa/gh-action-pypi-publish@release/v1
43 | with:
44 | repository-url: https://test.pypi.org/legacy/
45 | ```
46 | 3. Modify the pipeline so the unit tests and linting checks are run before the release (a possible sketch follows this list). What do you think will happen if you add a tag to a commit that doesn't pass the linter checks or unit tests?
47 | 4. Commit and push the new pipeline. In your browser, navigate to your GitHub repository and press the "Releases" button on the right.
48 | 5. Open `src/{package_name}/__init__.py` and add the line `__version__ = "0.2.0"`. Then, open the `pyproject.toml`-file and update the version number to "0.2.0" as well. Commit and push these changes.
49 | 6. In the GitHub repository: press the "Make a release" button (or "Draft a release" if you've already made one) on the right-hand side of your project's front page and make a release with the tag `v0.2.0`. Check if your CD-pipeline runs.
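50 |
51 | Here is one possible way to gate the release on checks (a sketch; the `checks` job name is ours, and your lint and test commands may differ from what you set up earlier):
52 | ```yaml
53 | jobs:
54 |   checks:
55 |     runs-on: ubuntu-latest
56 |     steps:
57 |       - uses: actions/checkout@v4
58 |       - uses: pdm-project/setup-pdm@v4
59 |         with:
60 |           python-version: 3.13
61 |       - run: pdm install
62 |       - run: pdm run ruff check . && pdm run ruff format --check .
63 |       - run: pdm run pytest
64 |   build-and-publish:
65 |     needs: checks  # the release job only runs if all checks pass
66 |     runs-on: ubuntu-latest
67 |     # ...the remaining steps as defined above
68 | ```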
50 |
51 |
52 |
53 | ## Reflection
54 |
55 | You have now set up CI-CD pipelines for your project with CI-pipelines that check if your code follows your coding standards and CD-pipelines that automatically publish your code to PyPI whenever you make a new version!
56 |
57 | However, the downside of this setup is that it still requires us to remember many steps whenever we want to publish a new version of our package: Update the `__version__` variable in `__init__.py`, update the `version`-field in `pyproject.toml`, commit, push and make a release.
58 | A much easier way would be if Python, somehow, could automatically set the version based on the tag alone.
59 | While no build backends natively support this, most still have plugins that enable *dynamic versioning*.
60 |
61 | ## Exercises
62 |
63 | 1. Update `__init__.py` so it contains the following
64 | ```python
65 | import importlib.metadata
66 |
67 | __version__ = importlib.metadata.version(__name__)
68 | ```
69 | 2. Modify the `[build-system]` table of `pyproject.toml` so it contains the following (what do you think this change does?):
70 | ```toml
71 | [build-system]
72 | requires = ["setuptools", "setuptools-scm"]
73 | build-backend = "setuptools.build_meta"
74 |
75 | [tool.setuptools_scm]
76 | ```
77 | 3. Delete the `version` field in the `[project]` table of `pyproject.toml` and add the field `dynamic = ["version"]`. What do you think this signifies?
78 | 4. Commit and push your code
79 | 5. Run `git fetch --tags` to get all tags from GitHub.
80 | 6. Install the project again with `pdm install`. Which version are you currently on?
81 | 7. Add the tag `v0.3.0` by running `git tag v0.3.0`. Which version of your library do you think will be installed when you run `pdm install` now? Run `pdm install` and check if you were right.
82 | 8. Push the tag to GitHub with `git push origin tag v0.3.0`. Go to GitHub and create a release with this tag.
83 | 9. Check if the CD-pipeline runs. Once it has succeeded, check `test.pypi.org` to see if you can find the latest version of your code.
84 |
85 |
86 |
87 | ## Reflection
88 |
89 | You have now set up dynamic versioning for the package, with the version number automatically updating based on the git tag.
90 | To do this, we first updated the `__init__.py`-file so it fetched the version from the metadata of the package (which is created when the package is built).
91 |
92 | Then, we updated the build system to use setuptools with the `setuptools-scm` plugin.
93 | This plugin automatically fetches the version from Git tags.
94 | The final thing we had to do to enable dynamic versioning was to remove the version field from `pyproject.toml` and add the `dynamic = ["version"]` field.
95 | This tells the build system that the version is dynamic and should be generated at build time.
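96 |
97 | As a sketch of what setuptools-scm produces: on the commit tagged `v0.3.0`, the version is simply `0.3.0`, while commits made after the tag get a development version, roughly like this (replace `packaging_tutorial` with your package name; the distance and hash suffix depend on your repository):
98 | ```raw
99 | $ pdm run python -c "import packaging_tutorial; print(packaging_tutorial.__version__)"
100 | 0.3.1.dev2+g1a2b3c4
101 | ```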
96 |
97 | While we can create new releases very easily, there are still things to consider, specifically regarding security.
98 | With CD, we must be extra careful that no one can hijack our CD-pipeline, as that can lead to our name being used to spread malware.
99 | To avoid this, we can use a GitHub Actions security scanner named "[Zizmor](https://woodruffw.github.io/zizmor/)", which became popular after a clever pipeline injection attack on the widely used Ultralytics package for computer vision (more details are available [here](https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-injection)).
100 |
101 | ## Exercises
102 | 1. Add `zizmor` as a development dependency and run it on your workflows (try both normal mode and pedantic mode with `zizmor -p`). Read about the warnings and fixes on the [audit rules page of Zizmor's documentation](https://woodruffw.github.io/zizmor/audits/)
103 | 2. Resolve all Zizmor findings (decide if you want pedantic mode or normal mode).
104 | 3. Add the [Zizmor pre-commit hook](https://github.com/woodruffw/zizmor-pre-commit) to `.pre-commit-config.yaml` (a sketch follows this list)
105 | 4. Add Zizmor to your CI-workflow.
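106 |
107 | For exercise 3, the entry in `.pre-commit-config.yaml` might look roughly like this (a sketch; the `rev` is illustrative, so pick the latest release tag yourself):
108 | ```yaml
109 | - repo: https://github.com/woodruffw/zizmor-pre-commit
110 |   rev: v1.5.2  # illustrative tag, use the latest release
111 |   hooks:
112 |     - id: zizmor
113 | ```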
106 |
107 |
108 |
109 | ## Reflection
110 |
111 | We have now secured our CI-workflows with Zizmor as well, so we can feel confident that we are following best practices regarding workflow security.
112 |
113 | ## Optional: build native wheels for many platforms with `cibuildwheel`
114 |
115 | If you are publishing an extension module (i.e. a library with compiled code from languages like C, C++ or Rust), then you'd ideally want to provide pre-compiled wheels for as many platforms and Python versions as possible.
116 | Luckily, PyPA provides a tool that helps us do exactly that: [`cibuildwheel`](https://github.com/pypa/cibuildwheel).
117 | So it might be worthwhile to check that out if you are publishing an extension module!
118 |
119 | ## Next up
120 | [A summary of what we've learned](./18-summary.md)
121 |
--------------------------------------------------------------------------------
/uv/code/packaging-tutorial/requirements.txt:
--------------------------------------------------------------------------------
1 | # This file was autogenerated by uv via the following command:
2 | # uv export -o requirements.txt
3 | -e .
4 | annotated-types==0.7.0 \
5 | --hash=sha256:1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53 \
6 | --hash=sha256:aff07c09a53a08bc8cfccb9c85b05f1aa9a2a6f23728d790723543408344ce89
7 | # via pydantic
8 | anyio==4.9.0 \
9 | --hash=sha256:673c0c244e15788651a4ff38710fea9675823028a6f08a5eda409e0c9840a028 \
10 | --hash=sha256:9f76d541cad6e36af7beb62e978876f3b41e3e04f2c1fbf0884604c0a9c4d93c
11 | # via httpx
12 | certifi==2025.4.26 \
13 | --hash=sha256:0a816057ea3cdefcef70270d2c515e4506bbc954f417fa5ade2021213bb8f0c6 \
14 | --hash=sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3
15 | # via
16 | # httpcore
17 | # httpx
18 | colorama==0.4.6 ; sys_platform == 'win32' \
19 | --hash=sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44 \
20 | --hash=sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6
21 | # via pytest
22 | h11==0.16.0 \
23 | --hash=sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1 \
24 | --hash=sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86
25 | # via httpcore
26 | httpcore==1.0.9 \
27 | --hash=sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55 \
28 | --hash=sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8
29 | # via httpx
30 | httpx==0.28.1 \
31 | --hash=sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc \
32 | --hash=sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad
33 | # via packaging-tutorial
34 | idna==3.10 \
35 | --hash=sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9 \
36 | --hash=sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3
37 | # via
38 | # anyio
39 | # httpx
40 | iniconfig==2.1.0 \
41 | --hash=sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7 \
42 | --hash=sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760
43 | # via pytest
44 | packaging==25.0 \
45 | --hash=sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484 \
46 | --hash=sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f
47 | # via pytest
48 | pluggy==1.5.0 \
49 | --hash=sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1 \
50 | --hash=sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669
51 | # via pytest
52 | pydantic==2.11.4 \
53 | --hash=sha256:32738d19d63a226a52eed76645a98ee07c1f410ee41d93b4afbfa85ed8111c2d \
54 | --hash=sha256:d9615eaa9ac5a063471da949c8fc16376a84afb5024688b3ff885693506764eb
55 | # via
56 | # packaging-tutorial
57 | # pydantic-settings
58 | pydantic-core==2.33.2 \
59 | --hash=sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56 \
60 | --hash=sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef \
61 | --hash=sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a \
62 | --hash=sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f \
63 | --hash=sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab \
64 | --hash=sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916 \
65 | --hash=sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf \
66 | --hash=sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a \
67 | --hash=sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7 \
68 | --hash=sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612 \
69 | --hash=sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1 \
70 | --hash=sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7 \
71 | --hash=sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a \
72 | --hash=sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7 \
73 | --hash=sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025 \
74 | --hash=sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849 \
75 | --hash=sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b \
76 | --hash=sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e \
77 | --hash=sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea \
78 | --hash=sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac \
79 | --hash=sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51 \
80 | --hash=sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e \
81 | --hash=sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162 \
82 | --hash=sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65 \
83 | --hash=sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de \
84 | --hash=sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc \
85 | --hash=sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb \
86 | --hash=sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef \
87 | --hash=sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1 \
88 | --hash=sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5 \
89 | --hash=sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88 \
90 | --hash=sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290 \
91 | --hash=sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d \
92 | --hash=sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc \
93 | --hash=sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc \
94 | --hash=sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30 \
95 | --hash=sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e \
96 | --hash=sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9 \
97 | --hash=sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9 \
98 | --hash=sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f \
99 | --hash=sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5 \
100 | --hash=sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab \
101 | --hash=sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593 \
102 | --hash=sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1 \
103 | --hash=sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f \
104 | --hash=sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8 \
105 | --hash=sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf \
106 | --hash=sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246 \
107 | --hash=sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9 \
108 | --hash=sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011 \
109 | --hash=sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6 \
110 | --hash=sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8 \
111 | --hash=sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2 \
112 | --hash=sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6 \
113 | --hash=sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d
114 | # via pydantic
115 | pydantic-settings==2.9.1 \
116 | --hash=sha256:59b4f431b1defb26fe620c71a7d3968a710d719f5f4cdbbdb7926edeb770f6ef \
117 | --hash=sha256:c509bf79d27563add44e8446233359004ed85066cd096d8b510f715e6ef5d268
118 | # via packaging-tutorial
119 | pytest==8.3.5 \
120 | --hash=sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820 \
121 | --hash=sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845
122 | python-dotenv==1.1.0 \
123 | --hash=sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5 \
124 | --hash=sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d
125 | # via pydantic-settings
126 | sniffio==1.3.1 \
127 | --hash=sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2 \
128 | --hash=sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc
129 | # via anyio
130 | typing-extensions==4.13.2 \
131 | --hash=sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c \
132 | --hash=sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef
133 | # via
134 | # anyio
135 | # pydantic
136 | # pydantic-core
137 | # typing-inspection
138 | typing-inspection==0.4.0 \
139 | --hash=sha256:50e72559fcd2a6367a19f7a7e610e6afcb9fac940c650290eed893d61386832f \
140 | --hash=sha256:9765c87de36671694a67904bf2c96e395be9c6439bb6c87b5142569dcdd65122
141 | # via
142 | # pydantic
143 | # pydantic-settings
144 |
--------------------------------------------------------------------------------
/pdm/tutorial/01-introduction/04-dependencies.md:
--------------------------------------------------------------------------------
1 | # Our first installable module
2 |
3 | We are now ready to start developing our package.
4 | To start with, we'll add a runnable module inside our package (we'll soon have a better way of testing whether our code works).
5 | Specifically, we'll add a Python module that fetches the PyCon US programme, parses it and prints out the next upcoming sessions (this helps us check that the code works; later, we'll look at unit tests as a better way to do this).
6 |
7 | ## Exercises
8 | 1. Create a file `pycon.py` in the `src/{{package name}}` directory (replace `package name` with the name of your package) and copy the contents of [`code/packaging-tutorial/src/packaging_tutorial/pycon.py`](../../code/packaging-tutorial/src/packaging_tutorial/pycon.py) into it. You do not need to understand the code - this is just an example to show how we can package our code.
9 | 2. Run the command `pdm install` in the same terminal window you used to run `pdm init`. You should get a few lines of output, among them the lines `Virtualenv is created successfully at {{path to your package}}\.venv` and `🎉 All complete!`.
10 | 3. Run this file by typing `pdm run python src/{{package_name}}/pycon.py` in your terminal emulator (replace `{{package name}}` with the name of your package). What happened?
11 | 4. Add httpx as a dependency to your project by running `pdm add httpx`
12 | 5. Open the `pyproject.toml` file. Can you find httpx anywhere in that file?
13 | 6. Try to run the `pycon.py` file again
14 |
15 |
16 |
17 | ## Reflection
18 |
19 | When we tried to run the script above, we got an `ImportError` because we hadn't installed HTTPX first.
20 | This is great since we want to have full control over what packages we need to install to run our code.
21 | Otherwise, it would be very difficult for others to use it!
22 | We should have all this information about what we need to run our code in the `pyproject.toml`-file and let PDM take care of installing and managing the dependencies.
23 | We could manually add the dependencies to the `pyproject.toml`-file.
24 | However, instead, we used PDM's handy command for adding and installing dependencies at the same time: `pdm add`.
25 | In fact, when you ran `pdm add httpx`, three things happened: PDM checked if you could add HTTPX as a dependency, then it updated the `pyproject.toml` file before it installed HTTPX in the virtual environment for your project.
26 | You could do all of this manually as well.
27 |
28 | Before we move on, we can make a little script to check if we can install our library.
29 |
30 | ## Exercises
31 |
32 | 1. Create a new folder in your project root (i.e. outside of `src`) called `scripts`, and create a new file `get_charlas.py` in it. Copy the contents of [`code/packaging-tutorial/scripts/get_charlas.py`](../../code/packaging-tutorial/scripts/get_charlas.py) into it (you may need to modify the `import packaging_tutorial.pycon`-import, replacing `packaging_tutorial` with the name of the directory inside `src`) and run it. What did it do?
33 | 2. Discuss with your neighbour: why do you think the import in the `get_charlas.py`-script worked?
34 |
35 |
36 |
37 | ## Reflection
38 | The reason the import in the `get_charlas.py`-script worked was that when we ran `pdm install` in exercise 2, our package was installed into our virtual environment (more about virtual environments later).
39 | This means we can import the code as any other package you have installed!
40 | Importantly, when you set up a project with pdm, it is installed in a way so any change in `src/{{package_name}}` is automatically included in the imported code.
41 | This is called an *editable install*, and the process was standardised in [PEP 660](https://peps.python.org/pep-0660/) and is described well in the [Setuptools documentation](https://setuptools.pypa.io/en/latest/userguide/development_mode.html).
42 | Another way to accomplish more or less the same as running `pdm install` is therefore to run `pip install -e .`.
43 | However, the benefit of using PDM is that you always know that you are working with the correct environment, and you also get the lock file (more about these things later).
44 |
45 | Now that we have tested that we can import our code, we are ready to add a couple of dependencies and improve it a little bit further.
46 |
47 | ## Exercises
48 |
49 | 1. Open the `pyproject.toml` file and manually add `pydantic` to the dependencies.
50 | 2. Replace the code in your `pycon.py`-file with the code from the [`code/packaging-tutorial/src/packaging_tutorial/validated_pycon.py`](../../code/packaging-tutorial/src/packaging_tutorial/validated_pycon.py) file from this repository and run the code with `pdm run python scripts/get_charlas.py`. What happened, and why did this happen?
51 | 3. Install the dependencies by running `pdm install` and try to run the `scripts/get_charlas.py` file again. What happened now?
52 |
53 |
54 |
55 | ## Reflection
56 | So we see that we can add dependencies to the `pyproject.toml` file manually, which is very useful when we want to set up a template for others to use.
57 | Still, we recommend that you use `pdm add` in most cases. It's a one-step process that ensures that you get a compatible set of dependencies.
58 | Note that PDM will try to add the latest version of a given dependency that is compatible with the rest of your dependencies, which means that the order in which you add dependencies can matter (but 99% of the time, it won't matter).
59 |
60 | ## Next up
61 | [Virtual environments](./05-virtual-environments.md), or, if you want to look at the bonus reading: [runnable scripts](./0x-bonus-scripts.md).
62 |
63 | # Optional reading: Dependency compatibility
64 |
65 | ## Bonus: Exercises
66 |
67 | 1. Remove the `pydantic` dependency by running `pdm remove pydantic`
68 | 2. Use whichever method you prefer to add dependencies to your project and try to add and install the following two dependencies (remember the versions!):
69 |
70 | * `"pydantic<2"`
71 | * `"pydantic-settings==2.1.0"`
72 |
73 |
74 |
75 | ## Bonus: Reflection
76 | As you may have noticed, PDM spent a while saying `⠙ Resolving dependencies` before saying
77 |
78 | ```raw
79 | 🔒 Lock failed
80 | Unable to find a resolution for pydantic
81 | because of the following conflicts:
82 | pydantic<2 (from project)
83 | pydantic>=2.3.0 (from pydantic-settings@2.1.0)
84 | To fix this, you could loosen the dependency version constraints in pyproject.toml. See
85 | https://pdm.fming.dev/latest/usage/dependency/#solve-the-locking-failure for more details.
86 | ```
87 |
88 | This means that you have two dependencies that are *incompatible*.
89 | Specifically, you want to add [Pydantic](https://docs.pydantic.dev/latest/) with a version less than 2.0.0.
90 | However, [Pydantic settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) version 2.1.0 requires Pydantic version 2.3.0 or greater.
91 | This will clearly not work: you cannot both have a Pydantic version less than 2 and a Pydantic version greater than or equal to 2.3.0!
92 |
93 | A nice feature of PDM is that it will check this for us so we don't accidentally end up with incompatible dependencies for our project.
94 | If you use `pdm add`, it takes it one step further and tries to find a compatible dependency to add to your project.
95 | So when you tried to run `pdm add "pydantic<2" "pydantic-settings==2.1.0"`, PDM tried to find versions of the dependencies that were compatible (with each other and with already existing dependencies).
96 | This was not possible for `"pydantic-settings==2.1.0"` and `"pydantic<2"`.
97 | Luckily though, we don't rely on any outdated Pydantic features, so we can upgrade to Pydantic 2.
98 |
99 | ## Bonus: Exercises
100 | 1. Remove the pydantic and pydantic-settings dependencies from the `pyproject.toml` file. Add them again by typing `pdm add pydantic pydantic-settings`.
101 | 2. Replace the code in your `pycon.py`-file with the code from the [`code/packaging-tutorial/src/packaging_tutorial/validated_pycon_with_config.py`](../../code/packaging-tutorial/src/packaging_tutorial/validated_pycon_with_config.py). How does this file differ from the original `pycon.py`-file?
102 | 3. Test that the code works by running the `get_charlas`-script with `pdm run python scripts/get_charlas.py`
103 |
104 |
105 |
106 | ## Bonus: Reflection
107 |
108 | The easiest way to resolve dependency problems is to try to delete all of your clashing dependencies and add them again.
109 | This can work because PDM will try to find the latest compatible version of your dependencies.
110 | Thus, if you don't have any special requirements for your dependencies, then this approach can work.
111 | However, if you for some reason absolutely MUST have a specific version of your dependencies, then you might be out of luck.
112 |
113 | ## Aside: dependencies in libraries and applications
114 | Dependency compatibility is tricky business.
115 | Generally, you want your *applications* to have completely fixed versions in the listed dependencies and your *libraries* to have as unconstrained versions as possible in the listed dependencies.
116 | The reason for this difference is that we want full control over the libraries we use in applications we develop (e.g. to avoid supply chain attacks), but we don't want to mandate which versions consumers of our libraries should use.
117 | Managing versions is easier for dependencies that strictly adhere to [*semantic versioning (semver)*](https://semver.org/).
118 | In those cases, you can change the dependencies in the `pyproject.toml`-file from `library==X.Y.Z` to a range that allows everything up to the next major version (e.g. `httpx>=0.26,<1`).
119 | However, strictly adhering to semver is difficult, and most libraries don't do this.
120 | In this case, you can just remove the version pin altogether in the dependency list (or use `>=`-pins).
121 | For example, you may use `httpx` or `httpx>=0.26.0` instead of `httpx==0.26.0`.
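122 |
123 | To make the contrast concrete, here is a sketch of how the `dependencies` list in `pyproject.toml` might differ (the version numbers are purely illustrative):
124 | ```toml
125 | # In an application: pin exact versions for reproducibility
126 | dependencies = ["httpx==0.26.0", "pydantic==2.5.3"]
127 |
128 | # In a library: keep constraints loose so consumers can resolve freely
129 | # dependencies = ["httpx>=0.26", "pydantic>=2"]
130 | ```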
122 |
--------------------------------------------------------------------------------
/uv/tutorial/01-introduction/04-dependencies.md:
--------------------------------------------------------------------------------
1 | # Our first installable module
2 |
3 | We are now ready to start developing our package.
4 | To start with, we'll add a runnable module inside our package (we'll soon have a better way of testing whether our code works).
5 | Specifically, we'll add a Python module that fetches the PyCon US programme, parses it and prints out the next upcoming sessions (this helps us check that the code works; later, we'll look at unit tests as a better way to do this).
6 |
7 | ## Exercises
8 | 1. Create a file `pycon.py` in the `src/{{package name}}` directory (replace `package name` with the name of your package) and copy the contents of [`code/packaging-tutorial/src/packaging_tutorial/pycon.py`](../../code/packaging-tutorial/src/packaging_tutorial/pycon.py) into it. You do not need to understand the code - this is just an example to show how we can package our code.
9 | 2. Run the command `uv sync` in the same terminal window you used to run `uv init --package`. You should get a few lines of output, among them the line `Creating virtual environment at: .venv`.
10 | 3. Run the file you created by typing `uv run python src/{{package_name}}/pycon.py` in your terminal emulator (replace `{{package name}}` with the name of your package). What happened?
11 | 4. Add httpx as a dependency to your project by running `uv add httpx`
12 | 5. Open the `pyproject.toml` file. Can you find httpx anywhere in that file?
13 | 6. Try to run the `pycon.py` file again
14 |
15 |
16 |
17 | ## Reflection
18 |
19 | When we first tried to run the Python file above, we got an `ImportError` because we hadn't installed HTTPX first.
20 | This is great since we want to have full control over what packages we need to install to run our code.
21 | Otherwise, it would be very difficult for others to use it!
22 | We should have all this information about what we need to run our code in the `pyproject.toml`-file and let uv take care of installing and managing the dependencies.
23 | We could manually add the dependencies to the `pyproject.toml`-file.
24 | However, instead, we used uv's handy command for adding dependencies and installing dependencies at the same time: `uv add`.
25 | In fact, when you ran `uv add httpx`, three things happened: uv checked if you could add HTTPX as a dependency, then it updated the `pyproject.toml` file before it installed HTTPX in the virtual environment for your project.
26 | You could do all of this manually as well.
27 |
28 | Before we move on, we can make a little script to check if we can install our library.
29 |
30 | ## Exercises
31 |
32 | 1. Create a new folder in your project root (i.e. outside of `src`) called `scripts`, and create a new file `get_charlas.py` in it. Copy the contents of [`code/packaging-tutorial/scripts/get_charlas.py`](../../code/packaging-tutorial/scripts/get_charlas.py) into it (you may need to modify the `import packaging_tutorial.pycon`-import, replacing `packaging_tutorial` with the name of the directory inside `src`) and run it. What did it do?
33 | 2. Discuss with your neighbour: why do you think the import in the `get_charlas.py`-script worked?
34 |
35 |
36 |
37 |
38 | ## Reflection
39 | The reason the import in the `get_charlas.py`-script worked was that when we ran `uv sync` in exercise 2, our package was installed into our virtual environment (more about virtual environments later).
40 | This means we can import the code as any other package you have installed!
41 | Importantly, when you set up a project with uv, it is installed in a way so any change in `src/{{package_name}}` is automatically included in the imported code.
42 | This is called an *editable install*, and the process was standardised in [PEP 660](https://peps.python.org/pep-0660/) and is described well in the [Setuptools documentation](https://setuptools.pypa.io/en/latest/userguide/development_mode.html).
43 | Another way to accomplish more or less the same as running `uv sync` is therefore to run `pip install -e .`.
44 | However, the benefit of using uv is that you always know that you are working with the correct environment, and you also get the lock file (more about these things later).
45 |
46 | Now that we have tested that we can import our code, we are ready to add a couple of dependencies and improve it a little bit further.
47 |
48 | ## Exercises
49 |
50 | 1. Open the `pyproject.toml` file and manually add `pydantic` to the dependencies.
51 | 2. Replace the code in your `pycon.py`-file with the code from the [`code/packaging-tutorial/src/packaging_tutorial/validated_pycon.py`](../../code/packaging-tutorial/src/packaging_tutorial/validated_pycon.py) file from this repository and run the code with `uv run --no-sync python scripts/get_charlas.py`. What happened, and why did this happen?
52 | 3. Try again, but this time by running `uv run python scripts/get_charlas.py`. What happened now?
53 |
54 |
55 |
56 | ## Reflection
57 | So we see that we can add dependencies to the `pyproject.toml` file manually, which is very useful when we want to set up a template for others to use.
58 | If you do this, then uv will install all dependencies automatically next time you use `uv run` unless you specifically add the `--no-sync`-flag.
59 | This is a bit different from other tools, which usually don't sync dependencies when you run the code.
60 | However, uv syncs dependencies very fast (since it does a lot of smart caching), so it's able to re-sync almost instantly and will therefore do so each time you run code through it.
61 |
62 | Still, we recommend that you use `uv add` in most cases.
63 | It's a one-step process that ensures that you get a compatible set of dependencies.
64 | Note that uv will try to add the latest version of a given dependency that is compatible with the rest of your dependencies, which means that the order in which you add dependencies can matter (but 99% of the time, it won't matter).
65 |
66 | ## Next up
67 | [Virtual environments](./05-virtual-environments.md), or, if you want to look at the bonus reading: [runnable scripts](./0x-bonus-scripts.md).
68 |
69 | # Optional reading: Dependency compatibility
70 |
71 | ## Exercises
72 |
73 | 1. Remove the `pydantic` dependency by running `uv remove pydantic`
74 | 2. Use whichever method you prefer to add dependencies to your project and try to add and install the following two dependencies (remember the versions!):
75 |
76 | * `"pydantic<2"`
77 | * `"pydantic-settings>=1"`
78 |
79 |
80 |
81 | ## Reflection
82 | As you may have noticed, uv took a little bit longer this time, before failing with the message
83 |
84 | ```raw
85 | × No solution found when resolving dependencies:
86 | ╰─▶ Because only the following versions of pydantic-settings are available:
87 | [...]
88 |
89 | And because your project depends on pydantic<2 and pydantic-settings>=1, we can conclude that your project's requirements are unsatisfiable.
90 |
91 | hint: Pre-releases are available for `pydantic-settings` in the requested range (e.g., 2.0b2), but pre-releases weren't enabled (try: `--prerelease=allow`)
92 | help: If you want to add the package regardless of the failed resolution, provide the `--frozen` flag to skip locking and syncing.
93 | ```
94 |
95 | This means that you have two dependencies that are *incompatible*.
96 | Specifically, you want to add [Pydantic](https://docs.pydantic.dev/latest/) with a version less than 2.0.0.
97 | However, all stable versions of [Pydantic settings](https://docs.pydantic.dev/latest/concepts/pydantic_settings/) greater than or equal to 1.0 require Pydantic version 2 or greater.
98 | This will clearly not work: you cannot both have a Pydantic version less than 2 and a Pydantic version greater than or equal to 2!
99 |
100 | A nice feature of uv is that it will check this for us so we don't accidentally end up with incompatible dependencies for our project.
101 | If you use `uv add`, it takes it one step further and tries to find a compatible dependency to add to your project.
102 | So when you tried to run `uv add "pydantic<2" "pydantic-settings>=1"`, uv tried to find versions of the dependencies that were compatible (with each other and with already existing dependencies).
103 | This was not possible for `"pydantic-settings>=1"` and `"pydantic<2"`.
104 | Luckily though, we don't rely on any outdated Pydantic features, so we can upgrade to Pydantic 2.
105 |
106 | ## Exercises
107 | 1. Remove the pydantic and pydantic-settings dependencies from the `pyproject.toml` file. Add them again by typing `uv add pydantic pydantic-settings`.
108 | 2. Replace the code in your `pycon.py`-file with the code from the [`code/packaging-tutorial/src/packaging_tutorial/validated_pycon_with_config.py`](../../code/packaging-tutorial/src/packaging_tutorial/validated_pycon_with_config.py). How does this file differ from the original `pycon.py`-file?
109 | 3. Test that the code works by running the `get_charlas`-script with `uv run python scripts/get_charlas.py`
110 |
111 |
112 |
113 | ## Reflection
114 |
115 | The easiest way to resolve dependency problems is to try to delete all of your clashing dependencies and add them again.
116 | This can work because uv will try to find the latest compatible version of your dependencies.
117 | Thus, if you don't have any special requirements for your dependencies, then this approach can work.
118 | However, if you for some reason absolutely MUST have a specific version of your dependencies, then you might be out of luck.
119 |
120 | ## Aside: dependencies in libraries and applications
121 | Dependency compatibility is tricky business.
122 | Generally, you want your *applications* to have completely fixed versions in the listed dependencies and your *libraries* to have as unconstrained versions as possible in the listed dependencies.
123 | The reason for this difference is that we want full control over the libraries we use in applications we develop (e.g. to avoid supply chain attacks), but we don't want to mandate which versions consumers of our libraries should use.
124 | Managing versions is easier for dependencies that strictly adhere to [*semantic versioning (semver)*](https://semver.org/).
125 | In those cases, you can change the dependencies in the `pyproject.toml`-file from `library==X.Y.Z` to a range that allows everything up to the next major version (e.g. `httpx>=0.26,<1`).
126 | However, strictly adhering to semver is difficult, and most libraries don't do this.
127 | In this case, you can just remove the version pin altogether in the dependency list (or use `>=`-pins).
128 | For example, you may use `httpx` or `httpx>=0.26.0` instead of `httpx==0.26.0`.
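129 |
130 | To make the contrast concrete, here is a sketch of how the `dependencies` list in `pyproject.toml` might differ (the version numbers are purely illustrative):
131 | ```toml
132 | # In an application: pin exact versions for reproducibility
133 | dependencies = ["httpx==0.26.0", "pydantic==2.5.3"]
134 |
135 | # In a library: keep constraints loose so consumers can resolve freely
136 | # dependencies = ["httpx>=0.26", "pydantic>=2"]
137 | ```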
129 |
--------------------------------------------------------------------------------