Pytest plugin to create CodSpeed benchmarks
Meta
Author: Arthur Pastel
Requires Python: >=3.9
Classifiers
Development Status
- 5 - Production/Stable
Framework
- Pytest
Intended Audience
- Developers
- Information Technology
License
- OSI Approved :: MIT License
Programming Language
- Python :: 3
- Python :: 3.9
- Python :: 3.10
- Python :: 3.11
- Python :: 3.12
- Python :: 3.13
Topic
- Software Development :: Testing
- System :: Benchmark
- Utilities
Typing
- Typed
Requirements
Python: 3.9 and later
pytest: any recent version
Installation
pip install pytest-codspeed
Usage
Creating benchmarks
pytest-codspeed is compatible with the standard pytest-benchmark API, so if you already have benchmarks written with it, you can start using pytest-codspeed right away.
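For example, a benchmark already written against the pytest-benchmark fixture API, where the callable and its arguments are passed to the fixture directly, should run unchanged (sorted is just an illustrative workload here, not part of the examples below):

def test_sort_performance(benchmark):
    # pytest-benchmark style: pass the callable and its arguments;
    # only the call itself is measured.
    benchmark(sorted, [5, 3, 1, 4, 2])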
Marking a whole test function as a benchmark with pytest.mark.benchmark
import pytest
from statistics import median

@pytest.mark.benchmark
def test_median_performance():
    return median([1, 2, 3, 4, 5])
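Since benchmarks are collected as regular pytest tests, standard pytest features should compose with the marker. A sketch, assuming parametrization works the usual way and yields one benchmark per parameter value:

import pytest
from statistics import median

# Illustrative assumption: parametrization composes with the benchmark
# marker, producing a separate benchmark for each input size.
@pytest.mark.parametrize("size", [10, 100, 1_000])
@pytest.mark.benchmark
def test_median_scaling(size):
    return median(range(size))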
Benchmarking selected lines of a test function with the benchmark fixture
import pytest
from statistics import mean, median

def test_mean_performance(benchmark):
    # Precompute some data useful for the benchmark but that should not be
    # included in the benchmark time
    data = [1, 2, 3, 4, 5]
    # Benchmark the execution of the function
    benchmark(lambda: mean(data))

def test_mean_and_median_performance(benchmark):
    # Precompute some data useful for the benchmark but that should not be
    # included in the benchmark time
    data = [1, 2, 3, 4, 5]
    # Benchmark the execution of the function:
    # The `@benchmark` decorator will automatically call the function and
    # measure its execution
    @benchmark
    def bench():
        mean(data)
        median(data)
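Mirroring the pytest-benchmark API, the fixture call should also return the benchmarked callable's result, which lets a test assert correctness alongside the measurement. A small sketch, assuming the compatibility extends to return values:

from statistics import mean

def test_mean_result(benchmark):
    data = [1, 2, 3, 4, 5]
    # The fixture returns the wrapped callable's return value,
    # so the result can be verified in the same test.
    result = benchmark(lambda: mean(data))
    assert result == 3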
Running benchmarks
Testing the benchmarks locally
If you want to run only the benchmark tests locally, you can use the --codspeed pytest flag:
pytest tests/ --codspeed
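Since --codspeed only enables benchmark collection and instrumentation, the usual pytest selection options should still compose with it; for example, to run only the benchmarks whose names match a pattern (a sketch using standard pytest -k selection):

pytest tests/ --codspeed -k median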
Note: Running pytest-codspeed locally will not produce any performance reporting. It's only useful for making sure that your benchmarks are working as expected. If you want to get performance reporting, you should run the benchmarks in your CI.
In your CI
You can use the CodSpeedHQ/action to run the benchmarks in GitHub Actions and upload the results to CodSpeed.
Example workflow:
name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed
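Note that CODSPEED_TOKEN must be added to your repository's secrets; the token value is typically obtained from your project's settings on the CodSpeed dashboard.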
Release history
4.0.0b0 (Jun 06, 2025)
3.3.0b0 (May 27, 2025)
3.2.0 (Jan 31, 2025)
3.1.2 (Jan 09, 2025)
3.1.1 (Jan 07, 2025)
3.1.1b3 (Jan 07, 2025)
3.1.1b1 (Jan 07, 2025)
3.1.1b0 (Jan 07, 2025)
3.1.0 (Dec 09, 2024)
3.1.0b0 (Dec 06, 2024)
3.0.0 (Oct 29, 2024)
3.0.0b4 (Sep 27, 2024)
3.0.0b3 (Sep 26, 2024)
3.0.0b2 (Sep 24, 2024)
3.0.0b1 (Sep 20, 2024)
3.0.0b0 (Sep 18, 2024)
2.2.1 (Mar 19, 2024)
2.2.0 (Sep 01, 2023)
2.1.0 (Jul 27, 2023)
2.0.1 (Jul 22, 2023)
2.0.0 (Jul 04, 2023)
1.2.2 (Dec 02, 2022)
1.2.1 (Nov 28, 2022)
1.2.0 (Nov 22, 2022)
1.1.0 (Nov 10, 2022)
1.0.4 (Nov 06, 2022)
1.0.3 (Nov 06, 2022)
1.0.2 (Nov 05, 2022)
1.0.1 (Nov 05, 2022)