pytest-codspeed 4.1.1


pip install pytest-codspeed

  Latest version

Released: Oct 07, 2025


Meta
Author: Arthur Pastel
Requires Python: >=3.9

Classifiers

Development Status
  • 5 - Production/Stable

Framework
  • Pytest

Intended Audience
  • Developers
  • Information Technology

License
  • OSI Approved :: MIT License

Programming Language
  • Python :: 3
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12
  • Python :: 3.13

Topic
  • Software Development :: Testing
  • System :: Benchmark
  • Utilities

Typing
  • Typed

pytest-codspeed


Pytest plugin to create CodSpeed benchmarks


Documentation: https://codspeed.io/docs/reference/pytest-codspeed


Installation

pip install pytest-codspeed

Usage

Creating benchmarks

In a nutshell, pytest-codspeed offers two approaches to creating performance benchmarks that integrate seamlessly with your existing test suite.

Use @pytest.mark.benchmark to measure entire test functions automatically:

import pytest

@pytest.mark.benchmark
def test_sum_squares_performance():
    data = [1, 2, 3, 4, 5]
    result = sum(i**2 for i in data)
    assert result == 55
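The marker can also be applied to every test in a module at once through pytest's standard pytestmark variable; this is plain pytest behavior, not specific to the plugin:

import pytest

# Treat every test collected from this module as a benchmark
pytestmark = pytest.mark.benchmark

def test_sum_squares():
    assert sum(i**2 for i in [1, 2, 3, 4, 5]) == 55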

Since the marker measures the entire test function, you might want to use the benchmark fixture for precise control over what code gets measured:

def test_mean_performance(benchmark):
    data = [1, 2, 3, 4, 5]
    # Only the function call is measured
    result = benchmark(lambda: sum(i**2 for i in data))
    assert result == 55
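If wrapping the code in a lambda is awkward, the fixture can also be called with the function followed by its arguments, pytest-benchmark style (benchmark(fn, *args, **kwargs)), a calling convention pytest-codspeed's fixture accepts as well. A minimal sketch; sum_of_squares is a hypothetical helper:

def sum_of_squares(data):
    return sum(i**2 for i in data)

def test_sum_of_squares(benchmark):
    data = [1, 2, 3, 4, 5]  # setup stays outside the measured call
    # Arguments after the callable are forwarded to it;
    # only the call itself is measured.
    result = benchmark(sum_of_squares, data)
    assert result == 55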

Check out the full documentation for more details.

Testing the benchmarks locally

To run the benchmark tests locally, you can use the --codspeed pytest flag:

$ pytest tests/ --codspeed
============================= test session starts ====================
platform darwin -- Python 3.13.0, pytest-7.4.4, pluggy-1.5.0
codspeed: 3.0.0 (enabled, mode: walltime, timer_resolution: 41.7ns)
rootdir: /home/user/codspeed-test, configfile: pytest.ini
plugins: codspeed-3.0.0
collected 1 item

tests/test_sum_squares.py .                                    [ 100%]

                         Benchmark Results
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━┓
┃   Benchmark    ┃ Time (best) ┃ Rel. StdDev ┃ Run time ┃ Iters  ┃
┣━━━━━━━━━━━━━━━━╋━━━━━━━━━━━━━╋━━━━━━━━━━━━━╋━━━━━━━━━━╋━━━━━━━━┫
┃test_sum_squares┃     1,873ns ┃        4.8% ┃    3.00s ┃ 66,930 ┃
┗━━━━━━━━━━━━━━━━┻━━━━━━━━━━━━━┻━━━━━━━━━━━━━┻━━━━━━━━━━┻━━━━━━━━┛
=============================== 1 benchmarked ========================
=============================== 1 passed in 4.12s ====================
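Standard pytest selection applies as usual, so a single file or test can be benchmarked with the regular path and -k options (plain pytest behavior):

$ pytest tests/test_sum_squares.py --codspeed -k sum_squares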

Running the benchmarks in your CI

You can use the CodSpeedHQ/action to run the benchmarks in GitHub Actions and upload the results to CodSpeed.

Here is an example of a GitHub Actions workflow that runs the benchmarks and reports the results to CodSpeed on every push to the main branch and every pull request:

name: CodSpeed

on:
  push:
    branches:
      - "main" # or "master"
  pull_request: # required to have reports on PRs
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  benchmarks:
    name: Run benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"

      - name: Install dependencies
        run: pip install -r requirements.txt

      - name: Run benchmarks
        uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation   # or `walltime`
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: pytest tests/ --codspeed

Wheel compatibility matrix

Wheels are published for CPython 3.9 through 3.13 on the following platform tags, along with a pure-Python wheel (platform tag any):

  • manylinux2014_aarch64
  • manylinux2014_x86_64
  • manylinux_2_17_aarch64
  • manylinux_2_17_x86_64
  • manylinux_2_28_aarch64
  • manylinux_2_28_x86_64

Dependencies

  • cffi (>=1.17.1)
  • pytest (>=3.8)
  • rich (>=13.8.1)
  • importlib-metadata (>=8.5.0)