beaker-gantry 3.6.0


pip install beaker-gantry

  Latest version

Released: Mar 30, 2026


Meta
Author: Allen Institute for Artificial Intelligence, Pete Walsh
Requires Python: >=3.10

Classifiers

Intended Audience
  • Science/Research

Programming Language
  • Python :: 3

Topic
  • Scientific/Engineering :: Artificial Intelligence


Beaker Gantry

Gantry is a CLI that streamlines running experiments in Beaker.


⚡️ Easy to use

  • No Docker required! 🚫 🐳
  • No writing Beaker YAML experiment specs.
  • Easy setup.
  • Simple CLI.

🏎 Fast

  • Fire off Beaker experiments from your laptop instantly!
  • No local image build or upload.

🪶 Lightweight

  • Pure Python (built on top of Beaker's Python client, beaker-py).
  • Minimal dependencies.

Who is this for?

Gantry is for both new and seasoned Beaker users who need to run batch jobs (as opposed to interactive sessions) from a rapidly changing repository, especially Python-based jobs.

Without Gantry, this workflow usually looks like this:

  1. Add a Dockerfile to your repository.
  2. Build the Docker image locally.
  3. Push the Docker image to Beaker.
  4. Write a YAML Beaker experiment spec that points to the image you just uploaded.
  5. Submit the experiment spec.
  6. Make changes and repeat from step 2.

This requires experience with Docker, experience writing Beaker experiment specs, and a fast and reliable internet connection.

With Gantry, on the other hand, that same workflow simplifies down to this:

  1. (Optional) Write a pyproject.toml/setup.py file, a PIP requirements.txt file, or a conda environment.yml file to specify your Python environment.
  2. Commit and push your changes.
  3. Submit and track a Beaker experiment with the gantry run command.
  4. Make changes and repeat from step 2.

In this README

Additional info

👋 Examples

💻 For developers

Installing

Installing with pip

Gantry is available on PyPI. Just run

pip install beaker-gantry

Installing globally with uv

Gantry can be installed and made available on the PATH using uv:

uv tool install beaker-gantry

With this command, beaker-gantry is automatically installed to an isolated virtual environment.

Installing from source

To install Gantry from source, first clone the repository:

git clone https://github.com/allenai/beaker-gantry.git
cd beaker-gantry

Then run

pip install -e .

Quick start

One-time setup

  1. Create and clone your repository.

    If you haven't already done so, create a GitHub repository for your project and clone it locally. Every gantry command you run must be invoked from the root directory of your repository.

  2. Configure Gantry.

    If you've already configured the Beaker command-line client, Gantry will find and use the existing configuration file (usually located at $HOME/.beaker/config.yml). Otherwise just set the environment variable BEAKER_TOKEN to your Beaker user token.
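    If you don't have a Beaker config file yet, exporting the token in your shell is enough. A minimal sketch (the token value is a placeholder — substitute your own Beaker user token):

    ```shell
    # One way to set the token for the current shell session (or add this
    # to ~/.bashrc / ~/.zshrc); the value below is a placeholder.
    export BEAKER_TOKEN="your-beaker-user-token"
    ```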

    Some gantry settings can also be specified in a pyproject.toml file under the section [tool.gantry]. For now those settings are:

    1. workspace - The default Beaker workspace to use.
    2. gh_token_secret - The name of the Beaker secret with your GitHub API token.
    3. budget - The default Beaker budget to use.
    4. log_level - The (local) Python log level. Defaults to "warning".
    5. quiet - A boolean. If true the gantry logo won't be displayed on the command line.

    For example:

    # pyproject.toml
    [tool.gantry]
    workspace = "ai2/my-default-workspace"
    gh_token_secret = "GITHUB_TOKEN"
    budget = "ai2/my-teams-budget"
    log_level = "warning"
    quiet = false
    

    The first time you call gantry run ... you'll also be prompted to provide a GitHub personal access token with the repo scope if your repository is private. This allows Gantry to clone your private repository when it runs in Beaker. You don't have to do this just yet (Gantry will prompt you for it), but if you need to update this token later you can use the gantry config set-gh-token command.

  3. (Optional) Specify your Python environment.

    Typically you'll have to create one of several different files to specify your Python environment. There are three widely used options:

    1. A pyproject.toml or setup.py file.
    2. A PIP requirements.txt file.
    3. A conda environment.yml file.

    Gantry will automatically find and use these files to reconstruct your Python environment at runtime. Alternatively you can provide a custom Python install command with the --install option to gantry run, or skip the Python setup completely with --no-python.
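For example, a minimal requirements.txt at the repository root is all Gantry needs to rebuild your environment at runtime (the packages listed are illustrative):

```
# requirements.txt
numpy
requests
```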

Submit your first experiment with Gantry

Let's spin up a Beaker experiment that just prints "Hello, World!" from Python.

First make sure you've committed and pushed all changes so far in your repository. Then (from the root of your repository) run:

gantry run --show-logs -- python -c 'print("Hello, World!")'

❗Note: Everything after the -- is the command + arguments you want to run on Beaker. It's necessary to include the -- if any of your arguments look like options themselves (like -c in this example) so gantry can differentiate them from its own options.

In this case we didn't request any GPUs nor a specific cluster, so this could run on any Beaker cluster. We can use the --gpu-type and --gpus options to get GPUs. For example:

gantry run --show-logs --gpu-type=h100 --gpus=1 -- python -c 'print("Hello, World!")'

Or we can use the --cluster option to request clusters by their name or aliases. For example:

gantry run --show-logs --cluster=ai2/jupiter --gpus=1 -- python -c 'print("Hello, World!")'

Try gantry run --help to see all of the available options.

FAQ

Can I use my own Docker/Beaker image?

You sure can! Just set the --beaker-image TEXT or --docker-image TEXT option. Gantry can use any image that has bash, curl, and git installed.

If your image comes with a Python environment that you want gantry to use, add the flag --system-python. For example:

gantry run --show-logs --docker-image='python:3.10' --system-python -- python --version

Will Gantry work for GPU experiments?

Absolutely! This was the main use-case Gantry was developed for. Just set the --gpus INT option for gantry run to the number of GPUs you need, and optionally --gpu-type TEXT (e.g. --gpu-type=h100).

How can I save results or metrics from an experiment?

By default Gantry uses the /results directory in the container as the location of the results dataset, which is also exposed through the environment variable RESULTS_DIR. Everything your experiment writes to this directory will be persisted as a Beaker dataset when the experiment finalizes. You can also attach metrics to your experiment in Beaker by writing a JSON file called metrics.json to the results directory, or by calling the function gantry.api.write_metrics() from within your experiment.

How can I see the Beaker experiment spec that Gantry uses?

You can use the --dry-run option with gantry run to see what Gantry will submit without actually submitting an experiment. You can also use --save-spec PATH in combination with --dry-run to save the actual experiment spec to a YAML file.

How can I update Gantry's GitHub token?

Use the command gantry config set-gh-token.

How can I attach Beaker datasets to an experiment?

Use the --dataset option for gantry run. For example:

gantry run --show-logs --dataset='petew/squad-train:/input-data' -- ls /input-data

How can I attach a WEKA bucket to an experiment?

Use the --weka option for gantry run. For example:

gantry run --show-logs --weka='oe-training-default:/mount/weka' -- ls -l /mount/weka

How can I run distributed multi-node batch jobs with Gantry?

If you're using torchrun you can simply set the option --replicas INT along with the flag --torchrun. Gantry will automatically configure your experiment and torchrun to run your command with all GPUs across all replicas.

For example:

gantry run \
  --show-logs \
  --gpus=8 \
  --gpu-type='h100' \
  --replicas=2 \
  --torchrun \
  --install 'uv pip install . torch numpy --torch-backend=cu129' \
  -- python -m gantry.all_reduce_bench

In general, the three options --replicas INT, --leader-selection, --host-networking used together give you the ability to run distributed batch jobs. See the Beaker docs for more information. Consider also setting --propagate-failure, --propagate-preemption, and --synchronized-start-timeout TEXT depending on your workload.

Here's a complete example using torchrun manually (without the --torchrun flag):

gantry run \
  --show-logs \
  --gpus=8 \
  --gpu-type='h100' \
  --replicas=2 \
  --leader-selection \
  --host-networking \
  --propagate-failure \
  --propagate-preemption \
  --synchronized-start-timeout='5m' \
  --install 'uv pip install . torch numpy --torch-backend=cu129' \
  --exec-method=bash \
  -- torchrun \
    '--nnodes="$BEAKER_REPLICA_COUNT:$BEAKER_REPLICA_COUNT"' \
    '--nproc-per-node="$BEAKER_ASSIGNED_GPU_COUNT"' \
    '--rdzv-id=12347' \
    '--rdzv-backend=static' \
    '--rdzv-endpoint="$BEAKER_LEADER_REPLICA_HOSTNAME:29400"' \
    '--node-rank="$BEAKER_REPLICA_RANK"' \
    '--rdzv-conf="read_timeout=420"' \
    -m gantry.all_reduce_bench

Note that we have environment variables like BEAKER_REPLICA_COUNT in the arguments to our torchrun command that we want to have expanded at runtime. To accomplish this we do two things:

  1. We wrap those arguments in single quotes to avoid expanding them locally.
  2. We set --exec-method=bash to tell gantry to run our command and arguments with bash -c, which will do variable expansion.
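The quoting trick can be sanity-checked locally. This toy example (GREETING is a made-up variable standing in for the BEAKER_* ones) mimics what --exec-method=bash does at run time:

```shell
# Single quotes keep the variable un-expanded locally:
arg='$GREETING world'
echo "$arg"                          # prints the literal text: $GREETING world

# bash -c expands it at run time, as --exec-method=bash does on Beaker:
GREETING=hello bash -c "echo $arg"   # prints: hello world
```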

Alternatively you could put your whole torchrun command into a script, let's call it launch-torchrun.sh, without single quotes around the arguments. Then change your gantry run command like this:

 gantry run \
   --show-logs \
   --gpus=8 \
   --gpu-type='h100' \
   --replicas=2 \
   --leader-selection \
   --host-networking \
   --propagate-failure \
   --propagate-preemption \
   --synchronized-start-timeout='5m' \
   --install 'uv pip install . torch numpy --torch-backend=cu129' \
-  --exec-method='bash' \
-  -- torchrun \
-    '--nnodes="$BEAKER_REPLICA_COUNT:$BEAKER_REPLICA_COUNT"' \
-    '--nproc-per-node="$BEAKER_ASSIGNED_GPU_COUNT"' \
-    '--rdzv-id=12347' \
-    '--rdzv-backend=static' \
-    '--rdzv-endpoint="$BEAKER_LEADER_REPLICA_HOSTNAME:29400"' \
-    '--node-rank="$BEAKER_REPLICA_RANK"' \
-    '--rdzv-conf="read_timeout=420"' \
-    -m gantry.all_reduce_bench
+  -- ./launch-torchrun.sh
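A sketch of creating that launch-torchrun.sh (the script body mirrors the example above; the quoted heredoc delimiter keeps the BEAKER_* variables un-expanded when writing the file, since Beaker sets them at run time):

```shell
# Create the hypothetical launcher script at the repo root.
cat > launch-torchrun.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
exec torchrun \
  --nnodes="$BEAKER_REPLICA_COUNT:$BEAKER_REPLICA_COUNT" \
  --nproc-per-node="$BEAKER_ASSIGNED_GPU_COUNT" \
  --rdzv-id=12347 \
  --rdzv-backend=static \
  --rdzv-endpoint="$BEAKER_LEADER_REPLICA_HOSTNAME:29400" \
  --node-rank="$BEAKER_REPLICA_RANK" \
  --rdzv-conf="read_timeout=420" \
  -m gantry.all_reduce_bench
EOF
chmod +x launch-torchrun.sh
```

Remember to commit and push the script before running gantry.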

How can I customize the Python setup steps?

If gantry's default Python setup steps don't work for you, you can override them through the --install TEXT option with a custom command or shell script. For example:

gantry run --show-logs --install='pip install -r custom_requirements.txt' -- echo "Hello, World!"

Can I use conda like with older versions of gantry?

Yes, you can still use conda if you wish by committing a conda environment.yml file to your repo or by simply specifying --python-manager=conda. For example:

gantry run --show-logs --python-manager=conda -- which python

Can I use gantry with non-Python workloads?

Absolutely, just add the flag --no-python and optionally set --install or --post-setup to a custom command or shell script if you need custom setup steps.

Can I use gantry to launch Beaker jobs from GitHub Actions?

Yes, in fact this is a great way to utilize otherwise idle on-premise hardware, especially with short-running, preemptible jobs such as those you might launch to run unit tests that require accelerators. To do this you should set up a Beaker API token as a GitHub Actions Secret, named BEAKER_TOKEN, in your repository. Then copy and modify this workflow for your needs:

name: Beaker

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main

jobs:
  gpu_tests:
    name: GPU Tests
    runs-on: ubuntu-latest
    timeout-minutes: 15
    env:
      BEAKER_TOKEN: ${{ secrets.BEAKER_TOKEN }}
      GANTRY_GITHUB_TESTING: 'true'  # force better logging for CI
      BEAKER_WORKSPACE: 'ai2/your-workspace'  # TODO: change this to your Beaker workspace
    steps:
      - uses: actions/checkout@v5
        with:
          ref: ${{ github.event.pull_request.head.sha }}  # check out PR head commit instead of merge commit

      - uses: astral-sh/setup-uv@v6
        with:
          python-version: '3.12'

      - name: install gantry
        run:
          uv tool install 'beaker-gantry>=3.1,<4.0'

      - name: Determine current commit SHA (pull request)
        if: github.event_name == 'pull_request'
        run: |
          echo "COMMIT_SHA=${{ github.event.pull_request.head.sha }}" >> $GITHUB_ENV
          echo "BRANCH_NAME=${{ github.head_ref }}" >> $GITHUB_ENV

      - name: Determine current commit SHA (push)
        if: github.event_name != 'pull_request'
        run: |
          echo "COMMIT_SHA=$GITHUB_SHA" >> $GITHUB_ENV
          echo "BRANCH_NAME=${{ github.ref_name }}" >> $GITHUB_ENV

      - name: launch job
        run: |
          exec gantry run \
            --show-logs \
            --yes \
            --workspace ${{ env.BEAKER_WORKSPACE }} \
            --description 'GitHub Actions GPU tests' \
            --ref ${{ env.COMMIT_SHA }} \
            --branch ${{ env.BRANCH_NAME }} \
            --priority normal \
            --preemptible \
            --gpus 1 \
            --gpu-type h100 \
            --gpu-type a100 \
            -- pytest -v tests/cuda_tests/  # TODO: change to your own command

Note that we use exec gantry run ... instead of just gantry run. This ensures that if GitHub Actions cancels the job, the SIGINT and SIGTERM signals will propagate to gantry, allowing it to clean up gracefully and cancel the running job on Beaker.

Can I use gantry outside of a git repository?

Yes, you'll just need to provide the --remote option along with --ref and/or --branch. For example: gantry run --show-logs --yes --dry-run --remote allenai/beaker-gantry --branch main -- echo 'hello, world!'

Why "Gantry"?

A gantry is a structure that's used, among other things, to lift containers off of ships. Analogously, Beaker Gantry's purpose is to lift Docker containers (or at least the management of Docker containers) away from users.

3.6.0 Mar 30, 2026
3.5.3 Mar 07, 2026
3.5.2 Mar 05, 2026
3.5.1 Mar 02, 2026
3.5.0 Feb 19, 2026
3.4.6 Jan 30, 2026
3.4.5 Jan 29, 2026
3.4.4 Jan 29, 2026
3.4.3 Jan 14, 2026
3.4.2 Jan 14, 2026
3.4.1 Jan 13, 2026
3.4.0 Jan 09, 2026
3.3.1 Dec 12, 2025
3.3.0 Oct 20, 2025
3.3.0.dev20250930 Sep 30, 2025
3.3.0.dev20250925 Sep 25, 2025
3.3.0.dev20250905 Sep 05, 2025
3.3.0.dev20250827 Aug 27, 2025
3.2.0 Aug 26, 2025
3.1.0 Aug 25, 2025
3.0.0 Aug 06, 2025
3.0.0rc3 Jul 30, 2025
3.0.0rc2 Jul 21, 2025
3.0.0rc1 Jul 18, 2025
2.8.7 Oct 17, 2025
2.8.6 Sep 02, 2025
2.8.5 Jul 17, 2025
2.8.4 Jul 16, 2025
2.8.3 Jul 15, 2025
2.8.2 Jul 15, 2025
2.8.1 Jul 08, 2025
2.8.0 Jul 07, 2025
2.7.1 Jun 25, 2025
2.7.0 Jun 24, 2025
2.6.2 Jun 11, 2025
2.6.1 Jun 10, 2025
2.6.0 Jun 04, 2025
2.5.0 Jun 03, 2025
2.4.0 Jun 03, 2025
2.3.0 May 15, 2025
2.2.0 May 14, 2025
2.1.1 May 09, 2025
2.1.0 May 08, 2025
2.0.2 May 05, 2025
2.0.1 May 02, 2025
2.0.0 May 01, 2025
1.17.2 Oct 20, 2025
1.17.1 Jul 16, 2025
1.17.0 Apr 28, 2025
1.16.0 Apr 23, 2025
1.15.0 Apr 14, 2025
1.14.1 Mar 24, 2025
1.14.0 Mar 20, 2025
1.13.0 Feb 25, 2025
1.12.2 Feb 21, 2025
1.12.1 Jan 27, 2025
1.12.0 Jan 15, 2025
1.11.3 Jan 09, 2025
1.11.2 Jan 09, 2025
1.11.1 Jan 09, 2025
1.11.0 Jan 09, 2025
1.10.1 Jan 09, 2025
1.10.0 Nov 21, 2024
1.9.1 Nov 20, 2024
1.9.0 Nov 01, 2024
1.8.4 Oct 06, 2024
1.8.3 Aug 02, 2024
1.8.2 Jul 17, 2024
1.8.1 Jul 17, 2024
1.8.0 Jul 14, 2024
1.7.1 Jul 10, 2024
1.7.0 Jun 21, 2024
1.6.0 Jun 14, 2024
1.5.1 Jun 13, 2024
1.5.0 Jun 10, 2024
1.4.0 Jun 01, 2024
1.3.0 May 31, 2024
1.2.0 May 31, 2024
1.1.0 May 29, 2024
1.0.1 May 24, 2024
1.0.0 May 24, 2024
0.24.0 May 21, 2024
0.23.2 May 16, 2024
0.23.1 May 14, 2024
0.23.0 May 10, 2024
0.22.4 Apr 30, 2024
0.22.3 Apr 24, 2024
0.22.2 Mar 01, 2024
0.22.1 Feb 29, 2024
0.22.0 Feb 28, 2024
0.21.0 Jan 30, 2024
0.20.1 Dec 15, 2023
0.20.0 Dec 15, 2023
0.19.0 Sep 08, 2023
0.18.0 Aug 23, 2023
0.17.2 Jul 21, 2023
0.17.1 Jul 12, 2023
0.17.0 Jun 23, 2023
0.16.0 May 22, 2023
0.15.1 May 09, 2023
0.15.0 May 08, 2023
0.14.1 Apr 17, 2023
0.14.0 Apr 17, 2023
0.13.1 Mar 21, 2023
0.13.0 Mar 10, 2023
0.12.0 Mar 09, 2023
0.11.0 Mar 08, 2023
0.10.0 Mar 07, 2023
0.9.4 Mar 07, 2023
0.9.3 Mar 02, 2023
0.9.2 Mar 02, 2023
0.9.1 Feb 13, 2023
0.9.0 Feb 11, 2023
0.8.2 Jan 19, 2023
0.8.1 Sep 30, 2022
0.8.0 Sep 16, 2022
0.7.0 Jun 17, 2022
0.6.0 Jun 10, 2022
0.5.2 Jun 10, 2022
0.5.1 Jun 10, 2022
0.5.0 Jun 03, 2022
0.4.0 May 23, 2022
0.3.1 May 20, 2022
0.3.0 May 18, 2022
0.2.1 May 17, 2022
0.2.0 May 16, 2022
0.1.0 May 13, 2022

Wheel compatibility matrix

Platform Python 3
any

Files in release

Extras:
Dependencies:
beaker-py (<3.0,>=2.5.1)
GitPython (<4.0,>=3.0)
rich
click
click-help-colors
click-option-group
petname (<3.0,>=2.6)
requests
packaging
tomli
dataclass-extensions
PyYAML