trl 1.0.0


pip install trl

  Latest version

Released: Mar 30, 2026

Meta
Author: Leandro von Werra
Requires Python: >=3.10

Classifiers

Development Status
  • 2 - Pre-Alpha

Intended Audience
  • Developers
  • Science/Research

Natural Language
  • English

Operating System
  • OS Independent

Programming Language
  • Python :: 3
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12
  • Python :: 3.13
  • Python :: 3.14

TRL - Transformers Reinforcement Learning

TRL Banner


A comprehensive library to post-train foundation models


🎉 What's New

OpenEnv Integration: TRL now supports OpenEnv, the open-source framework from Meta for defining, deploying, and interacting with environments in reinforcement learning and agentic workflows.

Explore how to seamlessly integrate TRL with OpenEnv in our dedicated documentation.

Overview

TRL is a cutting-edge library designed for post-training foundation models using advanced techniques like Supervised Fine-Tuning (SFT), Group Relative Policy Optimization (GRPO), and Direct Preference Optimization (DPO). Built on top of the 🤗 Transformers ecosystem, TRL supports a variety of model architectures and modalities, and can be scaled up across various hardware setups.

Highlights

  • Trainers: Various fine-tuning methods are easily accessible via trainers like SFTTrainer, GRPOTrainer, DPOTrainer, RewardTrainer and more.

  • Efficient and scalable:

    • Leverages 🤗 Accelerate to scale from single GPU to multi-node clusters using methods like DDP and DeepSpeed.
    • Full integration with 🤗 PEFT enables training on large models with modest hardware via quantization and LoRA/QLoRA.
    • Integrates 🦥 Unsloth for accelerating training using optimized kernels.
  • Command Line Interface (CLI): A simple interface lets you fine-tune models without writing any code.
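As a sketch of the PEFT integration mentioned above: the trainers accept a peft_config, so LoRA fine-tuning only requires passing a LoraConfig (the hyperparameter values below are illustrative, not recommendations):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

# Illustrative LoRA settings; tune rank and alpha for your model and budget.
peft_config = LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    peft_config=peft_config,  # train LoRA adapters instead of full weights
)
trainer.train()
```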

Installation

Python Package

Install the library using pip:

pip install trl

From source

If you want to use the latest features before an official release, you can install TRL from source:

pip install git+https://github.com/huggingface/trl.git

Repository

If you want to use the examples, you can clone the repository with the following command:

git clone https://github.com/huggingface/trl.git

Quick Start

For more flexibility and control over training, TRL provides dedicated trainer classes to post-train language models or PEFT adapters on a custom dataset. Each trainer in TRL is a light wrapper around the 🤗 Transformers trainer and natively supports distributed training methods like DDP, DeepSpeed ZeRO, and FSDP.
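Because each trainer wraps the 🤗 Transformers Trainer, runs are configured through the trainer's config class, which exposes the familiar TrainingArguments fields. A minimal sketch with SFTConfig (the hyperparameter values are illustrative):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

# SFTConfig extends transformers.TrainingArguments, so the usual
# Trainer knobs (batch size, learning rate, logging) carry over.
training_args = SFTConfig(
    output_dir="Qwen2.5-0.5B-SFT",
    per_device_train_batch_size=2,
    learning_rate=2e-5,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```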

SFTTrainer

Here is a basic example of how to use the SFTTrainer:

from trl import SFTTrainer
from datasets import load_dataset

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)
trainer.train()

GRPOTrainer

GRPOTrainer implements the Group Relative Policy Optimization (GRPO) algorithm, which is more memory-efficient than PPO and was used to train DeepSeek's R1.

from datasets import load_dataset
from trl import GRPOTrainer
from trl.rewards import accuracy_reward

dataset = load_dataset("trl-lib/DeepMath-103K", split="train")

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=accuracy_reward,
    train_dataset=dataset,
)
trainer.train()

[!NOTE] For reasoning models, use the reasoning_accuracy_reward() function for better results.

DPOTrainer

DPOTrainer implements the popular Direct Preference Optimization (DPO) algorithm that was used to post-train Llama 3 and many other models. Here is a basic example of how to use the DPOTrainer:

from datasets import load_dataset
from trl import DPOTrainer

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model="Qwen/Qwen3-0.6B",
    train_dataset=dataset,
)
trainer.train()

RewardTrainer

Here is a basic example of how to use the RewardTrainer:

from trl import RewardTrainer
from datasets import load_dataset

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = RewardTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    train_dataset=dataset,
)
trainer.train()

Command Line Interface (CLI)

You can use the TRL Command Line Interface (CLI) to quickly get started with post-training methods like Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO):

SFT:

trl sft --model_name_or_path Qwen/Qwen2.5-0.5B \
    --dataset_name trl-lib/Capybara \
    --output_dir Qwen2.5-0.5B-SFT

DPO:

trl dpo --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
    --dataset_name argilla/Capybara-Preferences \
    --output_dir Qwen2.5-0.5B-DPO 

Read more about the CLI in the relevant documentation section, or pass --help for more details.

Development

If you want to contribute to trl or customize it to your needs, make sure to read the contribution guide and do a dev install:

git clone https://github.com/huggingface/trl.git
cd trl/
pip install -e ".[dev]"

Experimental

A minimal incubation area is available under trl.experimental for unstable / fast-evolving features. Anything there may change or be removed in any release without notice.

Example:

from trl.experimental.new_trainer import NewTrainer

Read more in the Experimental docs.

Citation

@software{vonwerra2020trl,
  title   = {{TRL: Transformers Reinforcement Learning}},
  author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
  license = {Apache-2.0},
  url     = {https://github.com/huggingface/trl},
  year    = {2020}
}

License

This repository's source code is available under the Apache-2.0 License.

1.0.0 Mar 30, 2026
1.0.0rc1 Mar 20, 2026
0.29.1 Mar 20, 2026
0.29.0 Feb 25, 2026
0.28.0 Feb 10, 2026
0.27.2 Feb 03, 2026
0.27.1 Jan 24, 2026
0.27.0 Jan 16, 2026
0.26.2 Dec 18, 2025
0.26.1 Dec 12, 2025
0.26.0 Dec 09, 2025
0.25.1 Nov 12, 2025
0.25.0 Nov 05, 2025
0.24.0 Oct 16, 2025
0.23.1 Oct 02, 2025
0.23.0 Sep 10, 2025
0.22.2 Sep 03, 2025
0.22.1 Aug 29, 2025
0.22.0 Aug 29, 2025
0.21.0 Aug 05, 2025
0.20.0 Jul 29, 2025
0.19.1 Jul 08, 2025
0.19.0 Jun 20, 2025
0.18.2 Jun 15, 2025
0.18.1 May 29, 2025
0.18.0 May 28, 2025
0.17.0 Apr 24, 2025
0.16.1 Apr 04, 2025
0.16.0 Mar 22, 2025
0.15.2 Feb 25, 2025
0.15.1 Feb 18, 2025
0.15.0 Feb 13, 2025
0.14.0 Jan 29, 2025
0.13.0 Dec 16, 2024
0.12.2 Dec 06, 2024
0.12.1 Nov 14, 2024
0.12.0 Nov 01, 2024
0.11.4 Oct 15, 2024
0.11.3 Oct 10, 2024
0.11.2 Oct 07, 2024
0.11.1 Sep 24, 2024
0.11.0 Sep 19, 2024
0.10.1 Aug 29, 2024
0.9.6 Jul 08, 2024
0.9.4 Jun 06, 2024
0.9.3 Jun 05, 2024
0.9.2 Jun 05, 2024
0.8.6 Apr 22, 2024
0.8.5 Apr 18, 2024
0.8.4 Apr 17, 2024
0.8.3 Apr 12, 2024
0.8.2 Apr 11, 2024
0.8.1 Mar 20, 2024
0.8.0 Mar 19, 2024
0.7.11 Feb 16, 2024
0.7.10 Jan 19, 2024
0.7.9 Jan 09, 2024
0.7.8 Jan 09, 2024
0.7.7 Dec 26, 2023
0.7.6 Dec 22, 2023
0.7.5 Dec 22, 2023
0.7.4 Nov 08, 2023
0.7.3 Nov 08, 2023
0.7.2 Oct 12, 2023
0.7.1 Aug 30, 2023
0.7.0 Aug 30, 2023
0.6.0 Aug 24, 2023
0.5.0 Aug 02, 2023
0.4.7 Jul 13, 2023
0.4.6 Jun 23, 2023
0.4.5 Jun 23, 2023
0.4.4 Jun 08, 2023
0.4.3 Jun 08, 2023
0.4.2 Jun 07, 2023
0.4.1 Mar 17, 2023
0.4.0 Mar 09, 2023
0.3.1 Mar 02, 2023
0.3.0 Mar 01, 2023
0.2.1 Jan 25, 2023
0.2.0 Jan 25, 2023
0.1.0 May 15, 2022
0.0.3 Feb 28, 2021
0.0.2 Jul 17, 2020
0.0.1 Mar 30, 2020

Wheel compatibility matrix

Platform: any
Python: 3
Files in release

Dependencies:
  • accelerate (>=1.4.0)
  • datasets (>=4.7.0)
  • packaging (>20.0)
  • transformers (>=4.56.2)