transformers 4.57.1


pip install transformers

Latest version, released Oct 14, 2025

Meta
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Requires Python: >=3.9.0

Classifiers

Development Status
  • 5 - Production/Stable

Intended Audience
  • Developers
  • Education
  • Science/Research

License
  • OSI Approved :: Apache Software License

Operating System
  • OS Independent

Programming Language
  • Python :: 3
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12
  • Python :: 3.13

Topic
  • Scientific/Engineering :: Artificial Intelligence

Hugging Face Transformers Library

English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी | Русский | Português | తెలుగు | Français | Deutsch | Italiano | Tiếng Việt | العربية | اردو | বাংলা |

State-of-the-art pretrained models for inference and training

Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training.

It centralizes the model definition so that this definition is agreed upon across the ecosystem. transformers is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch Lightning, ...), inference engines (vLLM, SGLang, TGI, ...), and adjacent modeling libraries (llama.cpp, mlx, ...) that leverage the model definition from transformers.

We pledge to help support new state-of-the-art models and democratize their usage by having their model definition be simple, customizable, and efficient.

There are over 1M Transformers model checkpoints on the Hugging Face Hub you can use.

Explore the Hub today to find a model and use Transformers to help you get started right away.

Installation

Transformers works with Python 3.9+, PyTorch 2.1+, TensorFlow 2.6+, and Flax 0.4.1+.

Create and activate a virtual environment with venv or uv, a fast Rust-based Python package and project manager.

# venv
python -m venv .my-env
source .my-env/bin/activate
# uv
uv venv .my-env
source .my-env/bin/activate

Install Transformers in your virtual environment.

# pip
pip install "transformers[torch]"

# uv
uv pip install "transformers[torch]"

Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the latest version may not be stable. Feel free to open an issue if you encounter an error.

git clone https://github.com/huggingface/transformers.git
cd transformers

# pip
pip install '.[torch]'

# uv
uv pip install '.[torch]'
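
If you plan to contribute, an editable install lets local changes to the cloned repository take effect without reinstalling. A minimal sketch using pip's standard -e flag (the same pattern as above, not a command prescribed by this page):

# pip (editable install for development)
pip install -e '.[torch]'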

Quickstart

Get started with Transformers right away with the Pipeline API. The Pipeline is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.

Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it. Finally, pass some text to prompt the model.

from transformers import pipeline

pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
pipeline("the secret to baking a really good cake is ")
[{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}]

To chat with a model, the usage pattern is the same. The only difference is that you need to construct a chat history (the input to the Pipeline) between you and the system.

[!TIP] You can also chat with a model directly from the command line.

transformers chat Qwen/Qwen2.5-0.5B-Instruct

import torch
from transformers import pipeline

chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", dtype=torch.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
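
To keep the conversation going, you can append the model's reply and a new user message to the chat and call the pipeline again. A minimal sketch, assuming the default behavior where generated_text contains the full chat history for chat-style inputs:

# generated_text holds the whole conversation, including the new assistant turn
chat = response[0]["generated_text"]
# the follow-up question here is just an illustrative placeholder
chat.append({"role": "user", "content": "Which of those is best on a rainy day?"})
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])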

Expand the examples below to see how Pipeline works for different modalities and tasks.

Automatic speech recognition

from transformers import pipeline

pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}

Image classification

from transformers import pipeline

pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
 {'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
  'score': 0.0016551691805943847},
 {'label': 'lorikeet', 'score': 0.00018523589824326336},
 {'label': 'African grey, African gray, Psittacus erithacus',
  'score': 7.85409429227002e-05},
 {'label': 'quail', 'score': 5.502637941390276e-05}]

Visual question answering

from transformers import pipeline

pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
    image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
    question="What is in the image?",
)
[{'answer': 'statue of liberty'}]

Why should I use Transformers?

  1. Easy-to-use state-of-the-art models:

    • High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
    • Low barrier to entry for researchers, engineers, and developers.
    • Few user-facing abstractions with just three classes to learn.
    • A unified API for using all our pretrained models (see the sketch after this list).
  2. Lower compute costs, smaller carbon footprint:

    • Share trained models instead of training from scratch.
    • Reduce compute time and production costs.
    • Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
  3. Choose the right framework for every part of a model's lifetime:

    • Train state-of-the-art models in 3 lines of code.
    • Move a single model between PyTorch/JAX/TF2.0 frameworks at will.
    • Pick the right framework for training, evaluation, and production.
  4. Easily customize a model or an example to your needs:

    • We provide examples for each architecture to reproduce the results published by its original authors.
    • Model internals are exposed as consistently as possible.
    • Model files can be used independently of the library for quick experiments.
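
Beyond the Pipeline, the unified API mentioned above boils down to a handful of Auto classes that load any supported checkpoint by name. A minimal sketch of the lower-level workflow (the checkpoint is just an example):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The same Auto classes work for any supported checkpoint
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize, run a forward pass, and map the top logit back to a label
inputs = tokenizer("Transformers keeps the API consistent across models.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])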

Why shouldn't I use Transformers?

  • This library is not a modular toolbox of building blocks for neural nets. The code in the model files is deliberately not refactored with additional abstractions, so that researchers can quickly iterate on each model without diving into extra abstractions/files.
  • The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like Accelerate.
  • The example scripts are only examples. They won't necessarily work out of the box on your specific use case, and you'll need to adapt the code for it to work.

100 projects using Transformers

Transformers is more than a toolkit for using pretrained models; it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.

To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the community with the awesome-transformers page, which lists 100 incredible projects built with Transformers.

If you own or use a project that you believe should be part of the list, please open a PR to add it!

Example models

You can test most of our models directly on their Hub model pages.

Expand each modality below to see a few example models for various use cases.

Audio
Computer vision
Multimodal
  • Audio or text to text with Qwen2-Audio
  • Document question answering with LayoutLMv3
  • Image or text to text with Qwen-VL
  • Image captioning BLIP-2
  • OCR-based document understanding with GOT-OCR2
  • Table question answering with TAPAS
  • Unified multimodal understanding and generation with Emu3
  • Vision to text with Llava-OneVision
  • Visual question answering with Llava
  • Visual referring expression segmentation with Kosmos-2
NLP
  • Masked word completion with ModernBERT
  • Named entity recognition with Gemma
  • Question answering with Mixtral
  • Summarization with BART
  • Translation with T5
  • Text generation with Llama
  • Text classification with Qwen

Citation

We now have a paper you can cite for the 🤗 Transformers library:

@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
Release history

4.57.1 Oct 14, 2025
4.57.0 Oct 03, 2025
4.56.2 Sep 19, 2025
4.56.1 Sep 04, 2025
4.56.0 Aug 29, 2025
4.55.4 Aug 22, 2025
4.55.3 Aug 21, 2025
4.55.2 Aug 13, 2025
4.55.1 Aug 13, 2025
4.55.0 Aug 05, 2025
4.54.1 Jul 29, 2025
4.54.0 Jul 25, 2025
4.53.3 Jul 22, 2025
4.53.2 Jul 11, 2025
4.53.1 Jul 04, 2025
4.53.0 Jun 26, 2025
4.52.4 May 30, 2025
4.52.3 May 22, 2025
4.52.2 May 21, 2025
4.52.1 May 20, 2025
4.52.0 May 20, 2025
4.51.3 Apr 14, 2025
4.51.2 Apr 10, 2025
4.51.1 Apr 08, 2025
4.51.0 Apr 05, 2025
4.50.3 Mar 28, 2025
4.50.2 Mar 27, 2025
4.50.1 Mar 25, 2025
4.50.0 Mar 21, 2025
4.49.0 Feb 17, 2025
4.48.3 Feb 07, 2025
4.48.2 Jan 30, 2025
4.48.1 Jan 20, 2025
4.48.0 Jan 10, 2025
4.47.1 Dec 17, 2024
4.47.0 Dec 05, 2024
4.46.3 Nov 18, 2024
4.46.2 Nov 05, 2024
4.46.1 Oct 29, 2024
4.46.0 Oct 24, 2024
4.45.2 Oct 07, 2024
4.45.1 Sep 26, 2024
4.45.0 Sep 25, 2024
4.44.2 Aug 22, 2024
4.44.1 Aug 20, 2024
4.44.0 Aug 06, 2024
4.43.4 Aug 05, 2024
4.43.3 Jul 26, 2024
4.43.2 Jul 24, 2024
4.43.1 Jul 23, 2024
4.43.0 Jul 23, 2024
4.42.4 Jul 11, 2024
4.42.3 Jun 28, 2024
4.42.2 Jun 28, 2024
4.42.1 Jun 27, 2024
4.42.0 Jun 27, 2024
4.41.2 May 30, 2024
4.41.1 May 22, 2024
4.41.0 May 17, 2024
4.40.2 May 06, 2024
4.40.1 Apr 23, 2024
4.40.0 Apr 18, 2024
4.39.3 Apr 02, 2024
4.39.2 Mar 28, 2024
4.39.1 Mar 22, 2024
4.39.0 Mar 21, 2024
4.38.2 Mar 01, 2024
4.38.1 Feb 22, 2024
4.38.0 Feb 21, 2024
4.37.2 Jan 29, 2024
4.37.1 Jan 24, 2024
4.37.0 Jan 22, 2024
4.36.2 Dec 18, 2023
4.36.1 Dec 14, 2023
4.36.0 Dec 11, 2023
4.35.2 Nov 15, 2023
4.35.1 Nov 14, 2023
4.35.0 Nov 02, 2023
4.34.1 Oct 18, 2023
4.34.0 Oct 03, 2023
4.33.3 Sep 27, 2023
4.33.2 Sep 15, 2023
4.33.1 Sep 06, 2023
4.33.0 Sep 05, 2023
4.32.1 Aug 28, 2023
4.32.0 Aug 22, 2023
4.31.0 Jul 18, 2023
4.30.2 Jun 13, 2023
4.30.1 Jun 09, 2023
4.30.0 Jun 08, 2023
4.29.2 May 16, 2023
4.29.1 May 11, 2023
4.29.0 May 10, 2023
4.28.1 Apr 14, 2023
4.28.0 Apr 13, 2023
4.27.4 Mar 29, 2023
4.27.3 Mar 23, 2023
4.27.2 Mar 20, 2023
4.27.1 Mar 15, 2023
4.27.0 Mar 15, 2023
4.26.1 Feb 09, 2023
4.26.0 Jan 24, 2023
4.25.1 Dec 01, 2022
4.25.0 Dec 01, 2022
4.24.0 Nov 01, 2022
4.23.1 Oct 11, 2022
4.23.0 Oct 10, 2022
4.22.2 Sep 27, 2022
4.22.1 Sep 16, 2022
4.22.0 Sep 14, 2022
4.21.3 Sep 05, 2022
4.21.2 Aug 24, 2022
4.21.1 Aug 04, 2022
4.21.0 Jul 27, 2022
4.20.1 Jun 21, 2022
4.20.0 Jun 16, 2022
4.19.4 Jun 10, 2022
4.19.3 Jun 09, 2022
4.19.2 May 16, 2022
4.19.1 May 13, 2022
4.19.0 May 12, 2022
4.18.0 Apr 06, 2022
4.17.0 Mar 03, 2022
4.16.2 Jan 31, 2022
4.16.1 Jan 28, 2022
4.16.0 Jan 27, 2022
4.15.0 Dec 22, 2021
4.14.1 Dec 15, 2021
4.14.0 Dec 15, 2021
4.13.0 Dec 09, 2021
4.12.5 Nov 17, 2021
4.12.4 Nov 16, 2021
4.12.3 Nov 03, 2021
4.12.2 Oct 29, 2021
4.12.1 Oct 29, 2021
4.12.0 Oct 28, 2021
4.11.3 Oct 06, 2021
4.11.2 Sep 30, 2021
4.11.1 Sep 29, 2021
4.11.0 Sep 27, 2021
4.10.3 Sep 22, 2021
4.10.2 Sep 10, 2021
4.10.1 Sep 10, 2021
4.10.0 Aug 31, 2021
4.9.2 Aug 09, 2021
4.9.1 Jul 26, 2021
4.9.0 Jul 22, 2021
4.8.2 Jun 30, 2021
4.8.1 Jun 24, 2021
4.8.0 Jun 23, 2021
4.7.0 Jun 17, 2021
4.6.1 May 20, 2021
4.6.0 May 12, 2021
4.5.1 Apr 13, 2021
4.5.0 Apr 06, 2021
4.4.2 Mar 18, 2021
4.4.1 Mar 16, 2021
4.4.0 Mar 16, 2021
4.3.3 Feb 24, 2021
4.3.2 Feb 09, 2021
4.3.1 Feb 09, 2021
4.3.0 Feb 08, 2021
4.3.0rc1 Feb 04, 2021
4.2.2 Jan 21, 2021
4.2.1 Jan 14, 2021
4.2.0 Jan 13, 2021
4.1.1 Dec 17, 2020
4.1.0 Dec 17, 2020
4.0.1 Dec 09, 2020
4.0.0 Nov 30, 2020
4.0.0rc1 Nov 19, 2020
3.5.1 Nov 13, 2020
3.5.0 Nov 10, 2020
3.4.0 Oct 20, 2020
3.3.1 Sep 29, 2020
3.3.0 Sep 28, 2020
3.2.0 Sep 22, 2020
3.1.0 Sep 01, 2020
3.0.2 Jul 06, 2020
3.0.1 Jul 03, 2020
3.0.0 Jun 29, 2020
2.11.0 Jun 02, 2020
2.10.0 May 22, 2020
2.9.1 May 14, 2020
2.9.0 May 07, 2020
2.8.0 Apr 06, 2020
2.7.0 Mar 30, 2020
2.6.0 Mar 24, 2020
2.5.1 Feb 24, 2020
2.5.0 Feb 19, 2020
2.4.1 Jan 31, 2020
2.4.0 Jan 31, 2020
2.3.0 Dec 20, 2019
2.2.2 Dec 13, 2019
2.2.1 Dec 03, 2019
2.2.0 Nov 26, 2019
2.1.1 Oct 11, 2019
2.1.0 Oct 09, 2019
2.0.0 Sep 26, 2019
0.1 Aug 17, 2016

Wheel compatibility matrix

Platform    Python 3
any         ✓

Files in release

Dependencies:
  • filelock
  • huggingface-hub (>=0.34.0, <1.0)
  • numpy (>=1.17)
  • packaging (>=20.0)
  • pyyaml (>=5.1)
  • regex (!=2019.12.17)
  • requests
  • tokenizers (>=0.22.0, <=0.23.0)
  • safetensors (>=0.4.3)
  • tqdm (>=4.27)
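
To reproduce this exact release in a project, you can pin it alongside the torch extra in a requirements file. A hypothetical snippet, not part of the package metadata:

# requirements.txt — pin the release documented on this page
transformers[torch]==4.57.1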