optimum 2.0.0


pip install optimum

  Latest version

Released: Oct 09, 2025

Meta
Author: HuggingFace Inc. Special Ops Team
Requires Python: >=3.9.0

Classifiers

Development Status
  • 5 - Production/Stable

License
  • OSI Approved :: Apache Software License

Intended Audience
  • Developers
  • Education
  • Science/Research

Operating System
  • OS Independent

Programming Language
  • Python :: 3
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11

Topic
  • Scientific/Engineering :: Artificial Intelligence

🤗 Optimum

Optimum is an extension of Transformers 🤖, Diffusers 🧨, TIMM 🖼️, and Sentence-Transformers 🤗, providing a set of optimization tools that enable maximum efficiency for training and running models on targeted hardware, while keeping things easy to use.

Installation

Optimum can be installed using pip as follows:

python -m pip install optimum

If you'd like to use the accelerator-specific features of Optimum, you can check the documentation and install the required dependencies according to the table below:

Accelerator                          Installation
ONNX                                 pip install --upgrade --upgrade-strategy eager optimum[onnx]
ONNX Runtime                         pip install --upgrade --upgrade-strategy eager optimum[onnxruntime]
ONNX Runtime GPU                     pip install --upgrade --upgrade-strategy eager optimum[onnxruntime-gpu]
Intel Neural Compressor              pip install --upgrade --upgrade-strategy eager optimum[neural-compressor]
OpenVINO                             pip install --upgrade --upgrade-strategy eager optimum[openvino]
IPEX                                 pip install --upgrade --upgrade-strategy eager optimum[ipex]
NVIDIA TensorRT-LLM                  docker run -it --gpus all --ipc host huggingface/optimum-nvidia
AMD Instinct GPUs and Ryzen AI NPU   pip install --upgrade --upgrade-strategy eager optimum[amd]
AWS Trainium & Inferentia            pip install --upgrade --upgrade-strategy eager optimum[neuronx]
Intel Gaudi Accelerators (HPU)       pip install --upgrade --upgrade-strategy eager optimum[habana]
FuriosaAI                            pip install --upgrade --upgrade-strategy eager optimum[furiosa]

The --upgrade --upgrade-strategy eager option is needed to ensure the different packages are upgraded to the latest possible version.

To install from source:

python -m pip install git+https://github.com/huggingface/optimum.git

For the accelerator-specific features, add the corresponding optimum[accelerator_type] extra to the command:

python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git

Accelerated Inference

Optimum provides multiple tools to export and run optimized models on various ecosystems:

  • ONNX / ONNX Runtime, one of the most popular open formats for model export, and a high-performance inference engine for deployment.
  • OpenVINO, a toolkit for optimizing, quantizing and deploying deep learning models on Intel hardware.
  • ExecuTorch, PyTorch’s native solution for on-device inference across mobile and edge devices.
  • Intel Gaudi Accelerators enabling optimal performance on first-gen Gaudi, Gaudi2 and Gaudi3.
  • AWS Inferentia for accelerated inference on Inf2 and Inf1 instances.
  • NVIDIA TensorRT-LLM.

The export and optimizations can be done both programmatically and from the command line.

ONNX + ONNX Runtime

🚨🚨🚨 The ONNX integration has moved to optimum-onnx, so make sure to follow its installation instructions 🚨🚨🚨

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[onnx]

It is possible to easily export Transformers, Diffusers, Sentence Transformers, and TIMM models to the ONNX format and to perform graph optimization as well as quantization.

For more information on the ONNX export, please check the documentation.

Once the model is exported to the ONNX format, we provide Python classes that let you run the exported model seamlessly, using ONNX Runtime as the backend.

For this, make sure you have ONNX Runtime installed; for more information, check out the installation instructions.

More details on how to run ONNX models with the ORTModelForXXX classes can be found here.

Intel (OpenVINO + Neural Compressor + IPEX)

Before you begin, make sure you have all the necessary libraries installed.

You can find more information on the different integrations in our documentation and in the examples of optimum-intel.

ExecuTorch

Before you begin, make sure you have all the necessary libraries installed:

pip install optimum-executorch@git+https://github.com/huggingface/optimum-executorch.git

Users can export Transformers models to ExecuTorch and run inference on edge devices within PyTorch's ecosystem.

For more information about exporting Transformers models to ExecuTorch, please check the Optimum-ExecuTorch documentation.

Quanto

Quanto is a PyTorch quantization backend that allows you to quantize a model either through the Python API or with the optimum-cli.

You can see more details and examples in the Quanto repository.

Accelerated training

Optimum provides wrappers around the original Transformers Trainer to enable easy training on powerful hardware. We support many providers:

Intel Gaudi Accelerators

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[habana]

You can find examples in the documentation and in the examples.

AWS Trainium

Before you begin, make sure you have all the necessary libraries installed:

pip install --upgrade --upgrade-strategy eager optimum[neuronx]

You can find examples in the documentation and in the tutorials.


Dependencies:
transformers (>=4.29)
torch (>=1.11)
packaging
numpy
huggingface_hub (>=0.8.0)