vllm 0.19.0


pip install vllm

  Latest version

Released: Apr 03, 2026


Meta
Author: vLLM Team
Requires Python: <3.14,>=3.10

Classifiers

Programming Language
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12
  • Python :: 3.13

Intended Audience
  • Developers
  • Information Technology
  • Science/Research

Topic
  • Scientific/Engineering :: Artificial Intelligence
  • Scientific/Engineering :: Information Analysis

vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a vLLM website to help you get started with vLLM. Visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with CUDA/HIP graph
  • Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
  • Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server
  • Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, Arm CPUs, and Google TPUs. Additionally, support for diverse hardware plugins such as Intel Gaudi, IBM Spyre, and Huawei Ascend.
  • Prefix caching support
  • Multi-LoRA support

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and -V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.

Getting Started

Install vLLM with pip or from source:

pip install vllm

Visit our documentation to learn more.
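For offline batched inference, vLLM exposes a small Python API built around `LLM` and `SamplingParams`. A minimal quickstart sketch (the model name is an example; any supported Hugging Face model works, and running it requires suitable accelerator hardware):

```python
# Minimal offline-inference sketch with vLLM's Python API.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")  # example model
params = SamplingParams(temperature=0.8, max_tokens=64)

# Pass a list of prompts; vLLM batches them continuously under the hood.
outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```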

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • To discuss with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit

0.19.0 Apr 03, 2026
0.18.1 Mar 31, 2026
0.18.0 Mar 20, 2026
0.17.1 Mar 11, 2026
0.17.0 Mar 07, 2026
0.16.0 Feb 26, 2026
0.15.1 Feb 05, 2026
0.15.0 Jan 29, 2026
0.14.1 Jan 24, 2026
0.14.0 Jan 20, 2026
0.13.0 Dec 19, 2025
0.12.0 Dec 03, 2025
0.11.2 Nov 20, 2025
0.11.1 Nov 19, 2025
0.11.0 Oct 04, 2025
0.10.2 Sep 13, 2025
0.10.1.1 Aug 20, 2025
0.10.1 Aug 19, 2025
0.10.0 Jul 25, 2025
0.9.2 Jul 08, 2025
0.9.1 Jun 10, 2025
0.9.0.1 May 30, 2025
0.9.0 May 28, 2025
0.8.5.post1 May 02, 2025
0.8.5 Apr 28, 2025
0.8.4 Apr 15, 2025
0.8.3 Apr 06, 2025
0.8.2 Mar 25, 2025
0.8.1 Mar 19, 2025
0.8.0 Mar 18, 2025
0.7.3 Feb 20, 2025
0.7.2 Feb 06, 2025
0.7.1 Feb 01, 2025
0.7.0 Jan 27, 2025
0.6.6.post1 Dec 27, 2024
0.6.6 Dec 27, 2024
0.6.5 Dec 18, 2024
0.6.4.post1 Nov 15, 2024
0.6.4 Nov 15, 2024
0.6.3.post1 Oct 17, 2024
0.6.3 Oct 14, 2024
0.6.2 Sep 25, 2024
0.6.1.post2 Sep 13, 2024
0.6.1.post1 Sep 13, 2024
0.6.1 Sep 11, 2024
0.6.0 Sep 05, 2024
0.5.5 Aug 23, 2024
0.5.4 Aug 05, 2024
0.5.3.post1 Jul 23, 2024
0.5.3 Jul 23, 2024
0.5.2 Jul 15, 2024
0.5.1 Jul 06, 2024
0.5.0.post1 Jun 14, 2024
0.5.0 Jun 11, 2024
0.4.3 Jun 01, 2024
0.4.2 May 05, 2024
0.4.1 Apr 24, 2024
0.4.0.post1 Apr 03, 2024
0.4.0 Mar 31, 2024
0.3.3 Mar 01, 2024
0.3.2 Feb 21, 2024
0.3.1 Feb 17, 2024
0.3.0 Jan 31, 2024
0.2.7 Jan 04, 2024
0.2.6 Dec 17, 2023
0.2.5 Dec 14, 2023
0.2.4 Dec 11, 2023
0.2.3 Dec 03, 2023
0.2.2 Nov 19, 2023
0.2.1.post1 Oct 17, 2023
0.2.1 Oct 16, 2023
0.2.0 Sep 28, 2023
0.1.7 Sep 11, 2023
0.1.6 Sep 08, 2023
0.1.5 Sep 08, 2023
0.1.4 Aug 25, 2023
0.1.3 Aug 02, 2023
0.1.2 Jul 05, 2023
0.1.1 Jun 22, 2023
0.1.0 Jun 20, 2023
0.0.1 Jun 19, 2023

Wheel compatibility matrix

Platform CPython >=3.8 (abi3)
manylinux_2_31_aarch64
manylinux_2_31_x86_64

Files in release

Dependencies:
regex
cachetools
psutil
sentencepiece
numpy
requests (>=2.26.0)
tqdm
blake3
py-cpuinfo
transformers (<5,>=4.56.0)
tokenizers (>=0.21.1)
protobuf (!=6.30.*,!=6.31.*,!=6.32.*,!=6.33.0.*,!=6.33.1.*,!=6.33.2.*,!=6.33.3.*,!=6.33.4.*,>=5.29.6)
fastapi[standard] (>=0.115.0)
aiohttp (>=3.13.3)
openai (>=2.0.0)
pydantic (>=2.12.0)
prometheus_client (>=0.18.0)
pillow
prometheus-fastapi-instrumentator (>=7.0.0)
tiktoken (>=0.6.0)
lm-format-enforcer (==0.11.3)
llguidance (<1.4.0,>=1.3.0)
outlines_core (==0.2.11)
diskcache (==5.6.3)
lark (==1.2.2)
xgrammar (<1.0.0,>=0.1.32)
typing_extensions (>=4.10)
filelock (>=3.16.1)
partial-json-parser
pyzmq (>=25.0.0)
msgspec
gguf (>=0.17.0)
mistral_common[image] (>=1.10.0)
opencv-python-headless (>=4.13.0)
pyyaml
six (>=1.16.0)
setuptools (<81.0.0,>=77.0.3)
einops
compressed-tensors (==0.14.0.1)
depyf (==0.20.0)
cloudpickle
watchfiles
python-json-logger
ninja
pybase64
cbor2
ijson
setproctitle
openai-harmony (>=0.0.3)
anthropic (>=0.71.0)
model-hosting-container-standards (<1.0.0,>=0.1.13)
mcp
opentelemetry-sdk (>=1.27.0)
opentelemetry-api (>=1.27.0)
opentelemetry-exporter-otlp (>=1.27.0)
opentelemetry-semantic-conventions-ai (>=0.4.1)
numba (==0.61.2)
torch (==2.10.0)
torchaudio (==2.10.0)
torchvision (==0.25.0)
flashinfer-python (==0.6.6)
flashinfer-cubin (==0.6.6)
nvidia-cudnn-frontend (<1.19.0,>=1.13.0)
nvidia-cutlass-dsl (>=4.4.0.dev1)
quack-kernels (>=0.2.7)