pytorch-lightning 2.5.5


pip install pytorch-lightning

  Latest version

Released: Sep 05, 2025


Meta
Author: Lightning AI et al.
Requires Python: >=3.9

Classifiers

Environment
  • Console

Natural Language
  • English

Development Status
  • 5 - Production/Stable

Intended Audience
  • Developers

Topic
  • Scientific/Engineering :: Artificial Intelligence
  • Scientific/Engineering :: Image Recognition
  • Scientific/Engineering :: Information Analysis

License
  • OSI Approved :: Apache Software License

Operating System
  • OS Independent

Programming Language
  • Python :: 3
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12

The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.


Website · Key Features · How To Use · Docs · Examples · Community · Lightning AI · License


Note: code coverage is above 90%, though build delays may make reported coverage appear lower.

PyTorch Lightning is just organized PyTorch

Lightning disentangles PyTorch code to decouple the science from the engineering.


Lightning Design Philosophy

Lightning enforces the following structure on your code, which makes it reusable and shareable:

  • Research code (the LightningModule).
  • Engineering code (you delete this; it is handled by the Trainer).
  • Non-essential research code (logging, etc.; this goes in Callbacks).
  • Data (use PyTorch DataLoaders or organize them into a LightningDataModule).

Once you do this, you can train on multiple GPUs, TPUs, CPUs, and HPUs, and even in 16-bit precision, without changing your code!
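The separation can be illustrated with a toy sketch in plain Python. This is not Lightning's implementation and all names here are made up; the point is only that the generic loop (engineering code) never changes between projects, while the research code lives in the module:

```python
class ToyModule:
    """Research code: how to compute a loss and update for one batch."""
    def __init__(self):
        self.weight = 0.0

    def training_step(self, batch):
        # toy "loss": squared distance of the weight from the batch value
        return (self.weight - batch) ** 2

    def apply_update(self, batch, lr=0.1):
        # toy optimizer step: move the weight toward the batch value
        grad = 2 * (self.weight - batch)
        self.weight -= lr * grad


class ToyTrainer:
    """Engineering code: the loop itself, reusable across projects."""
    def fit(self, module, data, max_epochs=1):
        losses = []
        for _ in range(max_epochs):
            for batch in data:
                losses.append(module.training_step(batch))
                module.apply_update(batch)
        return losses


losses = ToyTrainer().fit(ToyModule(), data=[1.0] * 20, max_epochs=2)
```

Swapping in a different `ToyModule` requires no changes to `ToyTrainer`; that reuse is what Lightning's `LightningModule`/`Trainer` split buys you at scale.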

Get started in just 15 minutes


Continuous Integration

Lightning is rigorously tested across multiple CPUs, GPUs and TPUs and against major Python and PyTorch versions.

Current build statuses

Lightning is tested against PyTorch 1.12, 1.13, 2.0, and 2.1 on:

  • Linux py3.9 [GPUs]
  • Linux (multiple Python versions)
  • OSX (multiple Python versions)
  • Windows (multiple Python versions)

How To Use

Step 0: Install

Simple installation from PyPI

pip install pytorch-lightning

Step 1: Add these imports

import os
import torch
from torch import nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
import pytorch_lightning as pl

Step 2: Define a LightningModule (nn.Module subclass)

A LightningModule defines a full system (e.g., a GAN, an autoencoder, BERT, or a simple image classifier).

class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 3))
        self.decoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

    def forward(self, x):
        # in lightning, forward defines the prediction/inference actions
        embedding = self.encoder(x)
        return embedding

    def training_step(self, batch, batch_idx):
        # training_step defines the train loop. It is independent of forward
        x, _ = batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer

Note: training_step defines the training loop. forward defines how the LightningModule behaves during inference/prediction.

Step 3: Train!

dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
train, val = random_split(dataset, [55000, 5000])

autoencoder = LitAutoEncoder()
trainer = pl.Trainer()
trainer.fit(autoencoder, DataLoader(train), DataLoader(val))

Advanced features

Lightning has 40+ advanced features designed for professional AI research at scale.

Here are some examples:

Highlighted feature code snippets

Train on GPUs without code changes

from pytorch_lightning import Trainer

# 8 GPUs
# no code changes needed
trainer = Trainer(max_epochs=1, accelerator="gpu", devices=8)

# 256 GPUs
trainer = Trainer(max_epochs=1, accelerator="gpu", devices=8, num_nodes=32)
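The total number of training processes (the distributed world size) is simply devices per node times the number of nodes. A quick sanity check in plain Python (illustrative helper, not a Lightning API):

```python
def world_size(devices: int, num_nodes: int = 1) -> int:
    """Total number of training processes across the cluster."""
    return devices * num_nodes

print(world_size(8, 32))  # 8 GPUs per node on 32 nodes -> 256
```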
Train on TPUs without code changes
# no code changes needed
trainer = Trainer(accelerator="tpu", devices=8)
16-bit precision
# no code changes needed
trainer = Trainer(precision="16-mixed")  # on older Lightning versions, pass precision=16
Experiment managers
from pytorch_lightning import loggers

# tensorboard
trainer = Trainer(logger=loggers.TensorBoardLogger("logs/"))

# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())

# comet
trainer = Trainer(logger=loggers.CometLogger())

# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())

# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())

# ... and dozens more
EarlyStopping
from pytorch_lightning.callbacks import EarlyStopping

es = EarlyStopping(monitor="val_loss")
trainer = Trainer(callbacks=[es])
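The rule behind EarlyStopping: stop when the monitored metric has not improved for `patience` consecutive checks. A plain-Python sketch of that rule (illustrative only; Lightning's callback adds min_delta handling, logging, distributed sync, etc.):

```python
def early_stop_index(val_losses, patience=3, min_delta=0.0):
    """Return the check index at which training would stop, or None."""
    best = float("inf")
    wait = 0
    for i, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return i
    return None

# improves, then plateaus for 3 consecutive checks -> stops at index 5
print(early_stop_index([1.0, 0.8, 0.7, 0.7, 0.71, 0.7], patience=3))
```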
Checkpointing
from pytorch_lightning.callbacks import ModelCheckpoint

checkpointing = ModelCheckpoint(monitor="val_loss")
trainer = Trainer(callbacks=[checkpointing])
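At its core, ModelCheckpoint keeps the weights from the epoch with the best monitored value. The bookkeeping reduces to picking the minimum (for a loss) over the metric history; a trivial plain-Python sketch, not Lightning's code:

```python
def best_epoch(scores):
    """scores: map of epoch -> monitored value (mode='min', as for a loss)."""
    return min(scores, key=scores.get)

print(best_epoch({0: 0.9, 1: 0.5, 2: 0.6}))  # epoch 1 has the lowest val_loss
```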
Export to torchscript (JIT) (production use)
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
Export to ONNX (production use)
import tempfile

autoencoder = LitAutoEncoder()
input_sample = torch.randn((1, 28 * 28))  # match the encoder's 28*28 input
with tempfile.NamedTemporaryFile(suffix=".onnx", delete=False) as tmpfile:
    autoencoder.to_onnx(tmpfile.name, input_sample, export_params=True)

Pro-level control of optimization (advanced users)

For complex/professional level work, you have optional full control of the optimizers.

class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        # access your optimizers; use_pl_optimizer=True is the default
        # (assumes configure_optimizers returns two optimizers)
        opt_a, opt_b = self.optimizers(use_pl_optimizer=True)

        loss_a = ...
        self.manual_backward(loss_a)
        opt_a.step()
        opt_a.zero_grad()

        loss_b = ...
        self.manual_backward(loss_b)
        opt_b.step()
        opt_b.zero_grad()
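Manual optimization means you own the backward/step/zero_grad cycle yourself. Stripped of PyTorch, that cycle is just repeated gradient steps; a pure-Python sketch minimizing f(x) = (x - 3)^2 by hand, for intuition only:

```python
def minimize(x=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (x - 3.0)  # "manual_backward": compute the gradient
        x -= lr * grad        # "opt.step()": apply the update
        # "opt.zero_grad()" is implicit here: grad is recomputed each loop
    return x

print(round(minimize(), 4))  # converges to the minimum at x = 3
```

In automatic optimization, Lightning runs this cycle for you; switching it off is useful when the update order itself is part of the research (e.g., alternating generator/discriminator steps in a GAN).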

Advantages over unstructured PyTorch

  • Models become hardware agnostic
  • Code is clear to read because engineering code is abstracted away
  • Easier to reproduce
  • Make fewer mistakes because Lightning handles the tricky engineering
  • Keeps all the flexibility (LightningModules are still PyTorch modules), but removes a ton of boilerplate
  • Lightning has dozens of integrations with popular machine learning tools.
  • Tested rigorously with every new PR. We test every combination of supported PyTorch and Python versions, every OS, multiple GPUs, and even TPUs.
  • Minimal running speed overhead (about 300 ms per epoch compared with pure PyTorch).

Examples

Self-supervised Learning
Convolutional Architectures
Reinforcement Learning
GANs
Classic ML

Community

The PyTorch Lightning community is maintained by

  • 10+ core contributors who are a mix of professional engineers, research scientists, and Ph.D. students from top AI labs.
  • 680+ active community contributors.

Want to help us build Lightning and reduce boilerplate for thousands of researchers? Learn how to make your first contribution here.

PyTorch Lightning is also part of the PyTorch ecosystem which requires projects to have solid testing, documentation and support.

Asking for help

If you have any questions please:

  1. Read the docs.
  2. Search through existing Discussions, or add a new question.
  3. Join our Discord community.
Release history

2.5.5 Sep 05, 2025
2.5.4 Aug 29, 2025
2.5.3 Aug 13, 2025
2.5.2 Jun 20, 2025
2.5.1.post0 Apr 25, 2025
2.5.1 Mar 19, 2025
2.5.1rc2 Mar 17, 2025
2.5.1rc1 Mar 08, 2025
2.5.1rc0 Mar 05, 2025
2.5.0.post0 Dec 21, 2024
2.5.0 Dec 20, 2024
2.5.0rc0 Dec 12, 2024
2.4.0 Aug 07, 2024
2.3.3 Jul 08, 2024
2.3.2 Jul 04, 2024
2.3.1 Jun 27, 2024
2.3.0 Jun 13, 2024
2.2.5 May 22, 2024
2.2.4 May 01, 2024
2.2.3 Apr 23, 2024
2.2.2 Apr 12, 2024
2.2.1 Mar 04, 2024
2.2.0.post0 Feb 12, 2024
2.2.0 Feb 08, 2024
2.2.0rc0 Feb 01, 2024
2.1.4 Feb 01, 2024
2.1.3 Dec 21, 2023
2.1.2 Nov 15, 2023
2.1.1 Nov 08, 2023
2.1.0 Oct 12, 2023
2.1.0rc1 Oct 10, 2023
2.1.0rc0 Aug 16, 2023
2.0.9.post0 Sep 28, 2023
2.0.9 Sep 14, 2023
2.0.8 Aug 30, 2023
2.0.7 Aug 16, 2023
2.0.6 Jul 25, 2023
2.0.5 Jul 10, 2023
2.0.4 Jun 22, 2023
2.0.3 Jun 07, 2023
2.0.2 Apr 24, 2023
2.0.1.post0 Apr 11, 2023
2.0.1 Mar 30, 2023
2.0.0 Mar 15, 2023
2.0.0rc0 Feb 23, 2023
1.9.5 Apr 12, 2023
1.9.4 Mar 02, 2023
1.9.3 Feb 21, 2023
1.9.2 Feb 15, 2023
1.9.1 Feb 10, 2023
1.9.0 Jan 18, 2023
1.9.0rc0 Jan 06, 2023
1.8.6 Dec 21, 2022
1.8.5.post0 Dec 16, 2022
1.8.5 Dec 15, 2022
1.8.4.post0 Dec 10, 2022
1.8.4 Dec 09, 2022
1.8.3.post2 Dec 09, 2022
1.8.3.post1 Nov 25, 2022
1.8.3.post0 Nov 23, 2022
1.8.3 Nov 23, 2022
1.8.2 Nov 18, 2022
1.8.1 Nov 10, 2022
1.8.0.post1 Nov 02, 2022
1.8.0 Nov 01, 2022
1.8.0rc2 Nov 01, 2022
1.8.0rc1 Oct 27, 2022
1.8.0rc0 Oct 25, 2022
1.7.7 Sep 22, 2022
1.7.6 Sep 13, 2022
1.7.5 Sep 07, 2022
1.7.4 Aug 31, 2022
1.7.3 Aug 25, 2022
1.7.2 Aug 17, 2022
1.7.1 Aug 09, 2022
1.7.0 Aug 02, 2022
1.7.0rc1 Jul 28, 2022
1.7.0rc0 Jul 27, 2022
1.6.5.post0 Dec 17, 2024
1.6.5 Jul 13, 2022
1.6.4 Jun 01, 2022
1.6.3 May 03, 2022
1.6.2 Apr 27, 2022
1.6.1 Apr 13, 2022
1.6.0 Mar 29, 2022
1.6.0rc1 Mar 25, 2022
1.6.0rc0 Mar 24, 2022
1.5.10.post0 Dec 17, 2024
1.5.10 Feb 09, 2022
1.5.9 Jan 20, 2022
1.5.8 Jan 05, 2022
1.5.7 Dec 21, 2021
1.5.6 Dec 15, 2021
1.5.5 Dec 07, 2021
1.5.4 Nov 30, 2021
1.5.3 Nov 24, 2021
1.5.2 Nov 16, 2021
1.5.1 Nov 09, 2021
1.5.0 Nov 02, 2021
1.5.0rc1 Oct 22, 2021
1.5.0rc0 Oct 11, 2021
1.4.9 Sep 30, 2021
1.4.8 Sep 22, 2021
1.4.7 Sep 15, 2021
1.4.6 Sep 10, 2021
1.4.5 Sep 01, 2021
1.4.4 Aug 24, 2021
1.4.3 Aug 23, 2021
1.4.2 Aug 11, 2021
1.4.1 Aug 03, 2021
1.4.0 Jul 27, 2021
1.4.0rc2 Jul 26, 2021
1.4.0rc1 Jul 21, 2021
1.4.0rc0 Jul 19, 2021
1.3.8 Jul 01, 2021
1.3.7.post0 Jun 23, 2021
1.3.7 Jun 22, 2021
1.3.6 Jun 17, 2021
1.3.5 Jun 09, 2021
1.3.4 Jun 03, 2021
1.3.3 May 26, 2021
1.3.2 May 19, 2021
1.3.1 May 11, 2021
1.3.0 May 06, 2021
1.3.0rc3 May 06, 2021
1.3.0rc2 May 04, 2021
1.3.0rc1 Apr 09, 2021
1.2.10 Apr 23, 2021
1.2.9 Apr 22, 2021
1.2.8 Apr 14, 2021
1.2.7 Apr 07, 2021
1.2.6 Mar 30, 2021
1.2.5 Mar 24, 2021
1.2.4 Mar 16, 2021
1.2.3 Mar 09, 2021
1.2.2 Mar 05, 2021
1.2.1 Feb 24, 2021
1.2.0 Feb 18, 2021
1.2.0rc2 Feb 18, 2021
1.2.0rc1 Feb 13, 2021
1.2.0rc0 Jan 28, 2021
1.1.8 Feb 08, 2021
1.1.7 Feb 03, 2021
1.1.6 Jan 26, 2021
1.1.5 Jan 21, 2021
1.1.4 Jan 12, 2021
1.1.3 Jan 06, 2021
1.1.2 Dec 23, 2020
1.1.1 Dec 15, 2020
1.1.0 Dec 10, 2020
1.0.8 Nov 24, 2020
1.0.7 Nov 17, 2020
1.0.6 Nov 11, 2020
1.0.5 Nov 04, 2020
1.0.4 Oct 27, 2020
1.0.3 Oct 20, 2020
1.0.2 Oct 15, 2020
1.0.1 Oct 14, 2020
1.0.0 Oct 13, 2020
0.10.0 Oct 07, 2020
0.9.0 Aug 20, 2020
0.8.5 Jul 10, 2020
0.8.4 Jul 01, 2020
0.8.3 Jun 29, 2020
0.8.1 Jun 19, 2020
0.7.6 May 15, 2020
0.7.5 Apr 27, 2020
0.7.3 Apr 10, 2020
0.7.1 Mar 06, 2020
0.6.0 Jan 21, 2020
0.5.3.3 Jan 21, 2020
0.5.3.2 Nov 09, 2019
0.5.3.1 Nov 07, 2019
0.5.3 Nov 06, 2019
0.5.2.1 Oct 10, 2019
0.5.2 Oct 10, 2019
0.5.1.3 Oct 06, 2019
0.5.1.2 Oct 06, 2019
0.5.1 Oct 05, 2019
0.5.0 Sep 26, 2019
0.4.9 Sep 16, 2019
0.4.8 Aug 31, 2019
0.4.7 Aug 24, 2019
0.4.6 Aug 15, 2019
0.4.5 Aug 13, 2019
0.4.4 Aug 12, 2019
0.4.3 Aug 10, 2019
0.4.2 Aug 08, 2019
0.4.1 Aug 08, 2019
0.4.0 Aug 08, 2019
0.3.6.9 Aug 03, 2019
0.3.6.8 Aug 01, 2019
0.3.6.7 Aug 01, 2019
0.3.6.6 Jul 28, 2019
0.3.6.5 Jul 28, 2019
0.3.6.4 Jul 27, 2019
0.3.6.3 Jul 27, 2019
0.3.6.1 Jul 26, 2019
0.3.6 Jul 25, 2019
0.3.5 Jul 25, 2019
0.3.4.1 Jul 23, 2019
0.3.4 Jul 22, 2019
0.3.3 Jul 21, 2019
0.3.2 Jul 21, 2019
0.3.1 Jul 21, 2019
0.3 Jul 21, 2019
0.2.6 Jul 20, 2019
0.2.5.2 Jul 18, 2019
0.2.5.1 Jul 18, 2019
0.2.5 Jul 18, 2019
0.2.4.1 Jul 17, 2019
0.2.4 Jul 16, 2019
0.2.3 Jul 14, 2019
0.2.2 Jul 11, 2019
0.2 Jul 09, 2019
0.0.2 Mar 31, 2019
Dependencies:
torch (>=2.1.0)
tqdm (>=4.57.0)
PyYAML (>5.4)
fsspec[http] (>=2022.5.0)
torchmetrics (>0.7.0)
packaging (>=20.0)
typing-extensions (>4.5.0)
lightning-utilities (>=0.10.0)