mlflow-skinny 3.10.1


pip install mlflow-skinny

  Latest version

Released: Mar 05, 2026


Meta
Maintainer: Databricks
Requires Python: >=3.10

Classifiers

Development Status
  • 5 - Production/Stable

Intended Audience
  • Developers
  • End Users/Desktop
  • Science/Research
  • Information Technology

Topic
  • Scientific/Engineering :: Artificial Intelligence
  • Software Development :: Libraries :: Python Modules

License
  • OSI Approved :: Apache Software License

Operating System
  • OS Independent

Programming Language
  • Python :: 3.10

📣 This is the mlflow-skinny package, a lightweight MLflow package without SQL storage, server, UI, or data science dependencies. Additional dependencies can be installed to leverage the full feature set of MLflow. For example:

  • To use the mlflow.sklearn component of MLflow Models, install scikit-learn, numpy, and pandas.
  • To use SQL-based metadata storage, install sqlalchemy, alembic, and sqlparse.
  • To use serving-based features, install flask and pandas.




Open-Source Platform for Productionizing AI

MLflow is an open-source developer platform to build AI/LLM applications and models with confidence. Enhance your AI applications with end-to-end experiment tracking, observability, and evaluations, all in one integrated platform.


🚀 Installation

To install the MLflow Python package, run the following command:

pip install mlflow

📦 Core Components

MLflow provides a unified solution for all your AI/ML needs, including LLMs, agents, deep learning, and traditional machine learning.

💡 For LLM / GenAI Developers


🔍 Tracing / Observability

Trace the internal states of your LLM/agentic applications for debugging quality issues and monitoring performance with ease.

Getting Started →


📊 LLM Evaluation

A suite of automated model evaluation tools, seamlessly integrated with experiment tracking to compare across multiple versions.

Getting Started →


🤖 Prompt Management

Version, track, and reuse prompts across your organization, helping maintain consistency and improve collaboration in prompt development.

Getting Started →


📦 App Version Tracking

MLflow keeps track of many moving parts in your AI applications, such as models, prompts, tools, and code, with end-to-end lineage.

Getting Started →

🎓 For Data Scientists


📝 Experiment Tracking

Track your models, parameters, metrics, and evaluation results in ML experiments and compare them using an interactive UI.

Getting Started →


💾 Model Registry

A centralized model store designed to collaboratively manage the full lifecycle and deployment of machine learning models.

Getting Started →


🚀 Deployment

Tools for seamless model deployment to batch and real-time scoring on platforms like Docker, Kubernetes, Azure ML, and AWS SageMaker.

Getting Started →
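For example, a registered model version could be served as a local REST scoring endpoint with the MLflow CLI (a sketch; the model URI and port are illustrative):

```shell
# Serve version 1 of a hypothetical registered model on port 5000.
mlflow models serve -m "models:/demo-model/1" --port 5000
```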

🌐 Hosting MLflow Anywhere


You can run MLflow in many different environments, including local machines, on-premise servers, and cloud infrastructure.

Trusted by thousands of organizations, MLflow is now offered as a managed service by most major cloud providers.

For hosting MLflow on your own infrastructure, please refer to this guidance.

🗣️ Supported Programming Languages

🔗 Integrations

MLflow is natively integrated with many popular machine learning frameworks and GenAI libraries.


Usage Examples

Tracing (Observability) (Doc)

MLflow Tracing provides LLM observability for various GenAI libraries such as OpenAI, LangChain, LlamaIndex, DSPy, AutoGen, and more. To enable auto-tracing, call the corresponding mlflow.<library>.autolog() function (e.g., mlflow.openai.autolog()) before running your models. Refer to the documentation for customization and manual instrumentation.

import mlflow
from openai import OpenAI

# Enable tracing for OpenAI
mlflow.openai.autolog()

# Query OpenAI LLM normally
response = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi!"}],
    temperature=0.1,
)

Then navigate to the "Traces" tab in the MLflow UI to find the trace records for the OpenAI query.

Evaluating LLMs, Prompts, and Agents (Doc)

The following example runs automatic evaluation for question-answering tasks with several built-in metrics.

import os
import openai
import mlflow
from mlflow.genai.scorers import Correctness, Guidelines

client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# 1. Define a simple QA dataset
dataset = [
    {
        "inputs": {"question": "Can MLflow manage prompts?"},
        "expectations": {"expected_response": "Yes!"},
    },
    {
        "inputs": {"question": "Can MLflow create a taco for my lunch?"},
        "expectations": {
            "expected_response": "No, unfortunately, MLflow is not a taco maker."
        },
    },
]


# 2. Define a prediction function to generate responses
def predict_fn(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content


# 3. Run the evaluation
results = mlflow.genai.evaluate(
    data=dataset,
    predict_fn=predict_fn,
    scorers=[
        # Built-in LLM judge
        Correctness(),
        # Custom criteria using LLM judge
        Guidelines(name="is_english", guidelines="The answer must be in English"),
    ],
)

Navigate to the "Evaluations" tab in the MLflow UI to find the evaluation results.

Tracking Model Training (Doc)

The following example trains a simple regression model with scikit-learn, while enabling MLflow's autologging feature for experiment tracking.

import mlflow

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable MLflow's automatic experiment tracking for scikit-learn
mlflow.sklearn.autolog()

# Load the training dataset
db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
# MLflow triggers logging automatically upon model fitting
rf.fit(X_train, y_train)

Once the above code finishes, run the following command in a separate terminal and open the MLflow UI via the printed URL. An MLflow Run is created automatically, tracking the training dataset, hyperparameters, performance metrics, the trained model, dependencies, and more.

mlflow server

💭 Support

  • For help or questions about MLflow usage (e.g., "how do I do X?"), visit the documentation.
  • In the documentation, you can ask questions to our AI-powered chatbot via the "Ask AI" button at the bottom right.
  • Join the virtual events like office hours and meetups.
  • To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.
  • For release announcements and other discussions, please subscribe to our mailing list (mlflow-users@googlegroups.com) or join us on Slack.

🤝 Contributing

We happily welcome contributions to MLflow!

Please see our contribution guide to learn more about contributing to MLflow.

⭐️ Star History

Star History Chart

✏️ Citation

If you use MLflow in your research, please cite it using the "Cite this repository" button at the top of the GitHub repository page, which will provide you with citation formats including APA and BibTeX.

👥 Core Members

MLflow is currently maintained by the following core members with significant contributions from hundreds of exceptionally talented community members.

3.11.0rc1 Apr 01, 2026
3.11.0rc0 Mar 16, 2026
3.10.1 Mar 05, 2026
3.10.0 Feb 20, 2026
3.10.0rc0 Feb 12, 2026
3.9.0 Jan 29, 2026
3.9.0rc0 Jan 16, 2026
3.8.1 Dec 26, 2025
3.8.0 Dec 21, 2025
3.8.0rc0 Dec 15, 2025
3.7.0 Dec 05, 2025
3.7.0rc0 Nov 27, 2025
3.6.0 Nov 07, 2025
3.6.0rc0 Nov 03, 2025
3.5.1 Oct 22, 2025
3.5.0 Oct 16, 2025
3.5.0rc0 Oct 08, 2025
3.4.0 Sep 17, 2025
3.4.0rc0 Sep 12, 2025
3.3.2 Aug 27, 2025
3.3.1 Aug 20, 2025
3.3.0 Aug 19, 2025
3.3.0rc0 Aug 13, 2025
3.2.0 Aug 05, 2025
3.2.0rc0 Jul 29, 2025
3.1.4 Jul 23, 2025
3.1.3 Jul 22, 2025
3.1.2 Jul 08, 2025
3.1.1 Jun 25, 2025
3.1.0 Jun 10, 2025
3.1.0rc0 May 29, 2025
3.0.1 Jun 25, 2025
3.0.0 Jun 10, 2025
3.0.0rc3 May 21, 2025
3.0.0rc2 May 13, 2025
3.0.0rc1 Apr 25, 2025
3.0.0rc0 Apr 04, 2025
2.22.4 Dec 05, 2025
2.22.3 Dec 05, 2025
2.22.2 Aug 28, 2025
2.22.1 Jun 06, 2025
2.22.0 Apr 24, 2025
2.22.0rc0 Apr 16, 2025
2.21.3 Apr 03, 2025
2.21.2 Mar 26, 2025
2.21.1 Mar 25, 2025
2.21.0 Mar 14, 2025
2.21.0rc0 Mar 05, 2025
2.20.4 Mar 13, 2025
2.20.3 Feb 26, 2025
2.20.2 Feb 13, 2025
2.20.1 Jan 30, 2025
2.20.0 Jan 23, 2025
2.20.0rc0 Jan 14, 2025
2.19.0 Dec 11, 2024
2.19.0rc0 Dec 04, 2024
2.18.0 Nov 18, 2024
2.18.0rc0 Nov 12, 2024
2.17.2 Oct 31, 2024
2.17.1 Oct 25, 2024
2.17.0 Oct 12, 2024
2.17.0rc0 Sep 27, 2024
2.16.2 Sep 17, 2024
2.16.1 Sep 13, 2024
2.16.0 Aug 30, 2024
2.15.1 Aug 06, 2024
2.15.0 Jul 29, 2024
2.15.0rc0 Jul 22, 2024
2.14.3 Jul 12, 2024
2.14.2 Jul 04, 2024
2.14.2.dev0 Jun 26, 2024
2.14.1 Jun 20, 2024
2.14.0 Jun 17, 2024
2.14.0rc0 Jun 10, 2024
2.13.2 Jun 06, 2024
2.13.1 May 30, 2024
2.13.0 May 20, 2024
2.12.2 May 09, 2024
2.12.1 Apr 17, 2024
2.12.0 Apr 17, 2024
2.11.4 May 17, 2024
2.11.3 Mar 21, 2024
2.11.2 Mar 20, 2024
2.11.1 Mar 06, 2024
2.11.0 Mar 01, 2024
2.10.2 Feb 09, 2024
2.10.1 Feb 06, 2024
2.10.0 Jan 26, 2024
2.9.2 Dec 14, 2023
2.9.1 Dec 07, 2023
2.9.0 Dec 06, 2023
2.8.1 Nov 16, 2023
2.8.0 Oct 29, 2023
2.7.1 Sep 17, 2023
2.7.0 Sep 12, 2023
2.6.0 Aug 15, 2023
2.5.0 Jul 17, 2023
2.4.2 Jul 10, 2023
2.4.1 Jun 10, 2023
2.4.0 Jun 06, 2023
2.3.2 May 12, 2023
2.3.1 Apr 28, 2023
2.3.0 Apr 18, 2023
2.2.2 Mar 14, 2023
2.2.1 Mar 02, 2023
2.2.0 Mar 02, 2023
2.1.1 Dec 26, 2022
2.1.0 Dec 21, 2022
2.0.1 Nov 15, 2022
2.0.0 Nov 15, 2022
2.0.0rc0 Nov 01, 2022
1.30.1 Apr 05, 2023
1.30.0 Oct 20, 2022
1.29.0 Sep 19, 2022
1.28.0 Aug 11, 2022
1.27.0 Jun 29, 2022
1.26.1 May 28, 2022
1.26.0 May 16, 2022
1.25.1 Apr 13, 2022
1.25.0 Apr 11, 2022
1.24.0 Feb 28, 2022
1.23.1 Jan 27, 2022
1.23.0 Jan 17, 2022
1.22.0 Nov 30, 2021
1.21.0 Oct 25, 2021
1.20.2 Sep 04, 2021
1.20.1 Aug 26, 2021
1.20.0 Aug 25, 2021
1.19.0 Jul 14, 2021
1.18.0 Jun 18, 2021
1.17.0 May 08, 2021
1.16.0 Apr 26, 2021
1.15.0 Mar 26, 2021
1.14.1 Mar 01, 2021
1.14.0 Feb 20, 2021

Wheel compatibility matrix

  Platform: any
  Python: 3

Files in release

Extras:
Dependencies:
cachetools (<8,>=5.0.0)
click (<9,>=7.0)
cloudpickle (<4)
databricks-sdk (<1,>=0.20.0)
fastapi (<1)
gitpython (<4,>=3.1.9)
importlib_metadata (!=4.7.0,<9,>=3.7.0)
opentelemetry-api (<3,>=1.9.0)
opentelemetry-proto (<3,>=1.9.0)
opentelemetry-sdk (<3,>=1.9.0)
packaging (<27)
protobuf (<7,>=3.12.0)
pydantic (<3,>=2.0.0)
python-dotenv (<2,>=0.19.0)
pyyaml (<7,>=5.1)
requests (<3,>=2.17.3)
sqlparse (<1,>=0.4.0)
typing-extensions (<5,>=4.0.0)
uvicorn (<1)