Load any mixture of text-to-text data in one line of code
Author: IBM Research
Requires Python: >=3.8
Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data for end-to-end AI benchmarking.
Why Unitxt?
- Comprehensive: Evaluate text, tables, vision, speech, and code in one unified framework
- Enterprise-Ready: Battle-tested components with an extensive catalog of benchmarks
- Model Agnostic: Works with HuggingFace, OpenAI, WatsonX, and custom models
- Reproducible: Shareable, modular components ensure consistent results
Quick Links
- Documentation
- Getting Started
- Browse Catalog
Installation
pip install unitxt
Quick Start
Command Line Evaluation
# Simple evaluation
unitxt-evaluate \
--tasks "card=cards.mmlu_pro.engineering" \
--model cross_provider \
--model_args "model_name=llama-3-1-8b-instruct" \
--limit 10
# Multi-task evaluation
unitxt-evaluate \
--tasks "card=cards.text2sql.bird+card=cards.mmlu_pro.engineering" \
--model cross_provider \
--model_args "model_name=llama-3-1-8b-instruct,max_tokens=256" \
--split test \
--limit 10 \
--output_path ./results/evaluate_cli \
--log_samples \
--apply_chat_template
# Benchmark evaluation
unitxt-evaluate \
--tasks "benchmarks.tool_calling" \
--model cross_provider \
--model_args "model_name=llama-3-1-8b-instruct,max_tokens=256" \
--split test \
--limit 10 \
--output_path ./results/evaluate_cli \
--log_samples \
--apply_chat_template
Loading as a Dataset
Load thousands of datasets in chat API format, ready for any model:
from unitxt import load_dataset
dataset = load_dataset(
    card="cards.gpqa.diamond",
    split="test",
    format="formats.chat_api",
)
Available in the Unitxt Catalog
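Once loaded, the dataset can be passed straight to an inference engine and scored with evaluate. The sketch below reuses the HFAutoModelInferenceEngine shown in the complete example further down; the model name and token limit are illustrative choices, not requirements:
# Run a local Hugging Face model on the loaded dataset and score it
from unitxt import evaluate, load_dataset
from unitxt.inference import HFAutoModelInferenceEngine

dataset = load_dataset(
    card="cards.gpqa.diamond",
    split="test",
    format="formats.chat_api",
)

# Any small chat model works here; this one is only an example
model = HFAutoModelInferenceEngine(
    model_name="Qwen/Qwen1.5-0.5B-Chat", max_new_tokens=32
)

predictions = model(dataset)
results = evaluate(predictions=predictions, data=dataset)
print(results.global_scores.summary)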
Interactive Dashboard
Launch the graphical user interface to explore datasets and benchmarks:
pip install unitxt[ui]
unitxt-explore
Complete Python Example
Evaluate your own data with any model:
# Import required components
from unitxt import evaluate, create_dataset
from unitxt.blocks import Task, InputOutputTemplate
from unitxt.inference import HFAutoModelInferenceEngine

# Question-answer dataset
data = [
    {"question": "What is the capital of Texas?", "answer": "Austin"},
    {"question": "What is the color of the sky?", "answer": "Blue"},
]

# Define the task and evaluation metric
task = Task(
    input_fields={"question": str},
    reference_fields={"answer": str},
    prediction_type=str,
    metrics=["metrics.accuracy"],
)

# Create a template to format inputs and outputs
template = InputOutputTemplate(
    instruction="Answer the following question.",
    input_format="{question}",
    output_format="{answer}",
    postprocessors=["processors.lower_case"],
)

# Prepare the dataset
dataset = create_dataset(
    task=task,
    template=template,
    format="formats.chat_api",
    test_set=data,
    split="test",
)

# Set up the model (supports Hugging Face, WatsonX, OpenAI, etc.)
model = HFAutoModelInferenceEngine(
    model_name="Qwen/Qwen1.5-0.5B-Chat", max_new_tokens=32
)

# Generate predictions and evaluate
predictions = model(dataset)
results = evaluate(predictions=predictions, data=dataset)

# Print results
print("Global Results:\n", results.global_scores.summary)
print("Instance Results:\n", results.instance_scores.summary)
Contributing
Read the contributing guide for details on how to contribute to Unitxt.
Citation
If you use Unitxt in your research, please cite our paper:
@inproceedings{bandel-etal-2024-unitxt,
title = "Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative {AI}",
author = "Bandel, Elron and
Perlitz, Yotam and
Venezian, Elad and
Friedman, Roni and
Arviv, Ofir and
Orbach, Matan and
Don-Yehiya, Shachar and
Sheinwald, Dafna and
Gera, Ariel and
Choshen, Leshem and
Shmueli-Scheuer, Michal and
Katz, Yoav",
editor = "Chang, Kai-Wei and
Lee, Annie and
Rajani, Nazneen",
booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)",
month = jun,
year = "2024",
address = "Mexico City, Mexico",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.naacl-demo.21",
pages = "207--215",
}
1.26.9
Jan 13, 2026
1.26.8
Jan 06, 2026
1.26.7
Dec 03, 2025
1.26.6
Aug 07, 2025
1.26.5
Jul 31, 2025
1.26.4
Jul 22, 2025
1.26.3
Jul 16, 2025
1.26.2
Jul 16, 2025
1.26.1
Jul 10, 2025
1.26.0
Jul 09, 2025
1.25.0
Jun 25, 2025
1.24.0
Jun 03, 2025
1.23.1
May 29, 2025
1.23.0
May 13, 2025
1.22.4
May 04, 2025
1.22.3
Apr 27, 2025
1.22.2
Apr 16, 2025
1.22.1
Apr 09, 2025
1.22.0
Apr 06, 2025
1.21.0
Mar 19, 2025
1.20.0
Mar 09, 2025
1.19.0
Feb 25, 2025
1.18.0
Feb 04, 2025
1.17.2
Feb 02, 2025
1.17.1
Jan 27, 2025
1.17.0
Jan 21, 2025
1.16.4
Jan 07, 2025
1.16.3
Jan 07, 2025
1.16.2
Jan 07, 2025
1.16.1
Jan 05, 2025
1.16.0
Dec 23, 2024
1.15.10
Dec 09, 2024
1.15.9
Dec 01, 2024
1.15.8
Nov 26, 2024
1.15.7
Nov 22, 2024
1.15.6
Nov 19, 2024
1.14.1
Oct 27, 2024
1.14.0
Oct 20, 2024
1.13.1
Sep 30, 2024
1.13.0
Sep 25, 2024
1.12.4
Aug 28, 2024
1.12.3
Aug 15, 2024
1.12.2
Jul 31, 2024
1.12.0
Jul 31, 2024
1.11.1
Jul 08, 2024
1.11.0
Jul 07, 2024
1.10.3
Jul 04, 2024
1.10.2
Jul 04, 2024
1.10.1
Jul 01, 2024
1.10.0
Jun 03, 2024
1.9.0
May 20, 2024
1.8.1
May 06, 2024
1.8.0
May 05, 2024
1.7.9
May 05, 2024
1.7.8
May 05, 2024
1.7.7
Apr 17, 2024
1.7.6
Apr 08, 2024
1.7.4
Mar 28, 2024
1.7.3
Mar 28, 2024
1.7.2
Mar 24, 2024
1.7.1
Mar 13, 2024
1.7.0
Mar 05, 2024
1.6.6
Feb 08, 2024
1.6.5
Feb 07, 2024
1.6.4
Feb 05, 2024
1.6.3
Feb 05, 2024
1.6.2
Feb 05, 2024
1.6.1
Jan 30, 2024
1.6.0
Jan 30, 2024
1.5.3
Jan 22, 2024
1.5.2
Jan 22, 2024
1.5.1
Jan 18, 2024
1.5.0
Jan 18, 2024
1.4.6
Jan 11, 2024
1.4.5
Jan 11, 2024
1.4.4
Jan 11, 2024
1.4.3
Jan 09, 2024
1.4.2
Jan 08, 2024
1.4.1
Dec 31, 2023
1.4.0
Dec 31, 2023
1.3.2
Dec 19, 2023
1.3.1
Dec 18, 2023
1.3.0
Dec 17, 2023
1.2.0
Dec 10, 2023
1.1.4
Dec 03, 2023
1.1.3
Dec 03, 2023
1.1.1
Dec 03, 2023
1.0.47
Sep 18, 2023
1.0.45
Sep 18, 2023
1.0.43
Aug 23, 2023
1.0.42
Aug 16, 2023
1.0.38
Aug 08, 2023
1.0.35
Aug 07, 2023
1.0.34
Aug 07, 2023
1.0.31
Aug 06, 2023
1.0.20
Jul 20, 2023
1.0.19
Jul 20, 2023
1.0.18
Jul 19, 2023
1.0.17
Jul 19, 2023
1.0.16
Jul 19, 2023
1.0.15
Jul 19, 2023
1.0.14
Jul 19, 2023
1.0.13
Jul 19, 2023
1.0.12
Jul 19, 2023
1.0.11
Jul 19, 2023
1.0.10
Jul 19, 2023
1.0.9
Jul 18, 2023
1.0.7
Jul 16, 2023
1.0.3
Jul 13, 2023
1.0.2
Jul 12, 2023
1.0.0
Jul 10, 2023
0.0.1
Jun 19, 2023