LLM Evaluations
Meta
Author: Arize AI
Requires Python: <3.15,>=3.10
Classifiers
Programming Language
- Python
- Python :: 3.10
- Python :: 3.11
- Python :: 3.12
- Python :: 3.13
- Python :: 3.14
arize-phoenix-evals
Phoenix Evals provides lightweight, composable building blocks for writing and running evaluations on LLM applications, including tools to measure relevance and toxicity, detect hallucinations, and much more.
Features
- Works with your preferred model SDKs via adapters (OpenAI, LiteLLM, LangChain)
- Powerful input mapping and binding for working with complex data structures
- Several pre-built metrics for common evaluation tasks like hallucination detection
- Evaluators are natively instrumented via OpenTelemetry tracing for observability and dataset curation
- Blazing fast performance: up to 20x speedup with built-in concurrency and batching
- Tons of convenience features to improve the developer experience!
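The concurrency speedup mentioned above comes from running many evaluation calls in flight at once instead of one at a time. A minimal, self-contained sketch of the idea (using a mock evaluator with simulated latency, not the library's internals):

```python
import asyncio
import time

async def mock_evaluate(item: str) -> str:
    # Stand-in for one LLM-backed evaluation call (simulated 50 ms latency).
    await asyncio.sleep(0.05)
    return f"scored:{item}"

async def evaluate_concurrently(items, max_concurrency: int = 16):
    # Bound in-flight calls with a semaphore -- the same general idea
    # batched evaluation uses to respect provider rate limits.
    sem = asyncio.Semaphore(max_concurrency)

    async def run(item):
        async with sem:
            return await mock_evaluate(item)

    return await asyncio.gather(*(run(i) for i in items))

start = time.perf_counter()
results = asyncio.run(evaluate_concurrently([f"item-{i}" for i in range(16)]))
elapsed = time.perf_counter() - start
# 16 calls at 50 ms each would take ~0.8 s sequentially; concurrently they
# finish in roughly the latency of a single call.
```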
Installation
Install Phoenix Evals 2.0 using pip:
pip install 'arize-phoenix-evals>=2.0.0' openai
Quick Start
from phoenix.evals import create_classifier
from phoenix.evals.llm import LLM
# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")
# Create an evaluator
evaluator = create_classifier(
name="helpfulness",
prompt_template="Rate the response to the user query as helpful or not:\n\nQuery: {input}\nResponse: {output}",
llm=llm,
choices={"helpful": 1.0, "not_helpful": 0.0},
)
# Simple evaluation
scores = evaluator.evaluate({"input": "How do I reset?", "output": "Go to settings > reset."})
scores[0].pretty_print()
# With input mapping for nested data
scores = evaluator.evaluate(
{"data": {"query": "How do I reset?", "response": "Go to settings > reset."}},
input_mapping={"input": "data.query", "output": "data.response"}
)
scores[0].pretty_print()
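The dotted paths in `input_mapping` above select fields out of nested payloads. A minimal illustration of how such a mapping resolves (a sketch of the concept, not Phoenix's actual implementation):

```python
from typing import Any, Mapping

def resolve_path(payload: Mapping[str, Any], dotted: str) -> Any:
    # Walk a dotted path like "data.query" through nested dicts.
    value: Any = payload
    for key in dotted.split("."):
        value = value[key]
    return value

def apply_mapping(payload: Mapping[str, Any], input_mapping: Mapping[str, str]) -> dict:
    # Build the flat prompt-variable dict the evaluator's template expects.
    return {field: resolve_path(payload, path) for field, path in input_mapping.items()}

payload = {"data": {"query": "How do I reset?", "response": "Go to settings > reset."}}
mapped = apply_mapping(payload, {"input": "data.query", "output": "data.response"})
# mapped == {"input": "How do I reset?", "output": "Go to settings > reset."}
```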
Pre-Built Evaluators
The phoenix.evals.metrics module provides ready-to-use evaluators for common tasks:
| Evaluator | Class | Description |
|---|---|---|
| Faithfulness | FaithfulnessEvaluator | Detects hallucinations by checking whether the output is grounded in the context |
| Conciseness | ConcisenessEvaluator | Evaluates whether the response is appropriately concise |
| Correctness | CorrectnessEvaluator | Checks if the output is factually correct |
| Document Relevance | DocumentRelevanceEvaluator | Measures how relevant a retrieved document is to a query |
| Refusal | RefusalEvaluator | Detects whether the model refused to answer |
| Tool Invocation | ToolInvocationEvaluator | Checks whether the correct tool was called with the right arguments |
| Tool Selection | ToolSelectionEvaluator | Evaluates whether the right tool was selected for the task |
| Tool Response Handling | ToolResponseHandlingEvaluator | Evaluates how well the model uses a tool's response |
| Exact Match | exact_match | Checks for exact string equality between output and expected |
| Regex Match | MatchesRegex | Checks whether the output matches a regular expression |
| Precision/Recall | PrecisionRecallFScore | Computes precision, recall, and F-score for classification tasks |
from phoenix.evals.llm import LLM
from phoenix.evals.metrics import FaithfulnessEvaluator, exact_match, MatchesRegex
llm = LLM(provider="openai", model="gpt-4o")
# LLM-powered faithfulness evaluator
faithfulness = FaithfulnessEvaluator(llm=llm)
scores = faithfulness.evaluate({
"input": "What is the capital of France?",
"context": "Paris is the capital of France.",
"output": "The capital of France is Berlin.",
})
scores[0].pretty_print()
# Score(name='faithfulness', score=0.0, label='unfaithful', explanation='...')
# Code-based exact match
match_result = exact_match({"output": "Paris", "expected": "Paris"})
# Regex match
regex_result = MatchesRegex(pattern=r"^\d{4}-\d{2}-\d{2}$").evaluate({
"output": "2024-03-15"
})
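The anchored `^\d{4}-\d{2}-\d{2}$` pattern above requires the entire output to look like a date. The underlying check is just a full-string regex match, which can be sketched in plain Python (a hypothetical mimic of the result shape, not the library's code):

```python
import re
from dataclasses import dataclass

@dataclass
class RegexScore:
    # Hypothetical container mirroring the general shape of a Score result.
    name: str
    score: float

def matches_regex(pattern: str, output: str) -> RegexScore:
    # re.fullmatch requires the whole string to match, which is what the
    # anchored ^...$ pattern expresses.
    matched = re.fullmatch(pattern, output) is not None
    return RegexScore(name="matches_regex", score=1.0 if matched else 0.0)

print(matches_regex(r"\d{4}-\d{2}-\d{2}", "2024-03-15").score)  # 1.0
print(matches_regex(r"\d{4}-\d{2}-\d{2}", "March 15").score)    # 0.0
```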
LLM Providers
The LLM class supports multiple AI providers:
from phoenix.evals.llm import LLM
# OpenAI
llm = LLM(provider="openai", model="gpt-4o")
# Anthropic
llm = LLM(provider="anthropic", model="claude-3-5-sonnet-20241022")
# Google Gemini
llm = LLM(provider="google", model="gemini-1.5-pro")
# LiteLLM (unified interface for 100+ providers)
llm = LLM(provider="litellm", model="gpt-4o")
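The `provider` argument selects which SDK adapter backs the `LLM` instance. As an illustration of that dispatch idea only (a registry-pattern sketch with hypothetical names, not the library's actual internals):

```python
from typing import Callable, Dict

# Hypothetical adapter registry: each provider name maps to a factory
# that builds a client wrapper for that SDK.
_ADAPTERS: Dict[str, Callable[[str], str]] = {}

def register_adapter(provider: str):
    def wrap(factory: Callable[[str], str]) -> Callable[[str], str]:
        _ADAPTERS[provider] = factory
        return factory
    return wrap

@register_adapter("openai")
def make_openai(model: str) -> str:
    return f"openai-client:{model}"

@register_adapter("litellm")
def make_litellm(model: str) -> str:
    return f"litellm-client:{model}"

def build_llm(provider: str, model: str) -> str:
    # Look up the registered factory; unknown providers fail loudly.
    try:
        return _ADAPTERS[provider](model)
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}") from None

print(build_llm("openai", "gpt-4o"))   # openai-client:gpt-4o
print(build_llm("litellm", "gpt-4o"))  # litellm-client:gpt-4o
```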
Evaluating Dataframes
import pandas as pd
from phoenix.evals import create_classifier, evaluate_dataframe, async_evaluate_dataframe
from phoenix.evals.llm import LLM
# Create an LLM instance
llm = LLM(provider="openai", model="gpt-4o")
# Create multiple evaluators
relevance_evaluator = create_classifier(
name="relevance",
prompt_template="Is the response relevant to the query?\n\nQuery: {input}\nResponse: {output}",
llm=llm,
choices={"relevant": 1.0, "irrelevant": 0.0},
)
helpfulness_evaluator = create_classifier(
name="helpfulness",
prompt_template="Is the response helpful?\n\nQuery: {input}\nResponse: {output}",
llm=llm,
choices={"helpful": 1.0, "not_helpful": 0.0},
)
# Prepare your dataframe
df = pd.DataFrame([
{"input": "How do I reset my password?", "output": "Go to settings > account > reset password."},
{"input": "What's the weather like?", "output": "I can help you with password resets."},
])
# Synchronous evaluation
results_df = evaluate_dataframe(
dataframe=df,
evaluators=[relevance_evaluator, helpfulness_evaluator],
)
print(results_df.head())
# Async evaluation (up to 20x faster with large dataframes)
import asyncio
results_df = asyncio.run(async_evaluate_dataframe(
dataframe=df,
evaluators=[relevance_evaluator, helpfulness_evaluator],
))
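Conceptually, dataframe evaluation runs each evaluator over every row and appends the scores as new columns. A self-contained sketch of that layout, using a trivial keyword-overlap evaluator in place of an LLM call (hypothetical helper names; not the library's implementation):

```python
import pandas as pd

def keyword_relevance(row: dict) -> float:
    # Hypothetical code-based evaluator: crude word overlap between
    # the query and the response stands in for an LLM judgment.
    query_words = set(row["input"].lower().split())
    response_words = set(row["output"].lower().split())
    return 1.0 if query_words & response_words else 0.0

def evaluate_dataframe_sketch(df: pd.DataFrame, evaluators: dict) -> pd.DataFrame:
    # Append one score column per evaluator, mirroring the idea of each
    # evaluator contributing its own column of results.
    out = df.copy()
    for name, fn in evaluators.items():
        out[f"{name}_score"] = [fn(row) for row in df.to_dict("records")]
    return out

df = pd.DataFrame([
    {"input": "How do I reset my password?", "output": "Go to settings > account > reset password."},
    {"input": "What's the weather like?", "output": "I can help you with password resets."},
])
results = evaluate_dataframe_sketch(df, {"relevance": keyword_relevance})
# The second row shares no words with its query, so it scores 0.0.
```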
Documentation
- Full Documentation - Complete API reference and guides
- Phoenix Docs - Detailed use-cases and examples
- OpenInference - Auto-instrumentation libraries for frameworks
Community
Join our community to connect with thousands of AI builders:
- Join our Slack community.
- Read the Phoenix documentation.
- Ask questions and provide feedback in the #phoenix-support channel.
- Leave a star on our GitHub.
- Report bugs with GitHub Issues.
- Follow us on X.
- Check out our roadmap to see where we're heading next.