deepeval 3.9.5


pip install deepeval

  Latest version

Released: Mar 31, 2026


Meta
Author: Jeffrey Ip
Requires Python: >=3.9,<4.0

Classifiers

License
  • OSI Approved :: Apache Software License

Programming Language
  • Python :: 3
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11

The LLM Evaluation Framework

Documentation | Metrics and Features | Getting Started | Integrations | Confident AI

DeepEval is a simple-to-use, open-source framework for evaluating large-language-model systems. It is similar to Pytest but specialized for unit testing LLM apps. DeepEval incorporates the latest research to run evals via metrics such as G-Eval, task completion, answer relevancy, and hallucination, which use LLM-as-a-judge techniques and other NLP models that run locally on your machine.

Whether you're building AI agents, RAG pipelines, or chatbots, implemented via LangChain or OpenAI, DeepEval has you covered. With it, you can easily determine the optimal models, prompts, and architecture to improve your AI quality, prevent prompt drift, or even transition from OpenAI to Claude with confidence.

[!IMPORTANT] Need a place for your DeepEval testing data to live 🏡❤️? Sign up to the DeepEval platform to compare iterations of your LLM app, generate & share testing reports, and more.


Want to talk LLM evaluation, need help picking metrics, or just want to say hi? Come join our Discord.


🔥 Metrics and Features

  • 📐 A large variety of ready-to-use LLM eval metrics (all with explanations), powered by ANY LLM of your choice, statistical methods, or NLP models that run locally on your machine, covering all use cases:

    • Custom, All-Purpose Metrics:

      • G-Eval — a research-backed LLM-as-a-judge metric for evaluating on any custom criteria with human-like accuracy
      • DAG — DeepEval's graph-based deterministic LLM-as-a-judge metric builder
    • Agentic Metrics
    • RAG Metrics
      • Answer Relevancy — measure how relevant the RAG pipeline's output is to the input
      • Faithfulness — evaluate whether the RAG pipeline's output factually aligns with the retrieval context
      • Contextual Recall — measure how well the RAG pipeline's retrieval context aligns with the expected output
      • Contextual Precision — evaluate whether relevant nodes in the RAG pipeline's retrieval context are ranked higher
      • Contextual Relevancy — measure the overall relevance of the RAG pipeline's retrieval context to the input
      • RAGAS — average of answer relevancy, faithfulness, contextual precision, and contextual recall
    • Multi-Turn Metrics
      • Knowledge Retention — evaluate whether the chatbot retains factual information throughout a conversation
      • Conversation Completeness — measure whether the chatbot satisfies user needs throughout a conversation
      • Turn Relevancy — evaluate whether the chatbot generates consistently relevant responses throughout a conversation
      • Turn Faithfulness — check if the chatbot's responses are factually grounded in retrieval context across turns
      • Role Adherence — evaluate whether the chatbot adheres to its assigned role throughout a conversation
    • MCP Metrics
      • MCP Task Completion — evaluate how effectively an MCP-based agent accomplishes a task
      • MCP Use — measure how effectively an agent uses its available MCP servers
      • Multi-Turn MCP Use — evaluate MCP server usage across conversation turns
    • Multimodal Metrics
      • Text to Image — evaluate image generation quality based on semantic consistency and perceptual quality
      • Image Editing — evaluate image editing quality based on semantic consistency and perceptual quality
      • Image Coherence — measure how well images align with their accompanying text
      • Image Helpfulness — evaluate how effectively images contribute to user comprehension of the text
      • Image Reference — evaluate how accurately images are referred to or explained by accompanying text
    • Other Metrics
      • Hallucination — detect whether the LLM generates information not supported by the provided context
      • Summarization — evaluate whether summaries are factually correct and include necessary details
      • Bias — detect gender, racial, or political bias in LLM outputs
      • Toxicity — evaluate toxicity in LLM outputs
      • JSON Correctness — check whether the output matches an expected JSON schema
      • Prompt Alignment — measure whether the output aligns with instructions in the prompt template
  • 🎯 Supports both end-to-end and component-level LLM evaluation.

  • 🧩 Build your own custom metrics that are automatically integrated with DeepEval's ecosystem.

  • 🔮 Generate both single and multi-turn synthetic datasets for evaluation.

  • 🔗 Integrates seamlessly with ANY CI/CD environment.

  • 🧬 Optimize prompts automatically based on evaluation results.

  • 🏆 Easily benchmark ANY LLM on popular LLM benchmarks in under 10 lines of code, including MMLU, HellaSwag, DROP, BIG-Bench Hard, TruthfulQA, HumanEval, and GSM8K.
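To make the "build your own custom metrics" bullet concrete, here is a minimal, self-contained sketch of the shape a custom metric takes: it exposes a `measure()` method that produces a score in [0, 1] and a pass/fail flag against a threshold. The `TestCase` dataclass and `ExactMatchMetric` below are illustrations only, not DeepEval's real classes; see the custom-metrics docs for the actual `BaseMetric` interface.

```python
# Hypothetical sketch of the custom-metric pattern. Not DeepEval's API:
# in real usage you would subclass deepeval's metric base class instead.
from dataclasses import dataclass


@dataclass
class TestCase:
    input: str
    actual_output: str
    expected_output: str


class ExactMatchMetric:
    """Toy metric: scores 1.0 on an exact string match, else 0.0."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.score = None
        self.success = None

    def measure(self, test_case: TestCase) -> float:
        # Set a score in [0, 1] and a pass/fail flag against the threshold
        self.score = float(
            test_case.actual_output.strip() == test_case.expected_output.strip()
        )
        self.success = self.score >= self.threshold
        return self.score


metric = ExactMatchMetric(threshold=0.5)
tc = TestCase(input="2+2?", actual_output="4", expected_output="4")
print(metric.measure(tc))  # 1.0
```

The same shape (score plus threshold-based success) is what lets a custom metric slot into the rest of the ecosystem alongside the built-in ones.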


🔌 Integrations

DeepEval plugs into any LLM framework — OpenAI Agents, LangChain, CrewAI, and more. To scale evals across your team — or let anyone run them without writing code — Confident AI gives you a native platform integration.

Frameworks

  • OpenAI — evaluate and trace OpenAI applications via a client wrapper
  • OpenAI Agents — evaluate OpenAI Agents end-to-end in under a minute
  • LangChain — evaluate LangChain applications with a callback handler
  • LangGraph — evaluate LangGraph agents with a callback handler
  • Pydantic AI — evaluate Pydantic AI agents with type-safe validation
  • CrewAI — evaluate CrewAI multi-agent systems
  • Anthropic — evaluate and trace Claude applications via a client wrapper
  • AWS AgentCore — evaluate agents deployed on Amazon AgentCore
  • LlamaIndex — evaluate RAG applications built with LlamaIndex

☁️ Platform + Ecosystem

Confident AI is an all-in-one platform that integrates natively with DeepEval.

  • Manage datasets, trace LLM applications, run evaluations, and monitor responses in production — all from one platform.
  • Don't need a UI? Confident AI can also be your data persistence layer: run evals, pull datasets, and inspect traces straight from Claude Code or Cursor via Confident AI's MCP server.

Confident AI MCP Architecture


🚀 QuickStart

Let's pretend your LLM application is a RAG-based customer support chatbot; here's how DeepEval can help test what you've built.

Installation

DeepEval works with Python >= 3.9.

pip install -U deepeval

Create an account (highly recommended)

Using the DeepEval platform allows you to generate sharable testing reports in the cloud. It is free, requires no additional code to set up, and we highly recommend giving it a try.

To login, run:

deepeval login

Follow the instructions in the CLI to create an account, copy your API key, and paste it into the CLI. All test cases will automatically be logged (find more information on data privacy here).

Write your first test case

Create a test file:

touch test_chatbot.py

Open test_chatbot.py and write your first test case to run an end-to-end evaluation using DeepEval, which treats your LLM app as a black-box:

import pytest
from deepeval import assert_test
from deepeval.metrics import GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

def test_case():
    correctness_metric = GEval(
        name="Correctness",
        criteria="Determine if the 'actual output' is correct based on the 'expected output'.",
        evaluation_params=[LLMTestCaseParams.ACTUAL_OUTPUT, LLMTestCaseParams.EXPECTED_OUTPUT],
        threshold=0.5
    )
    test_case = LLMTestCase(
        input="What if these shoes don't fit?",
        # Replace this with the actual output from your LLM application
        actual_output="You have 30 days to get a full refund at no extra cost.",
        expected_output="We offer a 30-day full refund at no extra costs.",
        retrieval_context=["All customers are eligible for a 30 day full refund at no extra costs."]
    )
    assert_test(test_case, [correctness_metric])

Set your OPENAI_API_KEY as an environment variable (you can also evaluate using your own custom model, for more details visit this part of our docs):

export OPENAI_API_KEY="..."

And finally, run test_chatbot.py in the CLI:

deepeval test run test_chatbot.py

Congratulations! Your test case should have passed ✅ Let's break down what happened.

  • The variable input mimics a user input, and actual_output is a placeholder for what your application is supposed to output based on this input.
  • The variable expected_output represents the ideal answer for a given input, and GEval is a research-backed metric provided by deepeval for evaluating your LLM outputs on any custom criteria with human-like accuracy.
  • In this example, the metric criteria is the correctness of the actual_output based on the provided expected_output.
  • All metric scores range from 0 to 1; the threshold=0.5 threshold ultimately determines whether your test passes or not.

Read our documentation for more information!


Evaluating Nested Components

Use the @observe decorator to trace components (LLM calls, retrievers, tool calls, agents) and apply metrics at the component level — no need to rewrite your codebase:

from deepeval.tracing import observe, update_current_span
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.dataset import EvaluationDataset, Golden
from deepeval.metrics import GEval

correctness = GEval(
    name="Correctness",
    criteria="Determine if the 'actual output' is correct based on the 'expected output'.",
    evaluation_params=[LLMTestCaseParams.ACTUAL_OUTPUT, LLMTestCaseParams.EXPECTED_OUTPUT],
)

@observe(metrics=[correctness])
def inner_component():
    update_current_span(test_case=LLMTestCase(input="...", actual_output="..."))
    return "result"

@observe()
def llm_app(input: str):
    return inner_component()

dataset = EvaluationDataset(goldens=[Golden(input="Hi!")])
for golden in dataset.evals_iterator():
    llm_app(golden.input)

Learn more about component-level evaluations here.


Evaluate Without Pytest Integration

Alternatively, you can evaluate without Pytest, which is more suited for a notebook environment.

from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

answer_relevancy_metric = AnswerRelevancyMetric(threshold=0.7)
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    # Replace this with the actual output from your LLM application
    actual_output="We offer a 30-day full refund at no extra costs.",
    retrieval_context=["All customers are eligible for a 30 day full refund at no extra costs."]
)
evaluate([test_case], [answer_relevancy_metric])

Using Standalone Metrics

DeepEval is extremely modular, making it easy for anyone to use any of our metrics. Continuing from the previous example:

from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

answer_relevancy_metric = AnswerRelevancyMetric(threshold=0.7)
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    # Replace this with the actual output from your LLM application
    actual_output="We offer a 30-day full refund at no extra costs.",
    retrieval_context=["All customers are eligible for a 30 day full refund at no extra costs."]
)

answer_relevancy_metric.measure(test_case)
print(answer_relevancy_metric.score)
# All metrics also offer an explanation
print(answer_relevancy_metric.reason)

Note that some metrics are for RAG pipelines, while others are for fine-tuning. Make sure to use our docs to pick the right one for your use case.

Evaluating a Dataset / Test Cases in Bulk

In DeepEval, a dataset is simply a collection of test cases. Here is how you can evaluate these in bulk:

import pytest
from deepeval import assert_test
from deepeval.dataset import EvaluationDataset, Golden
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

dataset = EvaluationDataset(goldens=[Golden(input="What's the weather like today?")])

for golden in dataset.goldens:
    test_case = LLMTestCase(
        input=golden.input,
        actual_output=your_llm_app(golden.input)
    )
    dataset.add_test_case(test_case)

@pytest.mark.parametrize(
    "test_case",
    dataset.test_cases,
)
def test_customer_chatbot(test_case: LLMTestCase):
    answer_relevancy_metric = AnswerRelevancyMetric(threshold=0.5)
    assert_test(test_case, [answer_relevancy_metric])
Run this in the CLI; the optional -n flag runs tests in parallel:

deepeval test run test_<filename>.py -n 4

Alternatively, although we recommend using deepeval test run, you can evaluate a dataset/test cases without using our Pytest integration:

from deepeval import evaluate
...

evaluate(dataset, [answer_relevancy_metric])

A Note on Env Variables (.env / .env.local)

DeepEval auto-loads .env.local then .env from the current working directory at import time. Precedence: process env -> .env.local -> .env. Opt out with DEEPEVAL_DISABLE_DOTENV=1.

cp .env.example .env.local
# then edit .env.local (ignored by git)
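The precedence order above can be sketched as a simple merge (a toy illustration, not DeepEval's actual loader): values already in the process environment win over `.env.local`, which in turn wins over `.env`.

```python
# Toy sketch of dotenv precedence: process env -> .env.local -> .env.
def resolve_env(process_env: dict, dotenv_local: dict, dotenv: dict) -> dict:
    resolved = dict(dotenv)        # lowest precedence: .env
    resolved.update(dotenv_local)  # .env.local overrides .env
    resolved.update(process_env)   # process environment always wins
    return resolved


merged = resolve_env(
    process_env={"OPENAI_API_KEY": "from-shell"},
    dotenv_local={"OPENAI_API_KEY": "from-local", "MODEL": "gpt-4o"},
    dotenv={"MODEL": "gpt-3.5", "TIMEOUT": "30"},
)
print(merged["OPENAI_API_KEY"])  # from-shell
print(merged["MODEL"])           # gpt-4o
```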

DeepEval With Confident AI

Confident AI is an all-in-one platform to manage datasets, trace LLM applications, and run evaluations in production. Log in from the CLI to get started:

deepeval login

Then run your tests as usual — results are automatically synced to the platform:

deepeval test run test_chatbot.py


Prefer to stay in your IDE? Use DeepEval via Confident AI's MCP server as the persistent layer to run evals, pull datasets, and inspect traces without leaving your editor.

Confident AI MCP Architecture

Everything on Confident AI is available here.


Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.


Roadmap

Features:

  • Integration with Confident AI
  • Implement G-Eval
  • Implement RAG metrics
  • Implement Conversational metrics
  • Evaluation Dataset Creation
  • Red-Teaming
  • DAG custom metrics
  • Guardrails

Authors

Built by the founders of Confident AI. Contact jeffreyip@confident-ai.com for all enquiries.


License

DeepEval is licensed under Apache 2.0 - see the LICENSE.md file for details.


Wheel compatibility matrix

Platform Python 3
any

Files in release

Extras: None
Dependencies:
aiohttp
click (<8.4.0,>=8.0.0)
grpcio (<2.0.0,>=1.67.1)
jinja2
nest_asyncio
openai
opentelemetry-api (<2.0.0,>=1.24.0)
opentelemetry-sdk (<2.0.0,>=1.24.0)
portalocker
posthog (<6.0.0,>=5.4.0)
pydantic (<3.0.0,>=2.11.7)
pydantic-settings (<3.0.0,>=2.10.1)
pyfiglet
pytest
pytest-asyncio
pytest-repeat
pytest-rerunfailures
pytest-xdist
python-dotenv (<2.0.0,>=1.1.1)
requests (<3.0.0,>=2.31.0)
rich (<15.0.0,>=13.6.0)
sentry-sdk
setuptools
tabulate (<0.10.0,>=0.9.0)
tenacity (<=10.0.0,>=8.0.0)
tqdm (<5.0.0,>=4.66.1)
typer (<1.0.0,>=0.9)
wheel