langsmith 0.4.37


pip install langsmith

  Latest version

Released: Oct 15, 2025


Meta
Author: LangChain
Requires Python: >=3.9


LangSmith Client SDK


This package contains the Python client for interacting with the LangSmith platform.

To install:

pip install -U langsmith
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=ls_...

Then trace:

import openai
from langsmith.wrappers import wrap_openai
from langsmith import traceable

# Auto-trace LLM calls in-context
client = wrap_openai(openai.Client())

@traceable # Auto-trace this function
def pipeline(user_input: str):
    result = client.chat.completions.create(
        messages=[{"role": "user", "content": user_input}],
        model="gpt-3.5-turbo"
    )
    return result.choices[0].message.content

pipeline("Hello, world!")

See the resulting nested trace here.

LangSmith helps you and your team develop and evaluate language models and intelligent agents. It is compatible with any LLM application.

Cookbook: For tutorials on how to get more value out of LangSmith, check out the LangSmith Cookbook repo.

A typical workflow looks like:

  1. Set up an account with LangSmith.
  2. Log traces while debugging and prototyping.
  3. Run benchmark evaluations and continuously improve with the collected data.

We'll walk through these steps in more detail below.

1. Connect to LangSmith

Sign up for LangSmith using your GitHub or Discord account, or with an email address and password. If you sign up with an email, make sure to verify your email address before logging in.

Then, create a unique API key on the Settings Page, which is found in the menu at the top right corner of the page.

Note: Save the API Key in a secure location. It will not be shown again.

2. Log Traces

You can log traces natively using the LangSmith SDK or within your LangChain application.

Logging Traces with LangChain

LangSmith seamlessly integrates with the Python LangChain library to record traces from your LLM applications.

  1. Copy the environment variables from the Settings Page and add them to your application.

Tracing can be activated by setting the following environment variables or by manually specifying the LangChainTracer.

import os
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
# os.environ["LANGSMITH_ENDPOINT"] = "https://eu.api.smith.langchain.com" # If signed up in the EU region
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set
# os.environ["LANGSMITH_WORKSPACE_ID"] = "<YOUR-WORKSPACE-ID>" # Required for org-scoped API keys

Tip: Projects are groups of traces. All runs are logged to a project. If not specified, the project is set to default.

  2. Run an Agent, Chain, or Language Model in LangChain

If the environment variables are correctly set, your application will automatically connect to the LangSmith platform.

from langchain_core.runnables import chain

@chain
def add_val(x: dict) -> dict:
    return {"val": x["val"] + 1}

add_val({"val": 1})
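
If you prefer not to rely on the environment variables, you can instead specify the LangChainTracer manually (as noted above) and pass it as a per-invocation callback. A minimal sketch; the project name is a placeholder:

from langchain_core.runnables import chain
from langchain_core.tracers import LangChainTracer

tracer = LangChainTracer(project_name="My Project Name")  # placeholder project name

@chain
def add_val(x: dict) -> dict:
    return {"val": x["val"] + 1}

# Pass the tracer as a callback for this invocation only
add_val.invoke({"val": 1}, config={"callbacks": [tracer]})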

Logging Traces Outside LangChain

You can still use the LangSmith development platform without depending on any LangChain code.

  1. Copy the environment variables from the Settings Page and add them to your application.

import os
os.environ["LANGSMITH_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGSMITH_API_KEY"] = "<YOUR-LANGSMITH-API-KEY>"
# os.environ["LANGSMITH_PROJECT"] = "My Project Name" # Optional: "default" is used if not set

  2. Log traces

The easiest way to log traces using the SDK is via the @traceable decorator. Below is an example.

from datetime import datetime
from typing import List, Optional, Tuple

import openai
from langsmith import traceable
from langsmith.wrappers import wrap_openai

client = wrap_openai(openai.Client())

@traceable
def argument_generator(query: str, additional_description: str = "") -> str:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a debater making an argument on a topic."
             f"{additional_description}"
             f" The current time is {datetime.now()}"},
            {"role": "user", "content": f"The discussion topic is {query}"}
        ]
    ).choices[0].message.content



@traceable
def argument_chain(query: str, additional_description: str = "") -> str:
    argument = argument_generator(query, additional_description)
    # ... Do other processing or call other functions...
    return argument

argument_chain("Why is blue better than orange?")

Alternatively, you can manually log events using the Client directly or using a RunTree, which is what the traceable decorator manages for you. A Client-based sketch follows the RunTree example below.

A RunTree tracks your application. Each RunTree object is required to have a name and run_type. These and other important attributes are as follows:

  • name: str - used to identify the component's purpose
  • run_type: str - Currently one of "llm", "chain" or "tool"; more options will be added in the future
  • inputs: dict - the inputs to the component
  • outputs: Optional[dict] - the (optional) returned values from the component
  • error: Optional[str] - Any error messages that may have arisen during the call

from langsmith.run_trees import RunTree

parent_run = RunTree(
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    # project_name= "Defaults to the LANGSMITH_PROJECT env var"
)
parent_run.post()
# .. My Chat Bot calls an LLM
child_llm_run = parent_run.create_child(
    name="My Proprietary LLM",
    run_type="llm",
    inputs={
        "prompts": [
            "You are an AI Assistant. The time is XYZ."
            " Summarize this morning's meetings."
        ]
    },
)
child_llm_run.post()
child_llm_run.end(
    outputs={
        "generations": [
            "I should use the transcript_loader tool"
            " to fetch meeting_transcripts from XYZ"
        ]
    }
)
child_llm_run.patch()
# ..  My Chat Bot takes the LLM output and calls
# a tool / function for fetching transcripts ..
child_tool_run = parent_run.create_child(
    name="transcript_loader",
    run_type="tool",
    inputs={"date": "XYZ", "content_type": "meeting_transcripts"},
)
child_tool_run.post()
# The tool returns meeting notes to the chat bot
child_tool_run.end(outputs={"meetings": ["Meeting1 notes.."]})
child_tool_run.patch()

child_chain_run = parent_run.create_child(
    name="Unreliable Component",
    run_type="tool",
    inputs={"input": "Summarize these notes..."},
)
child_chain_run.post()

try:
    # .... the component does work
    raise ValueError("Something went wrong")
    child_chain_run.end(outputs={"output": "foo"})
    child_chain_run.patch()
except Exception as e:
    child_chain_run.end(error=f"I errored again {e}")
    child_chain_run.patch()
    pass
# .. The chat agent recovers

parent_run.end(outputs={"output": ["The meeting notes are as follows:..."]})
res = parent_run.patch()
res.result()
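
If you need even lower-level control, you can log runs with the Client directly, as mentioned above. A minimal sketch; the run name and inputs are placeholders:

import uuid
from datetime import datetime, timezone

from langsmith import Client

client = Client()
run_id = uuid.uuid4()

# Create the run up front so it is visible while the work is in flight
client.create_run(
    id=run_id,
    name="My Chat Bot",
    run_type="chain",
    inputs={"text": "Summarize this morning's meetings."},
    start_time=datetime.now(timezone.utc),
)

# ... do the actual work ...

# Then record the outputs and end time once finished
client.update_run(
    run_id,
    outputs={"output": "The meeting notes are as follows:..."},
    end_time=datetime.now(timezone.utc),
)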

Create a Dataset from Existing Runs

Once your runs are stored in LangSmith, you can convert them into a dataset. For this example, we will do so using the Client, but you can also do this using the web interface, as explained in the LangSmith docs.

from langsmith import Client

client = Client()
dataset_name = "Example Dataset"
# We will only use examples from the top level AgentExecutor run here,
# and exclude runs that errored.
runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)

dataset = client.create_dataset(dataset_name, description="An example dataset")
for run in runs:
    client.create_example(
        inputs=run.inputs,
        outputs=run.outputs,
        dataset_id=dataset.id,
    )
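
Once the dataset exists, one way to exercise it is to run a target function over it with evaluate from langsmith.evaluation (see the evaluation docs for the up-to-date API). A rough sketch; my_app and exact_match are placeholder names:

from langsmith.evaluation import evaluate
from langsmith.schemas import Example, Run

def my_app(inputs: dict) -> dict:
    # Placeholder target: call your model or chain here
    return {"output": f"echo: {inputs}"}

def exact_match(run: Run, example: Example) -> dict:
    # Placeholder evaluator comparing run outputs to the stored reference outputs
    score = int(run.outputs == example.outputs)
    return {"key": "exact_match", "score": score}

evaluate(
    my_app,
    data="Example Dataset",
    evaluators=[exact_match],
    experiment_prefix="example-dataset-eval",
)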

Evaluating Runs

Check out the LangSmith Testing & Evaluation docs for up-to-date workflows.

For generating automated feedback on individual runs, you can run evaluations directly using the LangSmith client.

from typing import Optional

from langsmith import Client
from langsmith.evaluation import StringEvaluator

client = Client()


def jaccard_chars(output: str, answer: str) -> float:
    """Naive Jaccard similarity between two strings."""
    prediction_chars = set(output.strip().lower())
    answer_chars = set(answer.strip().lower())
    intersection = prediction_chars.intersection(answer_chars)
    union = prediction_chars.union(answer_chars)
    return len(intersection) / len(union)


def grader(run_input: str, run_output: str, answer: Optional[str]) -> dict:
    """Compute the score and/or label for this run."""
    if answer is None:
        value = "AMBIGUOUS"
        score = 0.5
    else:
        score = jaccard_chars(run_output, answer)
        value = "CORRECT" if score > 0.9 else "INCORRECT"
    return dict(score=score, value=value)

evaluator = StringEvaluator(evaluation_name="Jaccard", grading_function=grader)

runs = client.list_runs(
    project_name="my_project",
    execution_order=1,
    error=False,
)
for run in runs:
    client.evaluate_run(run, evaluator)

Integrations

LangSmith easily integrates with your favorite LLM framework.

OpenAI SDK

We provide a convenient wrapper for the OpenAI SDK.

In order to use, you first need to set your LangSmith API key.

export LANGSMITH_API_KEY=<your-api-key>

Next, you will need to install the LangSmith SDK:

pip install -U langsmith

After that, you can wrap the OpenAI client:

from openai import OpenAI
from langsmith import wrappers

client = wrappers.wrap_openai(OpenAI())

Now, you can use the OpenAI client as you normally would, but now everything is logged to LangSmith!

client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
)

Oftentimes, you use the OpenAI client inside of other functions. You can get nested traces by using this wrapped client and decorating those functions with @traceable. See this documentation for more information on how to use the decorator.

from langsmith import traceable

@traceable(name="Call OpenAI")
def my_function(text: str):
    return client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Say {text}"}],
    )

my_function("hello world")

Instructor

We provide a convenient integration with Instructor, which works largely because Instructor builds on top of the OpenAI SDK.

In order to use, you first need to set your LangSmith API key.

export LANGSMITH_API_KEY=<your-api-key>

Next, you will need to install the LangSmith SDK:

pip install -U langsmith

After that, you can wrap the OpenAI client:

from openai import OpenAI
from langsmith import wrappers

client = wrappers.wrap_openai(OpenAI())

After this, you can patch the wrapped client using instructor:

import instructor

# Patch the wrapped client so calls are both traced by LangSmith and handled by instructor
client = instructor.patch(client)

Now, you can use instructor as you normally would, but now everything is logged to LangSmith!

from pydantic import BaseModel


class UserDetail(BaseModel):
    name: str
    age: int


user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ]
)

Oftentimes, you use instructor inside of other functions. You can get nested traces by using this wrapped client and decorating those functions with @traceable. See this documentation for more information on how to use the decorator.

@traceable()
def my_function(text: str) -> UserDetail:
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=UserDetail,
        messages=[
            {"role": "user", "content": f"Extract {text}"},
        ]
    )


my_function("Jason is 25 years old")

Pytest Plugin

The LangSmith pytest plugin lets Python developers define their datasets and evaluations as pytest test cases. See the online docs for more information.

This plugin is installed as part of the LangSmith SDK and is enabled by default. See also the official pytest docs: How to install and use plugins.
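
As a rough illustration (assuming the @pytest.mark.langsmith marker and the langsmith.testing logging helpers described in the online docs; generate_sql is a placeholder for your own code):

import pytest

from langsmith import testing as t


def generate_sql(question: str) -> str:
    # Placeholder for the application code under test
    return "SELECT * FROM customers;"


@pytest.mark.langsmith  # log this test case as a LangSmith experiment
def test_generate_sql():
    question = "Get all customers"
    expected = "SELECT * FROM customers;"
    t.log_inputs({"question": question})        # recorded as example inputs
    t.log_reference_outputs({"sql": expected})  # recorded as reference outputs
    sql = generate_sql(question)
    t.log_outputs({"sql": sql})                 # recorded as run outputs
    assert sql == expected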

Additional Documentation

To learn more about the LangSmith platform, check out the docs.

0.4.37 Oct 15, 2025
0.4.36 Oct 15, 2025
0.4.35 Oct 14, 2025
0.4.35rc1 Oct 12, 2025
0.4.34 Oct 09, 2025
0.4.33 Oct 07, 2025
0.4.32 Oct 03, 2025
0.4.32rc0 Sep 25, 2025
0.4.31 Sep 25, 2025
0.4.30 Sep 22, 2025
0.4.29 Sep 18, 2025
0.4.28 Sep 15, 2025
0.4.27 Sep 08, 2025
0.4.26 Sep 08, 2025
0.4.25 Sep 04, 2025
0.4.24 Sep 04, 2025
0.4.23 Sep 02, 2025
0.4.22 Sep 02, 2025
0.4.21 Aug 29, 2025
0.4.20 Aug 28, 2025
0.4.19 Aug 27, 2025
0.4.18 Aug 26, 2025
0.4.17 Aug 26, 2025
0.4.16 Aug 22, 2025
0.4.15 Aug 20, 2025
0.4.14 Aug 12, 2025
0.4.13 Aug 06, 2025
0.4.12 Aug 06, 2025
0.4.11 Aug 05, 2025
0.4.10 Aug 01, 2025
0.4.9 Jul 31, 2025
0.4.8 Jul 18, 2025
0.4.7 Jul 17, 2025
0.4.6 Jul 15, 2025
0.4.5 Jul 10, 2025
0.4.4 Jun 27, 2025
0.4.3 Jun 27, 2025
0.4.2 Jun 25, 2025
0.4.1 Jun 10, 2025
0.4.0 Jun 10, 2025
0.3.45 Jun 05, 2025
0.3.44 Jun 02, 2025
0.3.43 May 29, 2025
0.3.42 May 03, 2025
0.3.41 May 02, 2025
0.3.40 May 02, 2025
0.3.39 Apr 30, 2025
0.3.38 Apr 28, 2025
0.3.37 Apr 25, 2025
0.3.37rc0 Apr 25, 2025
0.3.36 Apr 25, 2025
0.3.35 Apr 25, 2025
0.3.34 Apr 24, 2025
0.3.33 Apr 21, 2025
0.3.32 Apr 17, 2025
0.3.31 Apr 15, 2025
0.3.30 Apr 10, 2025
0.3.29 Apr 10, 2025
0.3.29rc0 Apr 10, 2025
0.3.28 Apr 09, 2025
0.3.28rc2 Apr 09, 2025
0.3.28rc1 Apr 08, 2025
0.3.27 Apr 08, 2025
0.3.27rc1 Apr 08, 2025
0.3.26 Apr 08, 2025
0.3.25 Apr 07, 2025
0.3.25rc2 Apr 04, 2025
0.3.25rc1 Apr 04, 2025
0.3.24 Apr 03, 2025
0.3.23 Apr 02, 2025
0.3.22 Apr 01, 2025
0.3.21 Apr 01, 2025
0.3.20 Mar 31, 2025
0.3.19 Mar 26, 2025
0.3.18 Mar 19, 2025
0.3.18rc1 Mar 19, 2025
0.3.17 Mar 19, 2025
0.3.16 Mar 19, 2025
0.3.15 Mar 14, 2025
0.3.14 Mar 14, 2025
0.3.14rc1 Mar 12, 2025
0.3.14rc0 Mar 12, 2025
0.3.13 Mar 07, 2025
0.3.12 Mar 06, 2025
0.3.11 Feb 25, 2025
0.3.11rc1 Feb 25, 2025
0.3.10 Feb 21, 2025
0.3.9 Feb 21, 2025
0.3.8 Feb 09, 2025
0.3.7 Feb 08, 2025
0.3.6 Feb 05, 2025
0.3.5 Feb 04, 2025
0.3.4 Jan 31, 2025
0.3.3 Jan 30, 2025
0.3.3rc0 Jan 30, 2025
0.3.2 Jan 27, 2025
0.3.1 Jan 22, 2025
0.3.1rc1 Jan 23, 2025
0.3.0 Jan 22, 2025
0.2.11 Jan 17, 2025
0.2.11rc15 Jan 20, 2025
0.2.11rc14 Jan 20, 2025
0.2.11rc13 Jan 19, 2025
0.2.11rc12 Jan 17, 2025
0.2.11rc11 Jan 17, 2025
0.2.11rc10 Jan 17, 2025
0.2.11rc9 Jan 17, 2025
0.2.11rc8 Jan 16, 2025
0.2.11rc7 Jan 15, 2025
0.2.11rc6 Jan 14, 2025
0.2.11rc5 Jan 11, 2025
0.2.11rc4 Jan 10, 2025
0.2.11rc3 Jan 09, 2025
0.2.11rc2 Jan 09, 2025
0.2.11rc1 Jan 08, 2025
0.2.10 Jan 03, 2025
0.2.9 Jan 03, 2025
0.2.8 Jan 03, 2025
0.2.7 Dec 31, 2024
0.2.6 Dec 24, 2024
0.2.4 Dec 19, 2024
0.2.3 Dec 12, 2024
0.2.2 Dec 10, 2024
0.2.1 Dec 06, 2024
0.2.0 Dec 05, 2024
0.1.148rc1 Nov 27, 2024
0.1.147 Nov 27, 2024
0.1.146 Nov 25, 2024
0.1.145 Nov 22, 2024
0.1.144 Nov 20, 2024
0.1.144rc3 Nov 20, 2024
0.1.144rc2 Nov 20, 2024
0.1.144rc1 Nov 19, 2024
0.1.143 Nov 13, 2024
0.1.142 Nov 08, 2024
0.1.141 Nov 08, 2024
0.1.140 Nov 06, 2024
0.1.139 Nov 01, 2024
0.1.139rc2 Nov 01, 2024
0.1.139rc1 Oct 31, 2024
0.1.138 Oct 30, 2024
0.1.138rc2 Oct 29, 2024
0.1.138rc1 Oct 29, 2024
0.1.137 Oct 23, 2024
0.1.136 Oct 17, 2024
0.1.135 Oct 14, 2024
0.1.134 Oct 10, 2024
0.1.133 Oct 10, 2024
0.1.132 Oct 07, 2024
0.1.131 Oct 03, 2024
0.1.130 Oct 02, 2024
0.1.129 Sep 27, 2024
0.1.128 Sep 24, 2024
0.1.127 Sep 24, 2024
0.1.126 Sep 24, 2024
0.1.125 Sep 20, 2024
0.1.124 Sep 20, 2024
0.1.123 Sep 19, 2024
0.1.122 Sep 18, 2024
0.1.121 Sep 16, 2024
0.1.120 Sep 13, 2024
0.1.119 Sep 12, 2024
0.1.118 Sep 11, 2024
0.1.117 Sep 09, 2024
0.1.116 Sep 06, 2024
0.1.116rc1 Sep 05, 2024
0.1.115 Sep 05, 2024
0.1.115rc1 Sep 05, 2024
0.1.115rc0 Sep 05, 2024
0.1.114 Sep 05, 2024
0.1.113 Sep 04, 2024
0.1.112 Sep 04, 2024
0.1.111 Sep 04, 2024
0.1.110 Sep 03, 2024
0.1.109 Sep 03, 2024
0.1.108 Aug 31, 2024
0.1.108rc0 Aug 29, 2024
0.1.107 Aug 29, 2024
0.1.106 Aug 27, 2024
0.1.105 Aug 27, 2024
0.1.104 Aug 23, 2024
0.1.103 Aug 22, 2024
0.1.102 Aug 22, 2024
0.1.101 Aug 21, 2024
0.1.100 Aug 20, 2024
0.1.99 Aug 11, 2024
0.1.99rc1 Aug 06, 2024
0.1.98 Aug 06, 2024
0.1.97 Aug 06, 2024
0.1.96 Aug 02, 2024
0.1.95 Jul 31, 2024
0.1.94 Jul 30, 2024
0.1.93 Jul 20, 2024
0.1.92 Jul 18, 2024
0.1.91 Jul 18, 2024
0.1.90 Jul 17, 2024
0.1.89 Jul 17, 2024
0.1.88 Jul 17, 2024
0.1.87 Jul 16, 2024
0.1.86 Jul 16, 2024
0.1.85 Jul 10, 2024
0.1.84 Jul 08, 2024
0.1.83 Jul 02, 2024
0.1.82 Jun 24, 2024
0.1.81 Jun 19, 2024
0.1.80 Jun 18, 2024
0.1.79 Jun 18, 2024
0.1.78 Jun 17, 2024
0.1.77 Jun 12, 2024
0.1.76 Jun 11, 2024
0.1.75 Jun 06, 2024
0.1.74 Jun 06, 2024
0.1.73 Jun 05, 2024
0.1.72 Jun 05, 2024
0.1.71 Jun 04, 2024
0.1.70 Jun 04, 2024
0.1.69 Jun 04, 2024
0.1.68 Jun 03, 2024
0.1.67 May 31, 2024
0.1.66 May 31, 2024
0.1.65 May 30, 2024
0.1.64 May 30, 2024
0.1.63 May 23, 2024
0.1.62 May 23, 2024
0.1.61 May 22, 2024
0.1.60 May 20, 2024
0.1.59 May 16, 2024
0.1.58 May 15, 2024
0.1.57 May 11, 2024
0.1.56 May 08, 2024
0.1.55 May 07, 2024
0.1.54 May 04, 2024
0.1.53 May 02, 2024
0.1.52 Apr 29, 2024
0.1.51 Apr 25, 2024
0.1.50 Apr 23, 2024
0.1.49 Apr 19, 2024
0.1.48 Apr 15, 2024
0.1.47 Apr 13, 2024
0.1.46 Apr 12, 2024
0.1.46rc1 Apr 12, 2024
0.1.45 Apr 10, 2024
0.1.45rc1 Apr 11, 2024
0.1.44 Apr 10, 2024
0.1.43 Apr 10, 2024
0.1.42 Apr 09, 2024
0.1.41 Apr 09, 2024
0.1.40 Apr 04, 2024
0.1.39 Apr 03, 2024
0.1.38 Mar 29, 2024
0.1.37 Mar 28, 2024
0.1.36 Mar 28, 2024
0.1.35 Mar 27, 2024
0.1.34 Mar 27, 2024
0.1.33 Mar 27, 2024
0.1.32rc8 Mar 26, 2024
0.1.32rc7 Mar 26, 2024
0.1.32rc6 Mar 26, 2024
0.1.32rc5 Mar 26, 2024
0.1.32rc4 Mar 26, 2024
0.1.32rc3 Mar 26, 2024
0.1.32rc2 Mar 26, 2024
0.1.32rc1 Mar 25, 2024
0.1.31 Mar 19, 2024
0.1.30 Mar 19, 2024
0.1.29 Mar 19, 2024
0.1.28 Mar 18, 2024
0.1.27 Mar 16, 2024
0.1.26 Mar 14, 2024
0.1.25 Mar 14, 2024
0.1.24 Mar 12, 2024
0.1.23 Mar 08, 2024
0.1.22 Mar 06, 2024
0.1.21 Mar 05, 2024
0.1.20 Mar 05, 2024
0.1.19 Mar 05, 2024
0.1.18 Mar 05, 2024
0.1.17 Mar 04, 2024
0.1.16 Mar 04, 2024
0.1.15 Mar 04, 2024
0.1.14 Mar 03, 2024
0.1.13 Mar 02, 2024
0.1.12 Mar 01, 2024
0.1.11 Mar 01, 2024
0.1.10 Feb 27, 2024
0.1.9 Feb 26, 2024
0.1.8 Feb 25, 2024
0.1.7 Feb 24, 2024
0.1.6 Feb 23, 2024
0.1.5 Feb 21, 2024
0.1.4 Feb 21, 2024
0.1.3 Feb 20, 2024
0.1.2 Feb 16, 2024
0.1.1 Feb 15, 2024
0.1.0 Feb 15, 2024
0.0.92 Feb 14, 2024
0.0.91 Feb 13, 2024
0.0.90 Feb 10, 2024
0.0.89 Feb 09, 2024
0.0.88 Feb 09, 2024
0.0.87 Feb 07, 2024
0.0.86 Feb 02, 2024
0.0.86rc1 Jan 30, 2024
0.0.85 Jan 30, 2024
0.0.84 Jan 29, 2024
0.0.84rc5 Jan 26, 2024
0.0.84rc4 Jan 25, 2024
0.0.84rc3 Jan 25, 2024
0.0.84rc2 Jan 23, 2024
0.0.84rc1 Jan 22, 2024
0.0.83 Jan 18, 2024
0.0.82 Jan 18, 2024
0.0.81 Jan 16, 2024
0.0.80 Jan 11, 2024
0.0.79 Jan 09, 2024
0.0.78 Jan 08, 2024
0.0.77 Jan 03, 2024
0.0.76 Jan 03, 2024
0.0.75 Dec 22, 2023
0.0.74 Dec 21, 2023
0.0.73 Dec 21, 2023
0.0.72 Dec 18, 2023
0.0.71 Dec 15, 2023
0.0.70 Dec 14, 2023
0.0.69 Dec 03, 2023
0.0.68 Dec 01, 2023
0.0.67 Nov 28, 2023
0.0.66 Nov 20, 2023
0.0.65 Nov 17, 2023
0.0.64 Nov 14, 2023
0.0.63 Nov 09, 2023
0.0.62 Nov 08, 2023
0.0.61 Nov 08, 2023
0.0.60 Nov 07, 2023
0.0.59 Nov 06, 2023
0.0.58 Nov 06, 2023
0.0.57 Nov 03, 2023
0.0.56 Nov 01, 2023
0.0.55 Nov 01, 2023
0.0.54 Oct 30, 2023
0.0.53 Oct 27, 2023
0.0.52 Oct 25, 2023
0.0.51 Oct 25, 2023
0.0.50 Oct 24, 2023
0.0.49 Oct 20, 2023
0.0.48 Oct 20, 2023
0.0.47 Oct 19, 2023
0.0.46 Oct 18, 2023
0.0.45 Oct 18, 2023
0.0.44 Oct 16, 2023
0.0.43 Oct 06, 2023
0.0.42 Oct 05, 2023
0.0.41 Sep 27, 2023
0.0.40 Sep 21, 2023
0.0.39 Sep 21, 2023
0.0.38 Sep 18, 2023
0.0.37 Sep 14, 2023
0.0.36 Sep 12, 2023
0.0.35 Sep 08, 2023
0.0.34 Sep 08, 2023
0.0.33 Sep 02, 2023
0.0.32 Sep 01, 2023
0.0.31 Aug 31, 2023
0.0.30 Aug 30, 2023
0.0.29 Aug 30, 2023
0.0.28 Aug 30, 2023
0.0.27 Aug 27, 2023
0.0.26 Aug 22, 2023
0.0.25 Aug 18, 2023
0.0.24 Aug 17, 2023
0.0.23 Aug 16, 2023
0.0.22 Aug 11, 2023
0.0.21 Aug 10, 2023
0.0.20 Aug 08, 2023
0.0.19 Aug 05, 2023
0.0.18 Aug 03, 2023
0.0.16 Aug 01, 2023
0.0.15 Jul 27, 2023
0.0.14 Jul 21, 2023
0.0.13 Jul 21, 2023
0.0.12 Jul 21, 2023
0.0.11 Jul 19, 2023
0.0.10 Jul 18, 2023
0.0.9 Jul 17, 2023
0.0.8 Jul 17, 2023
0.0.7 Jul 15, 2023
0.0.6 Jul 14, 2023
0.0.5 Jul 12, 2023
0.0.4 Jul 12, 2023
0.0.3 Jul 11, 2023
0.0.2 Jul 08, 2023
0.0.1 Jun 26, 2023
0.0.0rc0 Jun 26, 2023

Wheel compatibility matrix

Platform: any
Python: 3

Files in release

Extras:
Dependencies:
httpx (<1,>=0.23.0)
orjson (>=3.9.14)
packaging (>=23.2)
pydantic (<3,>=1)
requests-toolbelt (>=1.0.0)
requests (>=2.0.0)
zstandard (>=0.23.0)