lmnr 0.7.47

pip install lmnr

Released: Apr 05, 2026

Author: lmnr.ai
Requires Python: >=3.10, <4
License: OSI Approved :: Apache Software License
Supported Python versions: 3.10, 3.11, 3.12, 3.13, 3.14

Laminar Python

Python SDK for Laminar.

Laminar is an open-source platform for engineering LLM products. Trace, evaluate, annotate, and analyze LLM data. Bring LLM applications to production with confidence.

Check out our open-source repo and don't forget to star it ⭐


Quickstart

First, install the package, specifying the instrumentations you want to use.

For example, to install the package with OpenAI and Anthropic instrumentations:

pip install 'lmnr[anthropic,openai]'

To install all possible instrumentations, use the following command:

pip install 'lmnr[all]'

Initialize Laminar in your code:

from lmnr import Laminar

Laminar.initialize(project_api_key="<PROJECT_API_KEY>")

You can also skip passing the project_api_key, in which case it will be looked up in the environment (or a local .env file) under the key LMNR_PROJECT_API_KEY.

Note that you only need to initialize Laminar once in your application. Do this as early as possible, e.g. at server startup.

Setup for self-hosting

If you self-host a Laminar instance, the default connection settings to it are http://localhost:8000 for HTTP and http://localhost:8001 for gRPC. Initialize the SDK accordingly:

from lmnr import Laminar

Laminar.initialize(
    project_api_key="<PROJECT_API_KEY>",
    base_url="http://localhost",
    http_port=8000,
    grpc_port=8001,
)

Instrumentation

Manual instrumentation

To instrument any function in your code, we provide a simple @observe() decorator. This can be useful if you want to trace a request handler or a function which combines multiple LLM calls.

import os
from openai import OpenAI
from lmnr import Laminar, observe

Laminar.initialize(project_api_key=os.environ["LMNR_PROJECT_API_KEY"])

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def poem_writer(topic: str):
    prompt = f"write a poem about {topic}"
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ]

    # OpenAI calls are still automatically instrumented
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
    )
    poem = response.choices[0].message.content

    return poem

@observe()
def generate_poems():
    poem1 = poem_writer(topic="laminar flow")
    poem2 = poem_writer(topic="turbulence")
    poems = f"{poem1}\n\n---\n\n{poem2}"
    return poems

You can also use Laminar.start_as_current_span if you want to record a block of your code using a with statement.

def handle_user_request(topic: str):
    with Laminar.start_as_current_span(name="poem_writer", input=topic):
        poem = poem_writer(topic=topic)
        # Use set_span_output to record the output of the span
        Laminar.set_span_output(poem)

Automatic instrumentation

Laminar allows you to automatically instrument the majority of popular LLM, vector DB, database, HTTP, and other libraries.

If you want to automatically instrument a default set of libraries, simply do NOT pass the instruments argument to .initialize(). See the full list of available instrumentations in the Instruments enum.

If you want to automatically instrument only specific LLM, Vector DB, or other calls with OpenTelemetry-compatible instrumentation, then pass the appropriate instruments to .initialize(). For example, if you want to only instrument OpenAI and Anthropic, then do the following:

import os

from lmnr import Laminar, Instruments

Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments={Instruments.OPENAI, Instruments.ANTHROPIC},
)

If you want to fully disable any kind of autoinstrumentation, pass an empty set as instruments=set() to .initialize().
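For example, a minimal sketch of a fully disabled autoinstrumentation setup (the API key is assumed to come from the environment):

```python
import os

from lmnr import Laminar

# Passing an empty set disables all automatic instrumentation;
# only manual @observe() spans and explicit spans are recorded.
Laminar.initialize(
    project_api_key=os.environ["LMNR_PROJECT_API_KEY"],
    instruments=set(),
)
```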

Autoinstrumentations are provided by Traceloop's OpenLLMetry.

Evaluations

Quickstart

Install the package:

pip install lmnr

Create a file named my_first_eval.py with the following code:

from lmnr import evaluate

def write_poem(data):
    return f"This is a good poem about {data['topic']}"

def contains_poem(output, target):
    return 1 if output in target['poem'] else 0

# Evaluation data
data = [
    {"data": {"topic": "flowers"}, "target": {"poem": "This is a good poem about flowers"}},
    {"data": {"topic": "cars"}, "target": {"poem": "I like cars"}},
]

evaluate(
    data=data,
    executor=write_poem,
    evaluators={
        "containsPoem": contains_poem
    },
    group_id="my_first_feature"
)

Run the following commands:

export LMNR_PROJECT_API_KEY=<YOUR_PROJECT_API_KEY>  # get from Laminar project settings
lmnr eval my_first_eval.py  # run in the virtual environment where lmnr is installed

Visit the URL printed in the console to see the results.

Overview

Bring rigor to the development of your LLM applications with evaluations.

You can run evaluations locally by providing an executor (part of the logic used in your application) and evaluators (numeric scoring functions) to the evaluate function.

evaluate takes in the following parameters:

  • data – an array of EvaluationDatapoint objects, where each EvaluationDatapoint has two keys: target and data, each containing a key-value object. Alternatively, you can pass in dictionaries, and we will instantiate EvaluationDatapoints with pydantic if possible.
  • executor – the logic you want to evaluate. This function must take data as the first argument and can produce any output. It can be either a sync or an async function.
  • evaluators – a dictionary mapping evaluator names to evaluators: functions that take the output of the executor as the first argument and target as the second, and produce a numeric score. Each function can return either a single number or a dict[str, int|float] of named scores. Each evaluator can be either a sync or an async function.
  • name – optional name for the evaluation. Automatically generated if not provided.
  • group_id – optional group name for the evaluation. Evaluations within the same group can be compared visually side-by-side.
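As a sketch of the multi-score case, an evaluator may return a dict of named scores instead of a single number (the score names here are illustrative, not required by the SDK):

```python
def poem_scores(output: str, target: dict) -> dict[str, float]:
    # Two named scores for a single datapoint: a binary exact-match
    # score and a rough length ratio against the reference poem.
    expected = target["poem"]
    return {
        "exact_match": 1.0 if output == expected else 0.0,
        "length_ratio": len(output) / max(len(expected), 1),
    }
```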

* If you already have the outputs of the executors you want to evaluate, you can specify the executor as an identity function that takes in data and returns only the needed value(s) from it.
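A sketch of that pattern, assuming each datapoint already carries a precomputed output under an "output" key (the key name is an assumption for illustration):

```python
def identity_executor(data: dict) -> str:
    # No model call here: simply forward the precomputed output for scoring
    return data["output"]

data = [
    {
        "data": {"topic": "flowers", "output": "This is a good poem about flowers"},
        "target": {"poem": "This is a good poem about flowers"},
    },
]
```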

Read the docs to learn more about evaluations.

Client for HTTP operations

Various interactions with the Laminar API are available in LaminarClient and its asynchronous counterpart AsyncLaminarClient.

Agent

To run the Laminar agent, invoke client.agent.run:

from lmnr import LaminarClient

client = LaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

response = client.agent.run(
    prompt="What is the weather in London today?"
)

print(response.result.content)

Streaming

Agent run supports streaming as well.

from lmnr import LaminarClient

client = LaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

for chunk in client.agent.run(
    prompt="What is the weather in London today?",
    stream=True
):
    if chunk.chunk_type == 'step':
        print(chunk.summary)
    elif chunk.chunk_type == 'finalOutput':
        print(chunk.content.result.content)

Async mode

from lmnr import AsyncLaminarClient

client = AsyncLaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

response = await client.agent.run(
    prompt="What is the weather in London today?"
)

print(response.result.content)

Async mode with streaming

from lmnr import AsyncLaminarClient

client = AsyncLaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")

# Note that you need to await the call even though we iterate with `async for` below
response = await client.agent.run(
    prompt="What is the weather in London today?",
    stream=True
)
async for chunk in response:
    if chunk.chunk_type == 'step':
        print(chunk.summary)
    elif chunk.chunk_type == 'finalOutput':
        print(chunk.content.result.content)
