openinference-instrumentation-openai 0.1.43


pip install openinference-instrumentation-openai


Released: Mar 24, 2026

Meta
Author: OpenInference Authors
Requires Python: <3.15,>=3.9

Classifiers

Development Status
  • 5 - Production/Stable

Intended Audience
  • Developers

License
  • OSI Approved :: Apache Software License

Programming Language
  • Python
  • Python :: 3
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12
  • Python :: 3.13
  • Python :: 3.14

OpenInference OpenAI Instrumentation


Python auto-instrumentation library for the OpenAI Python SDK.

The traces emitted by this instrumentation are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for viewing, such as Arize Phoenix.
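Each instrumented OpenAI call is recorded as a span whose attributes follow the OpenInference semantic conventions. As a rough illustration (attribute names from the conventions; values here are made up, and real spans carry many more attributes), a chat-completion span might include:

```json
{
  "openinference.span.kind": "LLM",
  "llm.model_name": "gpt-3.5-turbo",
  "input.value": "Write a haiku.",
  "output.value": "Dew on green leaves...",
  "llm.token_count.prompt": 12,
  "llm.token_count.completion": 5
}
```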

Installation

pip install openinference-instrumentation-openai

Quickstart

In this example we will instrument a small program that uses OpenAI and observe the traces via arize-phoenix.

Install packages.

pip install openinference-instrumentation-openai "openai>=1.26" arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp

Start the phoenix server so that it is ready to collect traces. The Phoenix server runs entirely on your machine and does not send data over the internet.

python -m phoenix.server.main serve

In a Python file, set up the OpenAIInstrumentor and configure the tracer provider to send traces to Phoenix.

import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
# Optionally, you can also print the spans to the console.
tracer_provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)


if __name__ == "__main__":
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")

Since we are using OpenAI, we must set the OPENAI_API_KEY environment variable to authenticate with the OpenAI API.

export OPENAI_API_KEY=your-api-key

Now run the Python file and observe the traces in Phoenix.

python your_file.py

FAQ

Q: How do I get token counts when streaming?

A: To get token counts when streaming, install openai>=1.26 and set stream_options={"include_usage": True} when calling create, as in the example above.
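With include_usage enabled, the final streamed chunk carries a usage object and an empty choices list, which is why the loop above guards on chunk.choices. The pattern can be sketched without calling the API at all; the Usage, Delta, Choice, and Chunk classes below are stdlib stand-ins that mimic the shape of the SDK's chunk objects, not the real types:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Stand-in classes mimicking the shape of streamed chat-completion chunks.
# With stream_options={"include_usage": True}, the final chunk has an empty
# choices list and a populated usage object.
@dataclass
class Usage:
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int

@dataclass
class Delta:
    content: Optional[str] = None

@dataclass
class Choice:
    delta: Delta

@dataclass
class Chunk:
    choices: List[Choice] = field(default_factory=list)
    usage: Optional[Usage] = None

def collect(stream):
    """Accumulate text deltas and pick up usage from the final chunk."""
    parts, usage = [], None
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
        if chunk.usage is not None:
            usage = chunk.usage
    return "".join(parts), usage

# Simulated stream: two content chunks, then the usage-only final chunk.
stream = [
    Chunk(choices=[Choice(Delta("Dew on "))]),
    Chunk(choices=[Choice(Delta("green leaves"))]),
    Chunk(usage=Usage(prompt_tokens=12, completion_tokens=5, total_tokens=17)),
]
text, usage = collect(stream)
print(text, usage.total_tokens)
```

The same guard-then-accumulate loop works unchanged on real SDK chunks, since only the attribute shapes matter here.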

More Info

Dependencies:
openinference-instrumentation (>=0.1.27)
openinference-semantic-conventions (>=0.1.25)
opentelemetry-api
opentelemetry-instrumentation
opentelemetry-semantic-conventions
typing-extensions
wrapt