An integration package connecting Together AI and LangChain
Requires Python: >=3.9,<4.0
License: MIT (OSI Approved)
# langchain-together
This package contains the LangChain integration with Together AI.
## Installation

```bash
pip install langchain-together
```
## Chat Models

`ChatTogether` supports the various models available via the Together API:

```python
import os

from langchain_together import ChatTogether

os.environ["TOGETHER_API_KEY"] = "my-key"

llm = ChatTogether(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # api_key="...",  # if not set as an environment variable
)
```
## Structured Outputs, Function Calling, and JSON Mode

`ChatTogether` supports structured outputs using Pydantic models, dictionaries, or JSON schemas, so you can get reliable, structured responses from Together AI models. See the docs for more information about function calling and structured outputs.
```python
from typing import Optional

from langchain_together import ChatTogether
from pydantic import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")
    rating: Optional[int] = Field(default=None, description="How funny the joke is from 1-10")


# Use a model that supports structured outputs
llm = ChatTogether(model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo")

# Pass the Pydantic class (not a raw JSON schema dict) so the result is
# returned as a Joke instance with typed attributes
structured_llm = llm.with_structured_output(Joke, method="json_schema")

result = structured_llm.invoke("Tell me a joke about programming")
print(f"Setup: {result.setup}")
print(f"Punchline: {result.punchline}")
print(f"Rating: {result.rating}")
```
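For reference, the JSON schema that Pydantic derives from `Joke` (a plain dict can also be passed to `with_structured_output` instead of the class) can be inspected locally; a quick sketch, assuming Pydantic v2:

```python
from typing import Optional

from pydantic import BaseModel, Field


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")
    rating: Optional[int] = Field(default=None, description="How funny the joke is from 1-10")


# model_json_schema() produces a standard JSON Schema dict
schema = Joke.model_json_schema()
print(sorted(schema["properties"]))  # ['punchline', 'rating', 'setup']
print(schema["required"])            # ['setup', 'punchline'] — rating has a default
```

Note that `rating` is absent from `required` because it has a default value.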
### Function Calling

```python
# Use a model that supports function calling
llm = ChatTogether(model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo")

structured_llm = llm.with_structured_output(Joke, method="function_calling")

result = structured_llm.invoke("Tell me a joke about programming")
print(f"Setup: {result.setup}")
print(f"Punchline: {result.punchline}")
print(f"Rating: {result.rating}")
```
### JSON Mode

For models that support JSON mode, you can also use this method:

```python
from langchain_together import ChatTogether
from pydantic import BaseModel, Field


class Response(BaseModel):
    message: str = Field(description="The main message")
    category: str = Field(description="Category of the response")


# Use a model that supports JSON mode
llm = ChatTogether(model="meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo")

structured_llm = llm.with_structured_output(Response.model_json_schema(), method="json_mode")

# result is a dict here, since a JSON schema (dict) was passed rather than the class
result = structured_llm.invoke(
    "Respond with a JSON containing a message about cats and categorize it. "
    "Use the exact keys 'message' and 'category'."
)
```
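Because a plain JSON schema (dict) was passed, the chain returns a dict rather than a `Response` instance; if you want typed attribute access, you can validate the dict back into the model. A minimal local sketch with a stand-in dict (no API call):

```python
from pydantic import BaseModel, Field


class Response(BaseModel):
    message: str = Field(description="The main message")
    category: str = Field(description="Category of the response")


# Stand-in for the dict a json_mode chain might return
raw = {"message": "Cats sleep most of the day.", "category": "animals"}

validated = Response.model_validate(raw)
print(validated.category)  # animals
```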
## Embeddings

```python
from langchain_together import TogetherEmbeddings

embeddings = TogetherEmbeddings(model="BAAI/bge-base-en-v1.5")
```
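Embedding vectors are typically compared with cosine similarity, e.g. for semantic search. A minimal sketch with stand-in vectors (real vectors would come from `embeddings.embed_query(...)`, which calls the Together API):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Stand-in vectors; in practice: v1 = embeddings.embed_query("some text")
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.25]
print(cosine_similarity(v1, v2))  # close to 1.0: the vectors point in similar directions
```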
Dependencies:
- aiohttp (<4.0.0,>=3.9.1)
- langchain-core (<0.4.0,>=0.3.29)
- langchain-openai (<0.4,>=0.3)
- requests (<3,>=2)