clarifai 12.3.5


pip install clarifai

  Latest version

Released: Mar 31, 2026

Meta
Author: Clarifai
Requires Python: >=3.9

Classifiers

Topic
  • Scientific/Engineering :: Artificial Intelligence

Programming Language
  • Python :: 3
  • Python :: 3 :: Only
  • Python :: 3.9
  • Python :: 3.10
  • Python :: 3.11
  • Python :: 3.12
  • Python :: Implementation :: CPython

License
  • OSI Approved :: Apache Software License

Operating System
  • OS Independent

Clarifai

Clarifai Python SDK


This is the official Python client for interacting with the Clarifai API. The Clarifai Python SDK offers a comprehensive set of tools for integrating Clarifai's AI platform into your applications, covering computer vision capabilities such as classification, detection, and segmentation, as well as natural language capabilities such as classification, summarization, generation, and Q&A. With just a few lines of code, you can leverage cutting-edge artificial intelligence to unlock valuable insights from visual and textual content.

Website | Schedule Demo | Signup for a Free Account | API Docs | Clarifai Community | Python SDK Docs | Examples | Colab Notebooks | Discord

Give the repo a star ⭐


:rocket: Installation

Install from PyPI:

pip install -U clarifai

Install from Source:

git clone https://github.com/Clarifai/clarifai-python.git
cd clarifai-python
python3 -m venv .venv
source .venv/bin/activate
pip install -e .

Linting

For developers, use the pre-commit hook defined in .pre-commit-config.yaml to automate linting.

pip install -r requirements-dev.txt
pre-commit install

Now, every time you run git commit, your code will be automatically linted, and the commit will be blocked if linting fails.

You can also manually trigger linting using:

pre-commit run --all-files

:memo: Getting started

Clarifai uses Personal Access Tokens (PATs) to validate requests. You can create and manage PATs under your Clarifai account security settings.

  • πŸ”— Create PAT: Log into Portal β†’ Profile Icon β†’ Security Settings β†’ Create Personal Access Token β†’ Set the scopes β†’ Confirm

  • πŸ”— Get User ID: Log into Portal β†’ Profile Icon β†’ Account β†’ Profile β†’ User-ID

Export your PAT as an environment variable. Then, import and initialize the API Client.

Set PAT as environment variable through terminal:

export CLARIFAI_PAT={your personal access token}

# Note: CLARIFAI_PAT must be set as an env variable.
from clarifai.client.user import User
client = User(user_id="user_id")

# Get all apps
apps_generator = client.list_apps()
apps = list(apps_generator)

Alternatively, the PAT can be passed as a constructor argument:

from clarifai.client.user import User
client = User(user_id="user_id", pat="your personal access token")
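Since most examples below assume the PAT is available in the environment, a tiny helper (not part of the SDK, shown only as a sketch) can fail fast with a clear message when the variable is missing:

```python
import os

def get_clarifai_pat():
    """Return the CLARIFAI_PAT environment variable, failing early if unset."""
    pat = os.environ.get("CLARIFAI_PAT")
    if not pat:
        raise RuntimeError("CLARIFAI_PAT is not set; export it before creating a client.")
    return pat
```

Calling User(user_id="user_id", pat=get_clarifai_pat()) then surfaces a missing token immediately instead of as a later API error.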

:rocket: Compute Orchestration

Clarifai’s Compute Orchestration offers a streamlined solution for managing the infrastructure required for training, deploying, and scaling machine learning models and workflows.

This flexible system supports any compute instance β€” across various hardware providers and deployment methods β€” and provides automatic scaling to match workload demands. More Details

Cluster Operations

from clarifai.client.user import User
client = User(user_id="user_id", base_url="https://api.clarifai.com")

# Create a new compute cluster
compute_cluster = client.create_compute_cluster(compute_cluster_id="demo-id", config_filepath="compute_cluster_config.yaml")

# List Clusters
all_compute_clusters = list(client.list_compute_clusters())
print(all_compute_clusters)
Example Cluster Config

Nodepool Operations

from clarifai.client.compute_cluster import ComputeCluster

# Initialize the ComputeCluster instance
compute_cluster = ComputeCluster(user_id="user_id", compute_cluster_id="demo-id")

# Create a new nodepool
nodepool = compute_cluster.create_nodepool(nodepool_id="demo-nodepool-id", config_filepath="nodepool_config.yaml")

# Get a nodepool
nodepool = compute_cluster.nodepool(nodepool_id="demo-nodepool-id")
print(nodepool)

# List nodepools
all_nodepools = list(compute_cluster.list_nodepools())
print(all_nodepools)
Example Nodepool config

Deployment Operations

from clarifai.client.nodepool import Nodepool

# Initialize the Nodepool instance
nodepool = Nodepool(user_id="user_id", nodepool_id="demo-nodepool-id")

# Create a new deployment
deployment = nodepool.create_deployment(deployment_id="demo-deployment-id", config_filepath="deployment_config.yaml")

# Get a deployment
deployment = nodepool.deployment(deployment_id="demo-deployment-id")
print(deployment)

# List deployments
all_deployments = list(nodepool.list_deployments())
print(all_deployments)
Example Deployment config

Compute Orchestration CLI Operations

Refer here: https://github.com/Clarifai/clarifai-python/tree/master/clarifai/cli

:floppy_disk: Interacting with Datasets

Clarifai datasets help manage the data used for model training and evaluation. The SDK provides functionality for creating datasets, uploading data, retrying failed uploads from logs, and exporting datasets as .zip files.

# Note: CLARIFAI_PAT must be set as env variable.

# Create app and dataset
app = client.create_app(app_id="demo_app", base_workflow="Universal")
dataset = app.create_dataset(dataset_id="demo_dataset")

# execute data upload to Clarifai app dataset
from clarifai.datasets.upload.loaders.coco_detection import COCODetectionDataLoader
coco_dataloader = COCODetectionDataLoader("images_dir", "coco_annotation_filepath")
dataset.upload_dataset(dataloader=coco_dataloader, get_upload_status=True)


# Try the upload and record failed outputs in a log file.
from clarifai.datasets.upload.utils import load_module_dataloader
cifar_dataloader = load_module_dataloader('./image_classification/cifar10')
dataset.upload_dataset(dataloader=cifar_dataloader,
                       get_upload_status=True,
                       log_warnings=True)

# Retry upload from logs for `upload_dataset`.
# Set retry_duplicates to True to re-ingest inputs that failed due to duplication issues; it defaults to False.
dataset.retry_upload_from_logs(dataloader=cifar_dataloader, log_file_path='log_file.log',
                               retry_duplicates=True,
                               log_warnings=True)

# Upload text from a CSV file
dataset.upload_from_csv(csv_path='csv_path', input_type='text', csv_type='raw', labels=True)

# Upload data from a folder
dataset.upload_from_folder(folder_path='folder_path', input_type='text', labels=True)

# Export Dataset
dataset.export(save_path='output.zip')
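The export above writes a standard .zip archive, so its contents can be inspected with Python's standard library; the helper below is a sketch, not an SDK function:

```python
import zipfile

def list_export_contents(zip_path):
    """List the file names inside an exported dataset archive."""
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()
```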

:floppy_disk: Interacting with Inputs

You can use inputs() for adding and interacting with input data. Inputs can be uploaded directly from a URL or a file. You can also view input annotations and concepts.

Input Upload

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.user import User
app = User(user_id="user_id").app(app_id="app_id")
input_obj = app.inputs()

# Input upload from URL
input_obj.upload_from_url(input_id='demo', image_url='https://samples.clarifai.com/metro-north.jpg')

# Input upload from a local file
input_obj.upload_from_file(input_id='demo', video_file='demo.mp4')

# Text upload
input_obj.upload_text(input_id='demo', raw_text='This is a test')

Input Listing

# List inputs
input_generator = input_obj.list_inputs(page_no=1, per_page=10, input_type='image')
inputs_list = list(input_generator)

# List annotations
annotation_generator = input_obj.list_annotations(batch_input=inputs_list)
annotations_list = list(annotation_generator)

# List concepts
all_concepts = list(app.list_concepts())

Input Download

# List inputs
input_generator = input_obj.list_inputs(page_no=1, per_page=1, input_type='image')
inputs_list = list(input_generator)

# Download inputs
input_bytes = input_obj.download_inputs(inputs_list)
with open('demo.jpg', 'wb') as f:
  f.write(input_bytes[0])

:brain: Interacting with Models

The Model Class allows you to perform predictions using Clarifai models. You can specify which model to use by providing the model URL or ID. This gives you flexibility in choosing models. The App Class also allows listing of all available Clarifai models for discovery. For greater control over model predictions, you can pass in an output_config to modify the model output as demonstrated below.

Model Predict

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.model import Model

"""
Get Model information on details of model(description, usecases..etc) and info on training or
# other inference parameters(eg: temperature, top_k, max_tokens..etc for LLMs)
"""
gpt_4_model = Model("https://clarifai.com/openai/chat-completion/models/GPT-4")
print(gpt_4_model)


# Model Predict
model_prediction = Model("https://clarifai.com/anthropic/completion/models/claude-v2").predict_by_bytes(b"Write a tweet on future of AI")

# Customizing Model Inference Output
model_prediction = gpt_4_model.predict_by_bytes(b"Write a tweet on future of AI", inference_params=dict(temperature=str(0.7), max_tokens=30))
# Return predictions having prediction confidence > 0.98
model_prediction = model.predict_by_filepath(filepath="local_filepath", output_config={"min_value": 0.98}) # Supports image, text, audio, video

# Supports prediction by url
model_prediction = model.predict_by_url(url="url") # Supports image, text, audio, video

# Return predictions for specified interval of video
video_input_proto = [input_obj.get_input_from_url("Input_id", video_url=BEER_VIDEO_URL)]
model_prediction = model.predict(video_input_proto, output_config={"sample_ms": 2000})

Model Training

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.app import App
from clarifai.client.model import Model

"""
Create model with trainable model_type
"""
app = App(user_id="user_id", app_id="app_id")
model = app.create_model(model_id="model_id", model_type_id="visual-classifier")
               (or)
model = Model('url')

"""
List training templates for the model_type
"""
templates = model.list_training_templates()
print(templates)

"""
Get parameters for the model.
"""
params = model.get_params(template='classification_basemodel_v1', save_to='model_params.yaml')

"""
Update the model params yaml and pass it to model.train()
"""
model_version_id = model.train('model_params.yaml')

"""
Training status and saving logs
"""
status = model.training_status(version_id=model_version_id, training_logs=True)
print(status)

Export your trained model

The model export feature packages your trained model into a model.tar file, which can then be deployed within a Triton Inference Server deployment.

from clarifai.client.model import Model

model = Model('url')
model.export('output/folder/')

Evaluate your trained model

Once your model is trained and ready, you can evaluate it with the following code:

from clarifai.client.model import Model

model = Model('url')
model.evaluate(dataset_id='your-dataset-id')

Compare the evaluation results of your models.

from clarifai.client.model import Model
from clarifai.client.dataset import Dataset
from clarifai.utils.evaluation import EvalResultCompare

models = ['model url1', 'model url2'] # or [Model(url1), Model(url2)]
dataset = 'dataset url' # or Dataset(dataset_url)

compare = EvalResultCompare(
  models=models,
  datasets=dataset,
  attempt_evaluate=True  # run evaluation if the model has not yet been evaluated on the dataset
  )
compare.all('output/folder/')

Models Listing

# Note: CLARIFAI_PAT must be set as env variable.

# List all model versions
all_model_versions = list(model.list_versions())

# Go to specific model version
model_v1 = client.app("app_id").model(model_id="model_id", model_version_id="model_version_id")

# List all models in an app
all_models = list(app.list_models())

# List all models in community filtered by model_type, description
all_llm_community_models = App().list_models(filter_by={"query": "LLM",
                                                        "model_type_id": "text-to-text"}, only_in_app=False)
all_llm_community_models = list(all_llm_community_models)

:fire: Interacting with Workflows

Workflows offer a versatile framework for constructing inference pipelines, simplifying the integration of diverse models. You can use the Workflow class to create and manage workflows using YAML configuration. To get started from, or make quick adjustments to, an existing Clarifai community workflow, the SDK also provides an export feature.

Workflow Predict

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.workflow import Workflow

# Workflow Predict
workflow = Workflow("workflow_url") # Example: https://clarifai.com/clarifai/main/workflows/Face-Sentiment
workflow_prediction = workflow.predict_by_url(url="url") # Supports image, text, audio, video

# Customizing Workflow Inference Output
workflow = Workflow(user_id="user_id", app_id="app_id", workflow_id="workflow_id",
                  output_config={"min_value": 0.98}) # Return predictions having prediction confidence > 0.98
workflow_prediction = workflow.predict_by_filepath(filepath="local_filepath") # Supports image, text, audio, video

Workflows Listing

# Note: CLARIFAI_PAT must be set as env variable.

# List all workflow versions
all_workflow_versions = list(workflow.list_versions())

# Go to specific workflow version
workflow_v1 = Workflow(workflow_id="workflow_id", workflow_version=dict(id="workflow_version_id"), app_id="app_id", user_id="user_id")

# List all workflows in an app
all_workflows = list(app.list_workflows())

# List all workflow in community filtered by description
all_face_community_workflows = App().list_workflows(filter_by={"query": "face"}, only_in_app=False) # Get all face related workflows
all_face_community_workflows = list(all_face_community_workflows)

Workflow Create

Create a new workflow specified by a YAML config file.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.app import App
app = App(app_id="app_id", user_id="user_id")
workflow = app.create_workflow(config_filepath="config.yml")

Workflow Export

Export an existing workflow from Clarifai as a local YAML file.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.workflow import Workflow
workflow = Workflow("https://clarifai.com/clarifai/main/workflows/Demographics")
workflow.export('demographics_workflow.yml')

:mag: Search

Smart Image Search

Clarifai's Smart Search feature leverages vector search capabilities to power the search experience. Vector search is a type of search engine that uses vectors to search and retrieve text, images, and videos.

Instead of traditional keyword-based search, where exact matches are sought, vector search allows for searching based on visual and/or semantic similarity by calculating distances between vector embedding representations of the data.
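As an illustration of the underlying idea (independent of Clarifai's actual implementation), cosine similarity between two embedding vectors can be computed as:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

With metric="cosine", results whose embeddings are closest to similarity 1.0 rank highest.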

Here is an example of how to use vector search to find similar images:

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.search import Search
search = Search(user_id="user_id", app_id="app_id", top_k=1, metric="cosine")

# Search by image url
results = search.query(ranks=[{"image_url": "https://samples.clarifai.com/metro-north.jpg"}])

for data in results:
  print(data.hits[0].input.data.image.url)

Smart Text Search

Smart Text Search is our proprietary feature that uses deep learning techniques to sort, rank, and retrieve text data based on their content and semantic similarity.

Here is an example of how to use Smart Text Search to find similar text:

# Note: CLARIFAI_PAT must be set as env variable.

# Search by text
results = search.query(ranks=[{"text_raw": "I love my dog"}])

Filters

You can use filters to narrow down your search results. Filters can be used to filter by concepts, metadata, and Geo Point.

It is possible to add together multiple search parameters to expand your search. You can even combine negated search terms for more advanced tasks.

For example, you can combine two concepts as below.

# Query for images that contain concept "deer" or "dog"
results = search.query(ranks=[{"image_url": "https://samples.clarifai.com/metro-north.jpg"}],
                        filters=[{"concepts": [{"name": "deer", "value": 1},
                                              {"name": "dog", "value": 1}]}])

# Query for images that contain concepts "deer" and "dog"
results = search.query(ranks=[{"image_url": "https://samples.clarifai.com/metro-north.jpg"}],
                        filters=[{"concepts": [{"name": "deer", "value": 1}]},
                                 {"concepts": [{"name": "dog", "value": 1}]}])

Input filters allow filtering by input_type, input status, and inputs_dataset_id:

results = search.query(filters=[{'input_types': ['image', 'text']}])

Pagination

Below is an example of using Search with Pagination.

# Note: CLARIFAI_PAT must be set as env variable.
from clarifai.client.search import Search
search = Search(user_id="user_id", app_id="app_id", metric="cosine", pagination=True)

# Search by image url
results = search.query(ranks=[{"image_url": "https://samples.clarifai.com/metro-north.jpg"}], page_no=2, per_page=5)

for data in results:
  print(data.hits[0].input.data.image.url)

Retrieval Augmented Generation (RAG)

You can set up and start your RAG pipeline in four lines of code. The setup method automatically creates a new app and the necessary components under the hood. By default, it uses the mistral-7B-Instruct model.

from clarifai.rag import RAG

rag_agent = RAG.setup(user_id="USER_ID")
rag_agent.upload(folder_path="~/docs")
rag_agent.chat(messages=[{"role":"human", "content":"What is Clarifai"}])

If you have previously run the setup method, you can instantiate the RAG class with the prompter workflow URL:

from clarifai.rag import RAG

rag_agent = RAG(workflow_url="WORKFLOW_URL")

:pushpin: More Examples

See many more code examples in this repo. Also see the official Python SDK docs.

:open_file_folder: Model Upload

Examples for uploading models and runners have been moved to this repo. Find our official documentation at docs.clarifai.com/compute/models/upload.

Versioning

This project uses CalVer with the format YY.MM.PATCH:

  • YY β€” Clarifai year, counting from the company's founding (e.g. 12 for the 12th year)
  • MM β€” month number, not zero-padded (e.g. 1 for January, 12 for December)
  • PATCH β€” incremental release within that month, starting at 0

Git tags use the same format without a v prefix (e.g. 12.2.0). The version is defined in clarifai/__init__.py.
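As a sketch (not part of the SDK), such a version string can be split into its CalVer components; the patch component is kept as a string so release-candidate suffixes survive intact:

```python
def parse_calver(version):
    """Split a YY.MM.PATCH version string into (year, month, patch).

    The patch component is returned as a string so that release-candidate
    suffixes (e.g. "2rc6") are preserved.
    """
    yy, mm, patch = version.split(".", 2)
    return int(yy), int(mm), patch
```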

12.3.5 Mar 31, 2026
12.3.4 Mar 26, 2026
12.3.3 Mar 23, 2026
12.3.2 Mar 11, 2026
12.3.1 Mar 05, 2026
12.3.0 Mar 04, 2026
12.2.2 Feb 26, 2026
12.2.2rc6 Mar 04, 2026
12.2.2rc5 Mar 03, 2026
12.2.2rc4 Mar 03, 2026
12.2.2rc3 Mar 03, 2026
12.2.2rc2 Mar 02, 2026
12.2.2rc1 Mar 02, 2026
12.2.1 Feb 19, 2026
12.2.1rc2 Feb 24, 2026
12.2.1rc1 Feb 24, 2026
12.2.0 Feb 13, 2026
12.1.7 Feb 06, 2026
12.1.6 Jan 26, 2026
12.1.6rc2 Feb 05, 2026
12.1.6rc1 Feb 05, 2026
12.1.5 Jan 21, 2026
12.1.4 Jan 13, 2026
12.1.3 Jan 09, 2026
12.1.2 Jan 09, 2026
12.1.1 Jan 06, 2026
12.1.1rc2 Jan 07, 2026
12.1.1rc1 Jan 06, 2026
12.1.0 Jan 06, 2026
11.12.2 Dec 23, 2025
11.12.1 Dec 03, 2025
11.12.1rc4 Dec 16, 2025
11.12.1rc3 Dec 15, 2025
11.12.1rc2 Dec 09, 2025
11.12.1rc1 Dec 09, 2025
11.12.0 Dec 03, 2025
11.10.3 Nov 27, 2025
11.10.2 Nov 15, 2025
11.10.1 Nov 14, 2025
11.10.0 Nov 11, 2025
11.9.0 Oct 22, 2025
11.8.5 Oct 20, 2025
11.8.4 Oct 19, 2025
11.8.3 Oct 09, 2025
11.8.2 Sep 24, 2025
11.8.1 Sep 12, 2025
11.8.0 Sep 11, 2025
11.7.5 Aug 28, 2025
11.7.5rc1 Sep 10, 2025
11.7.4 Aug 27, 2025
11.7.3 Aug 22, 2025
11.7.2 Aug 19, 2025
11.7.1 Aug 19, 2025
11.7.0 Aug 18, 2025
11.6.8 Aug 05, 2025
11.6.7 Aug 04, 2025
11.6.6 Jul 30, 2025
11.6.5 Jul 23, 2025
11.6.4 Jul 11, 2025
11.6.4rc2 Jul 17, 2025
11.6.4rc1 Jul 17, 2025
11.6.3 Jul 09, 2025
11.6.2 Jul 08, 2025
11.6.1 Jul 07, 2025
11.6.0 Jul 02, 2025
11.5.6 Jun 30, 2025
11.5.5 Jun 28, 2025
11.5.4 Jun 25, 2025
11.5.3 Jun 24, 2025
11.5.3rc2 Jun 25, 2025
11.5.3rc1 Jun 25, 2025
11.5.2 Jun 13, 2025
11.5.1 Jun 13, 2025
11.5.0 Jun 10, 2025
11.4.10 May 30, 2025
11.4.9 May 30, 2025
11.4.8 May 30, 2025
11.4.7 May 29, 2025
11.4.6 May 28, 2025
11.4.5 May 28, 2025
11.4.4 May 26, 2025
11.4.3 May 23, 2025
11.4.3rc1 May 22, 2025
11.4.2 May 21, 2025
11.4.1 May 09, 2025
11.4.0 May 08, 2025
11.3.0 Apr 24, 2025
11.3.0rc3 May 08, 2025
11.3.0rc2 May 05, 2025
11.3.0rc1 Apr 28, 2025
11.2.4rc3 Apr 21, 2025
11.2.4rc2 Apr 17, 2025
11.2.4rc1 Apr 17, 2025
11.2.3 Apr 10, 2025
11.2.3rc9 Apr 15, 2025
11.2.3rc8 Apr 15, 2025
11.2.3rc7 Apr 14, 2025
11.2.3rc6 Apr 14, 2025
11.2.3rc5 Apr 14, 2025
11.2.3rc4 Apr 14, 2025
11.2.3rc3 Apr 14, 2025
11.2.3rc2 Apr 14, 2025
11.2.3rc1 Apr 04, 2025
11.2.2 Mar 28, 2025
11.2.1 Mar 25, 2025
11.2.0 Mar 24, 2025
11.1.7 Mar 07, 2025
11.1.7rc9 Apr 08, 2025
11.1.7rc8 Apr 07, 2025
11.1.7rc7 Apr 03, 2025
11.1.7rc6 Apr 01, 2025
11.1.7rc5 Mar 26, 2025
11.1.7rc4 Mar 26, 2025
11.1.7rc3 Mar 20, 2025
11.1.7rc2 Mar 18, 2025
11.1.7rc1 Mar 13, 2025
11.1.6 Mar 06, 2025
11.1.6rc1 Mar 07, 2025
11.1.5 Feb 21, 2025
11.1.5rc8 Mar 06, 2025
11.1.5rc7 Mar 04, 2025
11.1.5rc6 Mar 04, 2025
11.1.5rc5 Mar 04, 2025
11.1.5rc4 Mar 04, 2025
11.1.5rc3 Mar 04, 2025
11.1.5rc2 Mar 04, 2025
11.1.5rc1 Mar 03, 2025
11.1.4 Feb 12, 2025
11.1.4rc2 Feb 12, 2025
11.1.4rc1 Feb 12, 2025
11.1.3 Feb 11, 2025
11.1.2 Feb 11, 2025
11.1.1 Feb 06, 2025
11.1.0 Feb 05, 2025
11.0.7 Jan 24, 2025
11.0.7rc2 Jan 24, 2025
11.0.7rc1 Jan 24, 2025
11.0.6 Jan 24, 2025
11.0.5 Jan 17, 2025
11.0.4 Jan 17, 2025
11.0.3 Jan 15, 2025
11.0.2 Jan 14, 2025
11.0.1 Jan 13, 2025
11.0.0 Jan 07, 2025
10.11.2rc3 Jan 17, 2025
10.11.2rc2 Jan 17, 2025
10.11.2rc1 Jan 17, 2025
10.11.1 Dec 20, 2024
10.11.0 Dec 03, 2024
10.10.1 Nov 18, 2024
10.10.0 Nov 07, 2024
10.9.5 Oct 29, 2024
10.9.4 Oct 28, 2024
10.9.2 Oct 14, 2024
10.9.1 Oct 09, 2024
10.9.0 Oct 07, 2024
10.8.9 Oct 04, 2024
10.8.8 Oct 03, 2024
10.8.7 Oct 03, 2024
10.8.6 Sep 27, 2024
10.8.5 Sep 27, 2024
10.8.4 Sep 25, 2024
10.8.3 Sep 25, 2024
10.8.2 Sep 19, 2024
10.8.1 Sep 06, 2024
10.8.0 Sep 03, 2024
10.7.0 Aug 06, 2024
10.5.4 Jul 12, 2024
10.5.3 Jun 26, 2024
10.5.2 Jun 20, 2024
10.5.1 Jun 17, 2024
10.5.0 Jun 10, 2024
10.3.3 May 07, 2024
10.3.2 May 03, 2024
10.3.1 Apr 19, 2024
10.3.0 Apr 08, 2024
10.2.1 Mar 19, 2024
10.2.0 Mar 18, 2024
10.1.1 Feb 28, 2024
10.1.0 Feb 13, 2024
10.0.1 Jan 18, 2024
10.0.0 Jan 10, 2024
9.11.1 Dec 29, 2023
9.11.0 Dec 11, 2023
9.10.4 Nov 23, 2023
9.10.3 Nov 23, 2023
9.10.2 Nov 17, 2023
9.10.1 Nov 16, 2023
9.10.0 Nov 06, 2023
9.9.3 Oct 16, 2023
9.9.2 Oct 11, 2023
9.9.1 Oct 10, 2023
9.9.0 Oct 06, 2023
9.8.2 Sep 26, 2023
9.8.1 Sep 12, 2023
9.8.0 Sep 06, 2023
9.7.6 Aug 27, 2023
9.7.5 Aug 25, 2023
9.7.4 Aug 25, 2023
9.7.3 Aug 24, 2023
9.7.2 Aug 24, 2023
9.7.1 Aug 15, 2023
9.7.0 Aug 09, 2023
9.6.3 Jul 31, 2023
9.6.2 Jul 27, 2023
9.6.1 Jul 27, 2023
9.6.0 Jul 17, 2023
9.5.4 Jul 07, 2023
9.5.3 Jun 02, 2023
9.5.2 Jun 02, 2023
9.5.1 Jun 02, 2023
9.5.0 Jun 01, 2023
9.4.0 May 02, 2023
9.3.4 May 01, 2023
9.3.3 Apr 17, 2023
9.3.2 Apr 12, 2023
9.3.1 Apr 11, 2023
9.3.0 Apr 03, 2023
9.2.0 Mar 03, 2023
9.1.2 Feb 13, 2023
9.1.0 Feb 07, 2023
9.0.0 Feb 07, 2023
8.12.0rc6 Jan 07, 2023
8.12.0rc5 Jan 04, 2023
8.12.0rc4 Jan 04, 2023
8.12.0rc3 Jan 04, 2023
8.12.0rc2 Jan 04, 2023
8.12.0rc1 Dec 27, 2022
2.6.2 Jul 16, 2019
2.6.1 Mar 11, 2019
2.6.0 Feb 28, 2019
2.5.2 Jan 14, 2019
2.5.1 Jan 08, 2019
2.5.0 Dec 10, 2018
2.4.2 Nov 20, 2018
2.4.1 Nov 08, 2018
2.4.0 Oct 16, 2018
2.3.2 Sep 25, 2018
2.3.1 Aug 29, 2018
2.3.0 Aug 21, 2018
2.2.3 May 10, 2018
2.2.2 May 03, 2018
2.2.1 Apr 25, 2018
2.2.0 Apr 25, 2018
2.1.0 Apr 10, 2018
2.0.33 Jan 13, 2018
2.0.32 Aug 23, 2017
2.0.31 Aug 02, 2017
2.0.30 Jul 19, 2017
2.0.29 Jun 23, 2017
2.0.28 Jun 23, 2017
2.0.27 Jun 14, 2017
2.0.26 Jun 13, 2017
2.0.25 Jun 12, 2017
2.0.24 Jun 03, 2017
2.0.23 Jun 03, 2017
2.0.22 May 24, 2017
2.0.21 Apr 05, 2017
2.0.20 Feb 24, 2017
2.0.19 Feb 22, 2017
2.0.18 Feb 01, 2017
2.0.17 Jan 11, 2017
2.0.16 Jan 11, 2017
2.0.15 Jan 10, 2017
2.0.14 Nov 28, 2016
2.0.13 Nov 22, 2016
2.0.12 Nov 17, 2016
2.0.11 Oct 26, 2016
2.0.10 Oct 25, 2016
2.0.9 Oct 23, 2016
2.0.8 Oct 22, 2016
2.0.7 Oct 18, 2016
2.0.6 Oct 13, 2016
2.0.5 Oct 13, 2016
2.0.4 Oct 12, 2016
2.0.3 Oct 08, 2016
2.0.2 Oct 05, 2016
2.0.1 Sep 27, 2016
0.2.1 Mar 24, 2016
0.2 May 09, 2015
0.1 Mar 20, 2015

Wheel compatibility matrix

Platform: any
Python: 3

Files in release

Dependencies:
clarifai-grpc (>=12.1.0)
clarifai-protocol (<0.1.0,>=0.0.35)
numpy (>=1.22.0)
tqdm (>=4.65.0)
PyYAML (>=6.0.1)
schema (==0.7.5)
Pillow (>=9.5.0)
tabulate (>=0.9.0)
fsspec (>=2024.6.1)
click (>=8.1.7)
requests (>=2.32.5)
aiohttp (>=3.10.0)
uv (==0.7.12)
ruff (==0.11.4)
psutil (==7.0.0)
pygments (>=2.19.2)
pydantic_core (>=2.33.2)
packaging (>=25.0)
tenacity (>=8.2.3)
httpx (>=0.27.0)
openai (>=1.0.0)
huggingface_hub (>=0.16.4)
hf-transfer (>=0.1.9)