mlx-lm 0.28.3

Released: Oct 17, 2025
Author: MLX Contributors
Requires Python: >=3.8

MLX LM

MLX LM is a Python package for generating text and fine-tuning large language models on Apple silicon with MLX.

Some key features include:

  • Integration with the Hugging Face Hub to easily use thousands of LLMs with a single command.
  • Support for quantizing and uploading models to the Hugging Face Hub.
  • Low-rank and full model fine-tuning with support for quantized models.
  • Distributed inference and fine-tuning with mx.distributed.

The easiest way to get started is to install the mlx-lm package:

With pip:

pip install mlx-lm

With conda:

conda install -c conda-forge mlx-lm

Quick Start

To generate text with an LLM use:

mlx_lm.generate --prompt "How tall is Mt Everest?"

To chat with an LLM use:

mlx_lm.chat

This will give you a chat REPL that you can use to interact with the LLM. The chat context is preserved during the lifetime of the REPL.

Commands in mlx-lm typically take command line options which let you specify the model, sampling parameters, and more. Use -h to see a list of available options for a command, e.g.:

mlx_lm.generate -h

The default model for generation and chat is mlx-community/Llama-3.2-3B-Instruct-4bit. You can specify any MLX-compatible model with the --model flag. Thousands are available in the MLX Community Hugging Face organization.

Python API

You can use mlx-lm as a module:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Write a story about Einstein"

messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)

To see a description of all the arguments, run:

>>> help(generate)

Check out the generation example to see how to use the API in more detail, and the batch generation example to see how to efficiently generate continuations for a batch of prompts.

The mlx-lm package also comes with functionality to quantize and optionally upload models to the Hugging Face Hub.

You can convert models using the Python API:

from mlx_lm import convert

repo = "mistralai/Mistral-7B-Instruct-v0.3"
upload_repo = "mlx-community/My-Mistral-7B-Instruct-v0.3-4bit"

convert(repo, quantize=True, upload_repo=upload_repo)

This will generate a 4-bit quantized Mistral 7B and upload it to the repo mlx-community/My-Mistral-7B-Instruct-v0.3-4bit. It will also save the converted model in the path mlx_model by default.

To see a description of all the arguments, run:

>>> help(convert)

Streaming

For streaming generation, use the stream_generate function, which yields a generation response object for each generated token.

For example:

from mlx_lm import load, stream_generate

repo = "mlx-community/Mistral-7B-Instruct-v0.3-4bit"
model, tokenizer = load(repo)

prompt = "Write a story about Einstein"

messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
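
Each response also carries generation statistics. As a rough sketch (the field names generation_tps and peak_memory are assumptions based on the verbose output and may differ between versions), the last response can be inspected after the loop:

# Keep a reference to the last yielded response to read its statistics afterwards.
response = None
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()

if response is not None:
    # generation_tps and peak_memory are assumed field names; check your mlx-lm version.
    print(f"Generation speed: {response.generation_tps:.1f} tokens/sec")
    print(f"Peak memory: {response.peak_memory:.2f} GB")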

Sampling

The generate and stream_generate functions accept sampler and logits_processors keyword arguments. A sampler is any callable which accepts a possibly batched logits array and returns an array of sampled tokens. The logits_processors must be a list of callables which take the token history and current logits as input and return the processed logits. The logits processors are applied in order.

Some standard sampling functions and logits processors are provided in mlx_lm.sample_utils.
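
As a minimal sketch of how these pieces fit together, assuming the make_sampler and make_logits_processors helpers in mlx_lm.sample_utils (check your installed version for the exact signatures):

from mlx_lm import load, generate
from mlx_lm.sample_utils import make_logits_processors, make_sampler

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a story about Einstein"}],
    add_generation_prompt=True,
)

# Temperature and nucleus sampling for the sampler; a repetition penalty
# applied to the logits before each token is sampled.
sampler = make_sampler(temp=0.7, top_p=0.9)
logits_processors = make_logits_processors(repetition_penalty=1.1)

text = generate(
    model,
    tokenizer,
    prompt=prompt,
    sampler=sampler,
    logits_processors=logits_processors,
    verbose=True,
)

A custom sampler or logits processor can be any callable with the signature described above and is passed in the same way.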

Command Line

You can also use mlx-lm from the command line with:

mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.3 --prompt "hello"

This will download a Mistral 7B model from the Hugging Face Hub and generate text using the given prompt.

For a full list of options run:

mlx_lm.generate --help

To quantize a model from the command line run:

mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q

For more options run:

mlx_lm.convert --help

You can upload new models to Hugging Face by specifying --upload-repo to convert. For example, to upload a quantized Mistral-7B model to the MLX Hugging Face community you can do:

mlx_lm.convert \
    --hf-path mistralai/Mistral-7B-Instruct-v0.3 \
    -q \
    --upload-repo mlx-community/my-4bit-mistral

Models can also be converted and quantized directly in the mlx-my-repo Hugging Face Space.

Long Prompts and Generations

mlx-lm has some tools to scale efficiently to long prompts and generations:

  • A rotating fixed-size key-value cache.
  • Prompt caching.

To use the rotating key-value cache pass the argument --max-kv-size n where n can be any integer. Smaller values like 512 will use very little RAM but result in worse quality. Larger values like 4096 or higher will use more RAM but have better quality.

Caching prompts can substantially speed up reuse of the same long context with different queries. To cache a prompt use mlx_lm.cache_prompt. For example:

cat prompt.txt | mlx_lm.cache_prompt \
  --model mistralai/Mistral-7B-Instruct-v0.3 \
  --prompt - \
  --prompt-cache-file mistral_prompt.safetensors

Then use the cached prompt with mlx_lm.generate:

mlx_lm.generate \
    --prompt-cache-file mistral_prompt.safetensors \
    --prompt "\nSummarize the above text."

The cached prompt is treated as a prefix to the supplied prompt. Note also that when using a cached prompt, the model is read from the cache and need not be supplied explicitly.

Prompt caching can also be used in the Python API in order to avoid recomputing the prompt. This is useful in multi-turn dialogues or across requests that use the same context. See the example for more usage details.
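
As a minimal sketch, assuming the make_prompt_cache helper in mlx_lm.models.cache and the prompt_cache keyword argument accepted by generate and stream_generate (names may differ between versions), a cache can be created once and reused across turns:

from mlx_lm import load, generate
from mlx_lm.models.cache import make_prompt_cache

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

# The cache holds the key-value state of everything processed so far.
# Passing max_kv_size here selects the rotating fixed-size cache instead.
prompt_cache = make_prompt_cache(model)

# First turn: the long context is processed once and stored in the cache.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Here is a long report: ... Summarize it."}],
    add_generation_prompt=True,
)
summary = generate(model, tokenizer, prompt=prompt, prompt_cache=prompt_cache)

# Follow-up turn: only the new message is processed; the cached context is
# treated as a prefix, so the long report is not re-encoded.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Now give a one-sentence version."}],
    add_generation_prompt=True,
)
one_liner = generate(model, tokenizer, prompt=prompt, prompt_cache=prompt_cache)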

Supported Models

mlx-lm supports thousands of Hugging Face format LLMs. If the model you want to run is not supported, file an issue or better yet, submit a pull request.

Most Mistral, Llama, Phi-2, and Mixtral style models should work out of the box.

For some models (such as Qwen and plamo) the tokenizer requires you to enable the trust_remote_code option. You can do this by passing --trust-remote-code in the command line. If you don't specify the flag explicitly, you will be prompted to trust remote code in the terminal when running the model.

For Qwen models you must also specify the eos_token. You can do this by passing --eos-token "<|endoftext|>" in the command line.

These options can also be set in the Python API. For example:

model, tokenizer = load(
    "qwen/Qwen-7B",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
)

Large Models

[!NOTE] This requires macOS 15.0 or higher to work.

Models which are large relative to the total RAM available on the machine can be slow. mlx-lm will attempt to make them faster by wiring the memory occupied by the model and cache.

If you see the following warning message:

[WARNING] Generating with a model that requires ...

then the model will likely be slow on the given machine. If the model fits in RAM then it can often be sped up by increasing the system wired memory limit. To increase the limit, set the following sysctl:

sudo sysctl iogpu.wired_limit_mb=N

The value N should be larger than the size of the model in megabytes but smaller than the memory size of the machine.

Dependencies

  • mlx (>=0.29.2)
  • numpy
  • transformers (>=4.39.3)
  • protobuf
  • pyyaml
  • jinja2