Mirror of https://github.com/lordmathis/llamactl.git, synced 2025-11-06 00:54:23 +00:00
Enhance documentation for MLX backend support
@@ -4,11 +4,14 @@ This guide will walk you through installing Llamactl on your system.
## Prerequisites
### Backend Dependencies
llamactl supports multiple backends. Install at least one:
**For llama.cpp backend (all platforms):**
You need `llama-server` from [llama.cpp](https://github.com/ggml-org/llama.cpp) installed:
**Quick install methods:**
```bash
# Homebrew (macOS/Linux)
brew install llama.cpp
@@ -18,6 +21,22 @@ winget install llama.cpp
```
Or build from source; see the llama.cpp docs.
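The commands below are a rough sketch of a source build, assuming the standard CMake workflow and the `llama-server` build target from the llama.cpp repository; check the llama.cpp build documentation for current flags and GPU-specific options:

```bash
# Sketch of a source build; verify flags and targets against the llama.cpp docs
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Configure and build the llama-server binary in Release mode
cmake -B build
cmake --build build --config Release --target llama-server

# The binary typically lands in build/bin; confirm it runs
./build/bin/llama-server --help
```

However you install it, make sure the resulting `llama-server` binary is somewhere llamactl can execute it, for example on your PATH.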
**For MLX backend (macOS only):**
MLX provides optimized inference on Apple Silicon. Install MLX-LM:
```bash
# Install via pip (requires Python 3.8+)
pip install mlx-lm
# Or in a virtual environment (recommended)
python -m venv mlx-env
source mlx-env/bin/activate
pip install mlx-lm
```
Note: The MLX backend is only available on macOS with Apple Silicon (M1, M2, M3, etc.).
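As a quick sanity check, the following commands (a sketch; output wording may vary between MLX versions) confirm that MLX detects the Apple Silicon GPU and that mlx-lm is installed in the active environment:

```bash
# Should print something like "Device(gpu, 0)" on Apple Silicon
python -c "import mlx.core as mx; print(mx.default_device())"

# Confirm the mlx-lm package is present in the active (virtual) environment
pip show mlx-lm
```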
## Installation Methods
### Option 1: Download Binary (Recommended)