Mirror of https://github.com/lordmathis/llamactl.git, synced 2025-11-06 00:54:23 +00:00
Initial support for docker
README.md (25 changed lines)
@@ -95,7 +95,26 @@ sudo mv llamactl /usr/local/bin/
 # Windows - Download from releases page
 ```
 
-### Option 2: Build from Source
+### Option 2: Docker (No local backend installation required)
+
+```bash
+# Clone repository and build Docker images
+git clone https://github.com/lordmathis/llamactl.git
+cd llamactl
+mkdir -p data/llamacpp data/vllm models
+
+# Build and start llamactl with llama.cpp CUDA backend
+docker-compose up llamactl-llamacpp -d
+
+# Build and start llamactl with vLLM CUDA backend
+docker-compose up llamactl-vllm -d
+```
+
+**Features:** Full CUDA support, automatic latest release installation, no backend dependencies.
+
+For detailed Docker setup and configuration, see the [Installation Guide](docs/getting-started/installation.md).
+
+### Option 3: Build from Source
 
 Requires Go 1.24+ and Node.js 22+
 
 ```bash
 git clone https://github.com/lordmathis/llamactl.git
@@ -147,9 +166,9 @@ pip install vllm
 # Or use Docker - no local installation required
 ```
 
-## Docker Support
+## Backend Docker Support
 
-llamactl supports running backends in Docker containers - perfect for production deployments without local backend installation. Simply enable Docker in your configuration:
+llamactl can run backends in Docker containers:
 
 ```yaml
 backends:
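The `backends:` YAML in the second hunk is cut off in this view. As a hedged illustration only, a backend Docker config along these lines might look like the sketch below; the key names (`docker`, `enabled`, `image`, `args`) and the image tags are assumptions for illustration, not llamactl's confirmed schema, which is documented in the project's configuration docs:

```yaml
# Hypothetical sketch; key names and images are assumptions,
# not llamactl's verified configuration schema.
backends:
  llama-cpp:
    docker:
      enabled: true                              # run the backend in a container instead of locally
      image: ghcr.io/ggml-org/llama.cpp:server   # assumed upstream llama.cpp server image
  vllm:
    docker:
      enabled: true
      image: vllm/vllm-openai:latest             # assumed upstream vLLM image
```

With a config like this, llamactl would pull and run the named container image for each backend, which is what lets the README claim "no backend dependencies" for the Docker option.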