Initial support for Docker

2025-09-28 17:51:16 +02:00
parent 92cb57e816
commit b940b38e46
6 changed files with 237 additions and 4 deletions


@@ -95,7 +95,26 @@ sudo mv llamactl /usr/local/bin/
# Windows - Download from releases page
```
### Option 2: Build from Source
### Option 2: Docker (No local backend installation required)
```bash
# Clone repository and build Docker images
git clone https://github.com/lordmathis/llamactl.git
cd llamactl
mkdir -p data/llamacpp data/vllm models
# Build and start llamactl with the llama.cpp CUDA backend
docker-compose up -d llamactl-llamacpp
# Build and start llamactl with the vLLM CUDA backend
docker-compose up -d llamactl-vllm
```
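For reference, here is a minimal sketch of how the `llamactl-llamacpp` service in the repository's `docker-compose.yml` could be defined. The Dockerfile path, port, and mount points below are illustrative assumptions, not the actual file contents:
```yaml
# Hypothetical sketch -- check the repository's docker-compose.yml for the real definition
services:
  llamactl-llamacpp:
    build:
      context: .
      dockerfile: docker/Dockerfile.llamacpp  # assumed path
    ports:
      - "8080:8080"                           # assumed llamactl port
    volumes:
      - ./data/llamacpp:/data                 # pairs with the data dir created above; mount point assumed
      - ./models:/models                      # assumed model mount
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]             # standard Compose syntax for CUDA access
```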
**Features:** Full CUDA support, automatic installation of the latest backend releases, and no local backend dependencies.
For detailed Docker setup and configuration, see the [Installation Guide](docs/getting-started/installation.md).
### Option 3: Build from Source
Requires Go 1.24+ and Node.js 22+
```bash
git clone https://github.com/lordmathis/llamactl.git
@@ -147,9 +166,9 @@ pip install vllm
# Or use Docker - no local installation required
```
## Docker Support
## Backend Docker Support
llamactl supports running backends in Docker containers, ideal for production deployments without a local backend installation. Simply enable Docker in your configuration:
llamactl can run backends in Docker containers:
```yaml
backends:
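  # Sketch only: the exact keys below (docker.enabled, docker.image) are assumptions;
  # consult the llamactl configuration docs for the real schema.
  llama-cpp:
    docker:
      enabled: true
      image: ghcr.io/ggml-org/llama.cpp:server  # assumed image
  vllm:
    docker:
      enabled: true
      image: vllm/vllm-openai:latest            # assumed image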