Refactor Docker support section in README for clarity and conciseness

This commit is contained in:
2025-09-28 15:31:50 +02:00
parent 97a7c9a4e3
commit 9a7255a52d


@@ -149,49 +149,21 @@ pip install vllm
 ## Docker Support
-llamactl supports running backends in Docker containers with identical behavior to native execution. This is particularly useful for:
-- Production deployments without local backend installation
-- Isolating backend dependencies
-- GPU-accelerated inference using official Docker images
-### Docker Configuration
-Enable Docker support using the new structured backend configuration:
+llamactl supports running backends in Docker containers - perfect for production deployments without local backend installation. Simply enable Docker in your configuration:
 ```yaml
 backends:
   llama-cpp:
-    command: "llama-server"
-    environment: {}  # Environment variables for the backend process
     docker:
       enabled: true
-      image: "ghcr.io/ggml-org/llama.cpp:server"
-      args: ["run", "--rm", "--network", "host", "--gpus", "all"]
-      environment: {}  # Environment variables for the container
   vllm:
-    command: "vllm"
-    args: ["serve"]
-    environment: {}  # Environment variables for the backend process
     docker:
       enabled: true
-      image: "vllm/vllm-openai:latest"
-      args: ["run", "--rm", "--network", "host", "--gpus", "all", "--shm-size", "1g"]
-      environment: {}  # Environment variables for the container
 ```
-### Key Features
-- **Host Networking**: Uses `--network host` for seamless port management
-- **GPU Support**: Includes `--gpus all` for GPU acceleration
-- **Environment Variables**: Configure container environment as needed
-- **Flexible Configuration**: Per-backend Docker settings with sensible defaults
-### Requirements
-- Docker installed and running
-- For GPU support: nvidia-docker2 (Linux) or Docker Desktop with GPU support
-- No local backend installation required when using Docker
+**Requirements:** Docker installed and running. For GPU support: nvidia-docker2 (Linux) or Docker Desktop with GPU support.
+For detailed Docker configuration options, see the [Configuration Guide](docs/getting-started/configuration.md).
 ## Configuration
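The minimal config in the new README only flips `enabled` per backend; the removed lines show that each backend's `docker` block also accepted `image`, `args`, and `environment` keys. A sketch of overriding those defaults for one backend, assuming the keys from the previous revision are still honored:

```yaml
# Hypothetical per-backend override, based on the keys shown in the
# removed README lines (image, args, environment). Not authoritative -
# see docs/getting-started/configuration.md for the supported options.
backends:
  llama-cpp:
    docker:
      enabled: true
      image: "ghcr.io/ggml-org/llama.cpp:server"
      # Host networking and GPU passthrough, as in the old defaults:
      args: ["run", "--rm", "--network", "host", "--gpus", "all"]
      environment: {}  # extra environment variables for the container
```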