Refactor documentation headings

2025-10-26 14:50:42 +01:00
parent 85e21596d9
commit 781921fc5a
4 changed files with 27 additions and 27 deletions


@@ -82,7 +82,7 @@ llamactl provides Dockerfiles for creating Docker images with backends pre-insta
**Note:** These Dockerfiles are configured for CUDA. For other platforms (CPU, ROCm, Vulkan, etc.), adapt the base image. For llama.cpp, see available tags at [llama.cpp Docker docs](https://github.com/ggml-org/llama.cpp/blob/master/docs/docker.md). For vLLM, check [vLLM docs](https://docs.vllm.ai/en/v0.6.5/serving/deploying_with_docker.html).
-#### Using Docker Compose
+**Using Docker Compose**
```bash
# Clone the repository
@@ -103,9 +103,9 @@ Access the dashboard at:
- llamactl with llama.cpp: http://localhost:8080
- llamactl with vLLM: http://localhost:8081
-#### Using Docker Build and Run
+**Using Docker Build and Run**
-**llamactl with llama.cpp CUDA:**
+1. llamactl with llama.cpp CUDA:
```bash
docker build -f docker/Dockerfile.llamacpp -t llamactl:llamacpp-cuda .
docker run -d \
@@ -116,7 +116,7 @@ docker run -d \
llamactl:llamacpp-cuda
```
-**llamactl with vLLM CUDA:**
+2. llamactl with vLLM CUDA:
```bash
docker build -f docker/Dockerfile.vllm -t llamactl:vllm-cuda .
docker run -d \
@@ -127,7 +127,7 @@ docker run -d \
llamactl:vllm-cuda
```
-**llamactl built from source:**
+3. llamactl built from source:
```bash
docker build -f docker/Dockerfile.source -t llamactl:source .
docker run -d \