Refactor installation and troubleshooting documentation for clarity and completeness

2025-09-03 21:11:26 +02:00
parent 56756192e3
commit 969b4b14e1
4 changed files with 99 additions and 488 deletions

@@ -6,19 +6,17 @@ This guide will walk you through installing Llamactl on your system.
 You need `llama-server` from [llama.cpp](https://github.com/ggml-org/llama.cpp) installed:
-```bash
-# Quick install methods:
-# Homebrew (macOS)
-brew install llama.cpp
-# Or build from source - see llama.cpp docs
+**Quick install methods:**
+```bash
+# Homebrew (macOS/Linux)
+brew install llama.cpp
+
+# Winget (Windows)
+winget install llama.cpp
 ```
-Additional requirements for building from source:
-- Go 1.24 or later
-- Node.js 22 or later
-- Git
-- Sufficient disk space for your models
+Or build from source - see llama.cpp docs
 
 ## Installation Methods
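
The prerequisite above can be sanity-checked before moving on. A minimal sketch, assuming only that `llama-server` should be discoverable on `PATH` after installation:

```shell
# Verify that the llama-server binary is on PATH before installing Llamactl.
if command -v llama-server >/dev/null 2>&1; then
  STATUS="found"
else
  STATUS="missing"
fi
echo "llama-server: $STATUS"
```

If this reports `missing`, revisit the Homebrew/Winget step (or your source build) before continuing.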
@@ -40,6 +38,11 @@ sudo mv llamactl /usr/local/bin/
 ### Option 2: Build from Source
+Requirements:
+- Go 1.24 or later
+- Node.js 22 or later
+- Git
+
 If you prefer to build from source:
 ```bash

@@ -20,6 +20,8 @@ Open your web browser and navigate to:
 http://localhost:8080
 ```
+
+Log in with the management API key. By default it is generated during server startup; copy it from the terminal output.
 You should see the Llamactl web interface.
 
 ## Step 3: Create Your First Instance
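
Since the login step depends on a key printed to the terminal, it can help to capture it from saved startup output. A minimal sketch, assuming the key appears on a line like `Management API Key: <key>` (the exact label is an assumption and may differ in your llamactl version):

```shell
# Hypothetical startup output -- the real llamactl log wording may differ.
printf 'Llamactl starting...\nManagement API Key: sk-example123\n' > llamactl.log

# Extract just the key so it can be pasted into the login form.
KEY=$(grep -o 'Management API Key: [A-Za-z0-9_-]*' llamactl.log | awk '{print $NF}')
echo "$KEY"
```

Adjust the `grep` pattern to match whatever your server actually prints at startup.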