Fix typos and consistent naming for Llamactl across documentation

2025-08-31 14:28:17 +02:00
parent bd31c03f4a
commit 0b264c8015
12 changed files with 45 additions and 45 deletions


@@ -1,10 +1,10 @@
# Backends
-LlamaCtl supports multiple backends for running large language models. This guide covers the available backends and their configuration.
+Llamactl supports multiple backends for running large language models. This guide covers the available backends and their configuration.
## Llama.cpp Backend
-The primary backend for LlamaCtl, providing robust support for GGUF models.
+The primary backend for Llamactl, providing robust support for GGUF models.
### Features
@@ -107,7 +107,7 @@ options:
## Future Backends
-LlamaCtl is designed to support multiple backends. Planned additions:
+Llamactl is designed to support multiple backends. Planned additions:
### vLLM Backend
@@ -137,7 +137,7 @@ Integration with Ollama for easy model management:
### Automatic Detection
-LlamaCtl can automatically detect the best backend:
+Llamactl can automatically detect the best backend:
```yaml
backends:


@@ -1,10 +1,10 @@
# Monitoring
-Comprehensive monitoring setup for LlamaCtl in production environments.
+Comprehensive monitoring setup for Llamactl in production environments.
## Overview
-Effective monitoring of LlamaCtl involves tracking:
+Effective monitoring of Llamactl involves tracking:
- Instance health and performance
- System resource usage
@@ -15,7 +15,7 @@ Effective monitoring of LlamaCtl involves tracking:
### Health Checks
-LlamaCtl provides built-in health monitoring:
+Llamactl provides built-in health monitoring:
```bash
# Check overall system health
@@ -45,7 +45,7 @@ curl http://localhost:8080/metrics
### Configuration
-Add LlamaCtl as a Prometheus target:
+Add Llamactl as a Prometheus target:
```yaml
# prometheus.yml
@@ -59,7 +59,7 @@ scrape_configs:
### Custom Metrics
-Enable additional metrics in LlamaCtl:
+Enable additional metrics in Llamactl:
```yaml
# config.yaml
@@ -76,7 +76,7 @@ monitoring:
## Grafana Dashboards
-### LlamaCtl Dashboard
+### Llamactl Dashboard
Import the official Grafana dashboard:
@@ -135,7 +135,7 @@ groups:
labels:
severity: critical
annotations:
-summary: "LlamaCtl instance {{ $labels.instance_name }} is down"
+summary: "Llamactl instance {{ $labels.instance_name }} is down"
- alert: HighMemoryUsage
expr: llamactl_instance_memory_percent > 90
@@ -170,7 +170,7 @@ receivers:
slack_configs:
- api_url: 'https://hooks.slack.com/services/...'
channel: '#alerts'
-title: 'LlamaCtl Alert'
+title: 'Llamactl Alert'
text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
```
@@ -373,7 +373,7 @@ curl http://localhost:8080/metrics | grep rate_limit
**Metrics not appearing:**
1. Check Prometheus configuration
2. Verify network connectivity
-3. Review LlamaCtl logs for errors
+3. Review Llamactl logs for errors
**High memory usage:**
1. Check for memory leaks in profiles


@@ -1,6 +1,6 @@
# Troubleshooting
-Common issues and solutions for LlamaCtl deployment and operation.
+Common issues and solutions for Llamactl deployment and operation.
## Installation Issues
@@ -29,7 +29,7 @@ Common issues and solutions for LlamaCtl deployment and operation.
### Permission Denied
-**Problem:** Permission errors when starting LlamaCtl
+**Problem:** Permission errors when starting Llamactl
**Solutions:**
1. Check file permissions:
@@ -269,7 +269,7 @@ Common issues and solutions for LlamaCtl deployment and operation.
### High CPU Usage
-**Problem:** LlamaCtl consuming excessive CPU
+**Problem:** Llamactl consuming excessive CPU
**Diagnostic Steps:**
1. Identify CPU-intensive processes:
@@ -302,7 +302,7 @@ Common issues and solutions for LlamaCtl deployment and operation.
### Connection Refused
-**Problem:** Cannot connect to LlamaCtl web interface
+**Problem:** Cannot connect to Llamactl web interface
**Diagnostic Steps:**
1. Check if service is running:


@@ -1,6 +1,6 @@
# Building from Source
-This guide covers building LlamaCtl from source code for development and production deployment.
+This guide covers building Llamactl from source code for development and production deployment.
## Prerequisites
@@ -261,7 +261,7 @@ LDFLAGS := -s -w -X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME)
.PHONY: build clean test install
build:
-	@echo "Building LlamaCtl..."
+	@echo "Building Llamactl..."
@cd webui && npm run build
@go build -ldflags="$(LDFLAGS)" -o llamactl cmd/server/main.go
@@ -423,7 +423,7 @@ Create a systemd service:
```ini
# /etc/systemd/system/llamactl.service
[Unit]
-Description=LlamaCtl Server
+Description=Llamactl Server
After=network.target
[Service]


@@ -1,6 +1,6 @@
# Contributing
-Thank you for your interest in contributing to LlamaCtl! This guide will help you get started with development and contribution.
+Thank you for your interest in contributing to Llamactl! This guide will help you get started with development and contribution.
## Development Setup
@@ -370,4 +370,4 @@ Contributors are recognized in:
- Documentation credits
- Annual contributor highlights
-Thank you for contributing to LlamaCtl!
+Thank you for contributing to Llamactl!


@@ -1,6 +1,6 @@
# Configuration
-LlamaCtl can be configured through various methods to suit your needs.
+Llamactl can be configured through various methods to suit your needs.
## Configuration File
@@ -39,7 +39,7 @@ limits:
## Environment Variables
-You can also configure LlamaCtl using environment variables:
+You can also configure Llamactl using environment variables:
```bash
# Server settings


@@ -1,10 +1,10 @@
# Installation
-This guide will walk you through installing LlamaCtl on your system.
+This guide will walk you through installing Llamactl on your system.
## Prerequisites
-Before installing LlamaCtl, ensure you have:
+Before installing Llamactl, ensure you have:
- Go 1.19 or later
- Git
@@ -52,4 +52,4 @@ llamactl --version
## Next Steps
-Now that LlamaCtl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running!
+Now that Llamactl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running!


@@ -1,16 +1,16 @@
# Quick Start
-This guide will help you get LlamaCtl up and running in just a few minutes.
+This guide will help you get Llamactl up and running in just a few minutes.
-## Step 1: Start LlamaCtl
+## Step 1: Start Llamactl
-Start the LlamaCtl server:
+Start the Llamactl server:
```bash
llamactl
```
-By default, LlamaCtl will start on `http://localhost:8080`.
+By default, Llamactl will start on `http://localhost:8080`.
## Step 2: Access the Web UI
@@ -20,7 +20,7 @@ Open your web browser and navigate to:
http://localhost:8080
```
-You should see the LlamaCtl web interface.
+You should see the Llamactl web interface.
## Step 3: Create Your First Instance


@@ -1,10 +1,10 @@
-# LlamaCtl Documentation
+# Llamactl Documentation
-Welcome to the LlamaCtl documentation! LlamaCtl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models.
+Welcome to the Llamactl documentation! Llamactl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models.
-## What is LlamaCtl?
+## What is Llamactl?
-LlamaCtl is designed to simplify the deployment and management of Llama.cpp instances. It provides:
+Llamactl is designed to simplify the deployment and management of Llama.cpp instances. It provides:
- **Instance Management**: Start, stop, and monitor multiple Llama.cpp instances
- **Web UI**: User-friendly interface for managing your models
@@ -23,8 +23,8 @@ LlamaCtl is designed to simplify the deployment and management of Llama.cpp inst
## Quick Links
-- [Installation Guide](getting-started/installation.md) - Get LlamaCtl up and running
-- [Quick Start](getting-started/quick-start.md) - Your first steps with LlamaCtl
+- [Installation Guide](getting-started/installation.md) - Get Llamactl up and running
+- [Quick Start](getting-started/quick-start.md) - Your first steps with Llamactl
- [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface
- [API Reference](user-guide/api-reference.md) - Complete API documentation
@@ -34,7 +34,7 @@ If you need help or have questions:
- Check the [Troubleshooting](advanced/troubleshooting.md) guide
- Visit our [GitHub repository](https://github.com/lordmathis/llamactl)
-- Read the [Contributing guide](development/contributing.md) to help improve LlamaCtl
+- Read the [Contributing guide](development/contributing.md) to help improve Llamactl
---


@@ -1,6 +1,6 @@
# API Reference
-Complete reference for the LlamaCtl REST API.
+Complete reference for the Llamactl REST API.
## Base URL
@@ -314,7 +314,7 @@ GET /api/system/info
### Get Configuration
-Get current LlamaCtl configuration.
+Get current Llamactl configuration.
```http
GET /api/config
@@ -322,7 +322,7 @@ GET /api/config
### Update Configuration
-Update LlamaCtl configuration (requires restart).
+Update Llamactl configuration (requires restart).
```http
PUT /api/config


@@ -1,6 +1,6 @@
# Managing Instances
-Learn how to effectively manage your Llama.cpp instances with LlamaCtl.
+Learn how to effectively manage your Llama.cpp instances with Llamactl.
## Instance Lifecycle


@@ -1,6 +1,6 @@
# Web UI Guide
-The LlamaCtl Web UI provides an intuitive interface for managing your Llama.cpp instances.
+The Llamactl Web UI provides an intuitive interface for managing your Llama.cpp instances.
## Overview
@@ -169,7 +169,7 @@ If authentication is enabled:
### Common UI Issues
**Page won't load:**
-- Check if LlamaCtl server is running
+- Check if Llamactl server is running
- Verify the correct URL and port
- Check browser console for errors