From 0b264c8015ed2d4520b530812f4df84b65e683ee Mon Sep 17 00:00:00 2001
From: LordMathis
Date: Sun, 31 Aug 2025 14:28:17 +0200
Subject: [PATCH] Fix typos and consistent naming for Llamactl across documentation

---
 docs/advanced/backends.md             |  8 ++++----
 docs/advanced/monitoring.md           | 18 +++++++++---------
 docs/advanced/troubleshooting.md      |  8 ++++----
 docs/development/building.md          |  6 +++---
 docs/development/contributing.md      |  4 ++--
 docs/getting-started/configuration.md |  4 ++--
 docs/getting-started/installation.md  |  6 +++---
 docs/getting-started/quick-start.md   | 10 +++++-----
 docs/index.md                         | 14 +++++++-------
 docs/user-guide/api-reference.md      |  6 +++---
 docs/user-guide/managing-instances.md |  2 +-
 docs/user-guide/web-ui.md             |  4 ++--
 12 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/docs/advanced/backends.md b/docs/advanced/backends.md
index e2542ea..0491bc4 100644
--- a/docs/advanced/backends.md
+++ b/docs/advanced/backends.md
@@ -1,10 +1,10 @@
 # Backends
 
-LlamaCtl supports multiple backends for running large language models. This guide covers the available backends and their configuration.
+Llamactl supports multiple backends for running large language models. This guide covers the available backends and their configuration.
 
 ## Llama.cpp Backend
 
-The primary backend for LlamaCtl, providing robust support for GGUF models.
+The primary backend for Llamactl, providing robust support for GGUF models.
 
 ### Features
 
@@ -107,7 +107,7 @@ options:
 
 ## Future Backends
 
-LlamaCtl is designed to support multiple backends. Planned additions:
+Llamactl is designed to support multiple backends. Planned additions:
 
 ### vLLM Backend
@@ -137,7 +137,7 @@ Integration with Ollama for easy model management:
 
 ### Automatic Detection
 
-LlamaCtl can automatically detect the best backend:
+Llamactl can automatically detect the best backend:
 
 ```yaml
 backends:
diff --git a/docs/advanced/monitoring.md b/docs/advanced/monitoring.md
index 7c3c76e..71051e6 100644
--- a/docs/advanced/monitoring.md
+++ b/docs/advanced/monitoring.md
@@ -1,10 +1,10 @@
 # Monitoring
 
-Comprehensive monitoring setup for LlamaCtl in production environments.
+Comprehensive monitoring setup for Llamactl in production environments.
 
 ## Overview
 
-Effective monitoring of LlamaCtl involves tracking:
+Effective monitoring of Llamactl involves tracking:
 
 - Instance health and performance
 - System resource usage
@@ -15,7 +15,7 @@ Effective monitoring of LlamaCtl involves tracking:
 
 ### Health Checks
 
-LlamaCtl provides built-in health monitoring:
+Llamactl provides built-in health monitoring:
 
 ```bash
 # Check overall system health
@@ -45,7 +45,7 @@ curl http://localhost:8080/metrics
 
 ### Configuration
 
-Add LlamaCtl as a Prometheus target:
+Add Llamactl as a Prometheus target:
 
 ```yaml
 # prometheus.yml
@@ -59,7 +59,7 @@ scrape_configs:
 
 ### Custom Metrics
 
-Enable additional metrics in LlamaCtl:
+Enable additional metrics in Llamactl:
 
 ```yaml
 # config.yaml
@@ -76,7 +76,7 @@ monitoring:
 
 ## Grafana Dashboards
 
-### LlamaCtl Dashboard
+### Llamactl Dashboard
 
 Import the official Grafana dashboard:
@@ -135,7 +135,7 @@ groups:
         labels:
           severity: critical
         annotations:
-          summary: "LlamaCtl instance {{ $labels.instance_name }} is down"
+          summary: "Llamactl instance {{ $labels.instance_name }} is down"
 
   - alert: HighMemoryUsage
     expr: llamactl_instance_memory_percent > 90
@@ -170,7 +170,7 @@ receivers:
     slack_configs:
       - api_url: 'https://hooks.slack.com/services/...'
        channel: '#alerts'
-        title: 'LlamaCtl Alert'
+        title: 'Llamactl Alert'
         text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
 ```
@@ -373,7 +373,7 @@ curl http://localhost:8080/metrics | grep rate_limit
 **Metrics not appearing:**
 1. Check Prometheus configuration
 2. Verify network connectivity
-3. Review LlamaCtl logs for errors
+3. Review Llamactl logs for errors
 
 **High memory usage:**
 1. Check for memory leaks in profiles
diff --git a/docs/advanced/troubleshooting.md b/docs/advanced/troubleshooting.md
index 58b85a7..1d070d3 100644
--- a/docs/advanced/troubleshooting.md
+++ b/docs/advanced/troubleshooting.md
@@ -1,6 +1,6 @@
 # Troubleshooting
 
-Common issues and solutions for LlamaCtl deployment and operation.
+Common issues and solutions for Llamactl deployment and operation.
 
 ## Installation Issues
 
@@ -29,7 +29,7 @@ Common issues and solutions for LlamaCtl deployment and operation.
 
 ### Permission Denied
 
-**Problem:** Permission errors when starting LlamaCtl
+**Problem:** Permission errors when starting Llamactl
 
 **Solutions:**
 1. Check file permissions:
@@ -269,7 +269,7 @@ Common issues and solutions for LlamaCtl deployment and operation.
 
 ### High CPU Usage
 
-**Problem:** LlamaCtl consuming excessive CPU
+**Problem:** Llamactl consuming excessive CPU
 
 **Diagnostic Steps:**
 1. Identify CPU-intensive processes:
@@ -302,7 +302,7 @@ Common issues and solutions for LlamaCtl deployment and operation.
 
 ### Connection Refused
 
-**Problem:** Cannot connect to LlamaCtl web interface
+**Problem:** Cannot connect to Llamactl web interface
 
 **Diagnostic Steps:**
 1. Check if service is running:
diff --git a/docs/development/building.md b/docs/development/building.md
index 6215854..a102915 100644
--- a/docs/development/building.md
+++ b/docs/development/building.md
@@ -1,6 +1,6 @@
 # Building from Source
 
-This guide covers building LlamaCtl from source code for development and production deployment.
+This guide covers building Llamactl from source code for development and production deployment.
 
 ## Prerequisites
 
@@ -261,7 +261,7 @@ LDFLAGS := -s -w -X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME)
 
 .PHONY: build clean test install
 
 build:
-	@echo "Building LlamaCtl..."
+	@echo "Building Llamactl..."
 	@cd webui && npm run build
 	@go build -ldflags="$(LDFLAGS)" -o llamactl cmd/server/main.go
@@ -423,7 +423,7 @@ Create a systemd service:
 ```ini
 # /etc/systemd/system/llamactl.service
 [Unit]
-Description=LlamaCtl Server
+Description=Llamactl Server
 After=network.target
 
 [Service]
diff --git a/docs/development/contributing.md b/docs/development/contributing.md
index c2c146f..3b27d90 100644
--- a/docs/development/contributing.md
+++ b/docs/development/contributing.md
@@ -1,6 +1,6 @@
 # Contributing
 
-Thank you for your interest in contributing to LlamaCtl! This guide will help you get started with development and contribution.
+Thank you for your interest in contributing to Llamactl! This guide will help you get started with development and contribution.
 
 ## Development Setup
 
@@ -370,4 +370,4 @@ Contributors are recognized in:
 - Documentation credits
 - Annual contributor highlights
 
-Thank you for contributing to LlamaCtl!
+Thank you for contributing to Llamactl!
diff --git a/docs/getting-started/configuration.md b/docs/getting-started/configuration.md
index 6c8ae7f..e9ba2d3 100644
--- a/docs/getting-started/configuration.md
+++ b/docs/getting-started/configuration.md
@@ -1,6 +1,6 @@
 # Configuration
 
-LlamaCtl can be configured through various methods to suit your needs.
+Llamactl can be configured through various methods to suit your needs.
 
 ## Configuration File
 
@@ -39,7 +39,7 @@ limits:
 
 ## Environment Variables
 
-You can also configure LlamaCtl using environment variables:
+You can also configure Llamactl using environment variables:
 
 ```bash
 # Server settings
diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md
index 7a71629..9be575e 100644
--- a/docs/getting-started/installation.md
+++ b/docs/getting-started/installation.md
@@ -1,10 +1,10 @@
 # Installation
 
-This guide will walk you through installing LlamaCtl on your system.
+This guide will walk you through installing Llamactl on your system.
 
 ## Prerequisites
 
-Before installing LlamaCtl, ensure you have:
+Before installing Llamactl, ensure you have:
 
 - Go 1.19 or later
 - Git
@@ -52,4 +52,4 @@ llamactl --version
 
 ## Next Steps
 
-Now that LlamaCtl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running!
+Now that Llamactl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running!
diff --git a/docs/getting-started/quick-start.md b/docs/getting-started/quick-start.md
index 2d77e2e..a882b10 100644
--- a/docs/getting-started/quick-start.md
+++ b/docs/getting-started/quick-start.md
@@ -1,16 +1,16 @@
 # Quick Start
 
-This guide will help you get LlamaCtl up and running in just a few minutes.
+This guide will help you get Llamactl up and running in just a few minutes.
 
-## Step 1: Start LlamaCtl
+## Step 1: Start Llamactl
 
-Start the LlamaCtl server:
+Start the Llamactl server:
 
 ```bash
 llamactl
 ```
 
-By default, LlamaCtl will start on `http://localhost:8080`.
+By default, Llamactl will start on `http://localhost:8080`.
 
 ## Step 2: Access the Web UI
 
@@ -20,7 +20,7 @@ Open your web browser and navigate to:
 http://localhost:8080
 ```
 
-You should see the LlamaCtl web interface.
+You should see the Llamactl web interface.
 
 ## Step 3: Create Your First Instance
 
diff --git a/docs/index.md b/docs/index.md
index f1fd69f..b45cae2 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,10 +1,10 @@
-# LlamaCtl Documentation
+# Llamactl Documentation
 
-Welcome to the LlamaCtl documentation! LlamaCtl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models.
+Welcome to the Llamactl documentation! Llamactl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models.
 
-## What is LlamaCtl?
+## What is Llamactl?
 
-LlamaCtl is designed to simplify the deployment and management of Llama.cpp instances. It provides:
+Llamactl is designed to simplify the deployment and management of Llama.cpp instances. It provides:
 
 - **Instance Management**: Start, stop, and monitor multiple Llama.cpp instances
 - **Web UI**: User-friendly interface for managing your models
@@ -23,8 +23,8 @@ LlamaCtl is designed to simplify the deployment and management of Llama.cpp inst
 
 ## Quick Links
 
-- [Installation Guide](getting-started/installation.md) - Get LlamaCtl up and running
-- [Quick Start](getting-started/quick-start.md) - Your first steps with LlamaCtl
+- [Installation Guide](getting-started/installation.md) - Get Llamactl up and running
+- [Quick Start](getting-started/quick-start.md) - Your first steps with Llamactl
 - [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface
 - [API Reference](user-guide/api-reference.md) - Complete API documentation
 
@@ -34,7 +34,7 @@ If you need help or have questions:
 
 - Check the [Troubleshooting](advanced/troubleshooting.md) guide
 - Visit our [GitHub repository](https://github.com/lordmathis/llamactl)
-- Read the [Contributing guide](development/contributing.md) to help improve LlamaCtl
+- Read the [Contributing guide](development/contributing.md) to help improve Llamactl
 
 ---
 
diff --git a/docs/user-guide/api-reference.md b/docs/user-guide/api-reference.md
index 813aa06..56c274d 100644
--- a/docs/user-guide/api-reference.md
+++ b/docs/user-guide/api-reference.md
@@ -1,6 +1,6 @@
 # API Reference
 
-Complete reference for the LlamaCtl REST API.
+Complete reference for the Llamactl REST API.
 
 ## Base URL
 
@@ -314,7 +314,7 @@ GET /api/system/info
 
 ### Get Configuration
 
-Get current LlamaCtl configuration.
+Get current Llamactl configuration.
 
 ```http
 GET /api/config
@@ -322,7 +322,7 @@ GET /api/config
 
 ### Update Configuration
 
-Update LlamaCtl configuration (requires restart).
+Update Llamactl configuration (requires restart).
 
 ```http
 PUT /api/config
diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md
index fcb3455..fa1102e 100644
--- a/docs/user-guide/managing-instances.md
+++ b/docs/user-guide/managing-instances.md
@@ -1,6 +1,6 @@
 # Managing Instances
 
-Learn how to effectively manage your Llama.cpp instances with LlamaCtl.
+Learn how to effectively manage your Llama.cpp instances with Llamactl.
 
 ## Instance Lifecycle
 
diff --git a/docs/user-guide/web-ui.md b/docs/user-guide/web-ui.md
index 5207556..9b1ba29 100644
--- a/docs/user-guide/web-ui.md
+++ b/docs/user-guide/web-ui.md
@@ -1,6 +1,6 @@
 # Web UI Guide
 
-The LlamaCtl Web UI provides an intuitive interface for managing your Llama.cpp instances.
+The Llamactl Web UI provides an intuitive interface for managing your Llama.cpp instances.
 
 ## Overview
 
@@ -169,7 +169,7 @@ If authentication is enabled:
 
 ### Common UI Issues
 
 **Page won't load:**
-- Check if LlamaCtl server is running
+- Check if Llamactl server is running
 - Verify the correct URL and port
 - Check browser console for errors
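A rename patch like this is easy to check mechanically: after it is applied, a case-sensitive search for the old spelling should find nothing. The sketch below demonstrates the idea on a throwaway docs tree (the file names are stand-ins, not the real repository layout):

```shell
# Sketch: verify that a LlamaCtl -> Llamactl rename left no stale spellings.
# Build a tiny demo docs tree as a stand-in for the real repository.
tmp=$(mktemp -d)
printf '# Llamactl Documentation\n' > "$tmp/index.md"
printf 'Llamactl can be configured through various methods.\n' > "$tmp/configuration.md"

# grep is case-sensitive by default, so the old mixed-case spelling
# is found only if some file still contains it.
if grep -rn 'LlamaCtl' "$tmp"; then
  echo 'stale spelling found'
else
  echo 'docs are consistent'
fi
```

Against a real checkout, the same one-liner (`grep -rn 'LlamaCtl' docs/`) run from the repository root should exit without matches once the patch is in.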