mirror of
https://github.com/lordmathis/llamactl.git
{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Llamactl Documentation","text":"<p>Welcome to the Llamactl documentation! </p> <p></p>"},{"location":"#what-is-llamactl","title":"What is Llamactl?","text":"<p>Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.</p>"},{"location":"#features","title":"Features","text":""},{"location":"#easy-model-management","title":"\ud83d\ude80 Easy Model Management","text":"<ul> <li>Multiple Model Serving: Run different models simultaneously (7B for speed, 70B for quality) </li> <li>On-Demand Instance Start: Automatically launch instances upon receiving API requests </li> <li>State Persistence: Ensure instances remain intact across server restarts </li> </ul>"},{"location":"#universal-compatibility","title":"\ud83d\udd17 Universal Compatibility","text":"<ul> <li>OpenAI API Compatible: Drop-in replacement - route requests by instance name </li> <li>Multi-Backend Support: Native support for llama.cpp, MLX (Apple Silicon optimized), and vLLM </li> <li>Docker Support: Run backends in containers </li> </ul>"},{"location":"#user-friendly-interface","title":"\ud83c\udf10 User-Friendly Interface","text":"<ul> <li>Web Dashboard: Modern React UI for visual management (unlike CLI-only tools) </li> <li>API Key Authentication: Separate keys for management vs inference access </li> </ul>"},{"location":"#smart-operations","title":"\u26a1 Smart Operations","text":"<ul> <li>Instance Monitoring: Health checks, auto-restart, log management </li> <li>Smart Resource Management: Idle timeout, LRU eviction, and configurable instance limits </li> <li>Environment Variables: Set custom environment variables per instance for advanced configuration </li> </ul>"},{"location":"#quick-links","title":"Quick Links","text":"<ul> <li>Installation Guide - Get Llamactl up and running</li> <li>Configuration Guide - Detailed configuration options</li> <li>Quick Start - Your first steps with Llamactl</li> <li>Managing Instances - Instance lifecycle management</li> <li>API Reference - Complete API documentation</li> </ul>"},{"location":"#getting-help","title":"Getting Help","text":"<p>If you need help or have questions:</p> <ul> <li>Check the Troubleshooting guide</li> <li>Visit the GitHub repository</li> <li>Review the Configuration Guide for advanced settings</li> </ul>"},{"location":"#license","title":"License","text":"<p>MIT License - see the LICENSE file.</p>"},{"location":"getting-started/configuration/","title":"Configuration","text":"<p>llamactl can be configured via configuration files or environment variables. 
Configuration is loaded in the following order of precedence:</p> <pre><code>Defaults < Configuration file < Environment variables\n</code></pre> <p>llamactl works out of the box with sensible defaults, but you can customize the behavior to suit your needs.</p>"},{"location":"getting-started/configuration/#default-configuration","title":"Default Configuration","text":"<p>Here's the default configuration with all available options:</p> <pre><code>server:\n host: \"0.0.0.0\" # Server host to bind to\n port: 8080 # Server port to bind to\n allowed_origins: [\"*\"] # Allowed CORS origins (default: all)\n enable_swagger: false # Enable Swagger UI for API docs\n\nbackends:\n llama-cpp:\n command: \"llama-server\"\n args: []\n environment: {} # Environment variables for the backend process\n docker:\n enabled: false\n image: \"ghcr.io/ggml-org/llama.cpp:server\"\n args: [\"run\", \"--rm\", \"--network\", \"host\", \"--gpus\", \"all\"]\n environment: {}\n\n vllm:\n command: \"vllm\"\n args: [\"serve\"]\n environment: {} # Environment variables for the backend process\n docker:\n enabled: false\n image: \"vllm/vllm-openai:latest\"\n args: [\"run\", \"--rm\", \"--network\", \"host\", \"--gpus\", \"all\", \"--shm-size\", \"1g\"]\n environment: {}\n\n mlx:\n command: \"mlx_lm.server\"\n args: []\n environment: {} # Environment variables for the backend process\n\ninstances:\n port_range: [8000, 9000] # Port range for instances\n data_dir: ~/.local/share/llamactl # Data directory (platform-specific, see below)\n configs_dir: ~/.local/share/llamactl/instances # Instance configs directory\n logs_dir: ~/.local/share/llamactl/logs # Logs directory\n auto_create_dirs: true # Auto-create data/config/logs dirs if missing\n max_instances: -1 # Max instances (-1 = unlimited)\n max_running_instances: -1 # Max running instances (-1 = unlimited)\n enable_lru_eviction: true # Enable LRU eviction for idle instances\n default_auto_restart: true # Auto-restart new instances by default\n default_max_restarts: 3 # Max restarts for new instances\n default_restart_delay: 5 # Restart delay (seconds) for new instances\n default_on_demand_start: true # Default on-demand start setting\n on_demand_start_timeout: 120 # Default on-demand start timeout in seconds\n timeout_check_interval: 5 # Idle instance timeout check in minutes\n\nauth:\n require_inference_auth: true # Require auth for inference endpoints\n inference_keys: [] # Keys for inference endpoints\n require_management_auth: true # Require auth for management endpoints\n management_keys: [] # Keys for management endpoints\n</code></pre>"},{"location":"getting-started/configuration/#configuration-files","title":"Configuration Files","text":""},{"location":"getting-started/configuration/#configuration-file-locations","title":"Configuration File Locations","text":"<p>Configuration files are searched in the following locations (in order of precedence):</p> <p>Linux: - <code>./llamactl.yaml</code> or <code>./config.yaml</code> (current directory) - <code>$HOME/.config/llamactl/config.yaml</code> - <code>/etc/llamactl/config.yaml</code> </p> <p>macOS: - <code>./llamactl.yaml</code> or <code>./config.yaml</code> (current directory) - <code>$HOME/Library/Application Support/llamactl/config.yaml</code> - <code>/Library/Application Support/llamactl/config.yaml</code> </p> <p>Windows: - <code>./llamactl.yaml</code> or <code>./config.yaml</code> (current directory) - <code>%APPDATA%\\llamactl\\config.yaml</code> - <code>%USERPROFILE%\\llamactl\\config.yaml</code> - 
<code>%PROGRAMDATA%\\llamactl\\config.yaml</code> </p> <p>You can specify the path to config file with <code>LLAMACTL_CONFIG_PATH</code> environment variable.</p>"},{"location":"getting-started/configuration/#configuration-options","title":"Configuration Options","text":""},{"location":"getting-started/configuration/#server-configuration","title":"Server Configuration","text":"<pre><code>server:\n host: \"0.0.0.0\" # Server host to bind to (default: \"0.0.0.0\")\n port: 8080 # Server port to bind to (default: 8080)\n allowed_origins: [\"*\"] # CORS allowed origins (default: [\"*\"])\n enable_swagger: false # Enable Swagger UI (default: false)\n</code></pre> <p>Environment Variables: - <code>LLAMACTL_HOST</code> - Server host - <code>LLAMACTL_PORT</code> - Server port - <code>LLAMACTL_ALLOWED_ORIGINS</code> - Comma-separated CORS origins - <code>LLAMACTL_ENABLE_SWAGGER</code> - Enable Swagger UI (true/false)</p>"},{"location":"getting-started/configuration/#backend-configuration","title":"Backend Configuration","text":"<pre><code>backends:\n llama-cpp:\n command: \"llama-server\"\n args: []\n environment: {} # Environment variables for the backend process\n docker:\n enabled: false # Enable Docker runtime (default: false)\n image: \"ghcr.io/ggml-org/llama.cpp:server\"\n args: [\"run\", \"--rm\", \"--network\", \"host\", \"--gpus\", \"all\"]\n environment: {}\n\n vllm:\n command: \"vllm\"\n args: [\"serve\"]\n environment: {} # Environment variables for the backend process\n docker:\n enabled: false\n image: \"vllm/vllm-openai:latest\"\n args: [\"run\", \"--rm\", \"--network\", \"host\", \"--gpus\", \"all\", \"--shm-size\", \"1g\"]\n environment: {}\n\n mlx:\n command: \"mlx_lm.server\"\n args: []\n environment: {} # Environment variables for the backend process\n # MLX does not support Docker\n</code></pre> <p>Backend Configuration Fields: - <code>command</code>: Executable name/path for the backend - <code>args</code>: Default arguments prepended to all instances - <code>environment</code>: Environment variables for the backend process (optional) - <code>docker</code>: Docker-specific configuration (optional) - <code>enabled</code>: Boolean flag to enable Docker runtime - <code>image</code>: Docker image to use - <code>args</code>: Additional arguments passed to <code>docker run</code> - <code>environment</code>: Environment variables for the container (optional)</p> <p>Environment Variables:</p> <p>LlamaCpp Backend: - <code>LLAMACTL_LLAMACPP_COMMAND</code> - LlamaCpp executable command - <code>LLAMACTL_LLAMACPP_ARGS</code> - Space-separated default arguments - <code>LLAMACTL_LLAMACPP_ENV</code> - Environment variables in format \"KEY1=value1,KEY2=value2\" - <code>LLAMACTL_LLAMACPP_DOCKER_ENABLED</code> - Enable Docker runtime (true/false) - <code>LLAMACTL_LLAMACPP_DOCKER_IMAGE</code> - Docker image to use - <code>LLAMACTL_LLAMACPP_DOCKER_ARGS</code> - Space-separated Docker arguments - <code>LLAMACTL_LLAMACPP_DOCKER_ENV</code> - Docker environment variables in format \"KEY1=value1,KEY2=value2\"</p> <p>VLLM Backend: - <code>LLAMACTL_VLLM_COMMAND</code> - VLLM executable command - <code>LLAMACTL_VLLM_ARGS</code> - Space-separated default arguments - <code>LLAMACTL_VLLM_ENV</code> - Environment variables in format \"KEY1=value1,KEY2=value2\" - <code>LLAMACTL_VLLM_DOCKER_ENABLED</code> - Enable Docker runtime (true/false) - <code>LLAMACTL_VLLM_DOCKER_IMAGE</code> - Docker image to use - <code>LLAMACTL_VLLM_DOCKER_ARGS</code> - Space-separated Docker arguments - 
<code>LLAMACTL_VLLM_DOCKER_ENV</code> - Docker environment variables in format \"KEY1=value1,KEY2=value2\"</p> <p>MLX Backend: - <code>LLAMACTL_MLX_COMMAND</code> - MLX executable command - <code>LLAMACTL_MLX_ARGS</code> - Space-separated default arguments - <code>LLAMACTL_MLX_ENV</code> - Environment variables in format \"KEY1=value1,KEY2=value2\"</p>"},{"location":"getting-started/configuration/#instance-configuration","title":"Instance Configuration","text":"<pre><code>instances:\n port_range: [8000, 9000] # Port range for instances (default: [8000, 9000])\n data_dir: \"~/.local/share/llamactl\" # Directory for all llamactl data (default varies by OS)\n configs_dir: \"~/.local/share/llamactl/instances\" # Directory for instance configs (default: data_dir/instances)\n logs_dir: \"~/.local/share/llamactl/logs\" # Directory for instance logs (default: data_dir/logs)\n auto_create_dirs: true # Automatically create data/config/logs directories (default: true)\n max_instances: -1 # Maximum instances (-1 = unlimited)\n max_running_instances: -1 # Maximum running instances (-1 = unlimited)\n enable_lru_eviction: true # Enable LRU eviction for idle instances\n default_auto_restart: true # Default auto-restart setting\n default_max_restarts: 3 # Default maximum restart attempts\n default_restart_delay: 5 # Default restart delay in seconds\n default_on_demand_start: true # Default on-demand start setting\n on_demand_start_timeout: 120 # Default on-demand start timeout in seconds\n timeout_check_interval: 5 # Default instance timeout check interval in minutes\n</code></pre> <p>Environment Variables: - <code>LLAMACTL_INSTANCE_PORT_RANGE</code> - Port range (format: \"8000-9000\" or \"8000,9000\") - <code>LLAMACTL_DATA_DIRECTORY</code> - Data directory path - <code>LLAMACTL_INSTANCES_DIR</code> - Instance configs directory path - <code>LLAMACTL_LOGS_DIR</code> - Log directory path - <code>LLAMACTL_AUTO_CREATE_DATA_DIR</code> - Auto-create data/config/logs directories (true/false) - <code>LLAMACTL_MAX_INSTANCES</code> - Maximum number of instances - <code>LLAMACTL_MAX_RUNNING_INSTANCES</code> - Maximum number of running instances - <code>LLAMACTL_ENABLE_LRU_EVICTION</code> - Enable LRU eviction for idle instances - <code>LLAMACTL_DEFAULT_AUTO_RESTART</code> - Default auto-restart setting (true/false) - <code>LLAMACTL_DEFAULT_MAX_RESTARTS</code> - Default maximum restarts - <code>LLAMACTL_DEFAULT_RESTART_DELAY</code> - Default restart delay in seconds - <code>LLAMACTL_DEFAULT_ON_DEMAND_START</code> - Default on-demand start setting (true/false) - <code>LLAMACTL_ON_DEMAND_START_TIMEOUT</code> - Default on-demand start timeout in seconds - <code>LLAMACTL_TIMEOUT_CHECK_INTERVAL</code> - Default instance timeout check interval in minutes </p>"},{"location":"getting-started/configuration/#authentication-configuration","title":"Authentication Configuration","text":"<pre><code>auth:\n require_inference_auth: true # Require API key for OpenAI endpoints (default: true)\n inference_keys: [] # List of valid inference API keys\n require_management_auth: true # Require API key for management endpoints (default: true)\n management_keys: [] # List of valid management API keys\n</code></pre> <p>Environment Variables: - <code>LLAMACTL_REQUIRE_INFERENCE_AUTH</code> - Require auth for OpenAI endpoints (true/false) - <code>LLAMACTL_INFERENCE_KEYS</code> - Comma-separated inference API keys - <code>LLAMACTL_REQUIRE_MANAGEMENT_AUTH</code> - Require auth for management endpoints (true/false) - 
<code>LLAMACTL_MANAGEMENT_KEYS</code> - Comma-separated management API keys </p>"},{"location":"getting-started/configuration/#command-line-options","title":"Command Line Options","text":"<p>View all available command line options:</p> <pre><code>llamactl --help\n</code></pre> <p>You can also override configuration using command line flags when starting llamactl.</p>"},{"location":"getting-started/installation/","title":"Installation","text":"<p>This guide will walk you through installing Llamactl on your system.</p>"},{"location":"getting-started/installation/#prerequisites","title":"Prerequisites","text":""},{"location":"getting-started/installation/#backend-dependencies","title":"Backend Dependencies","text":"<p>llamactl supports multiple backends. Install at least one:</p> <p>For llama.cpp backend (all platforms):</p> <p>You need <code>llama-server</code> from llama.cpp installed:</p> <pre><code># Homebrew (macOS/Linux)\nbrew install llama.cpp\n# Winget (Windows)\nwinget install llama.cpp\n</code></pre> <p>Or build from source - see llama.cpp docs</p> <p>For MLX backend (macOS only):</p> <p>MLX provides optimized inference on Apple Silicon. Install MLX-LM:</p> <pre><code># Install via pip (requires Python 3.8+)\npip install mlx-lm\n\n# Or in a virtual environment (recommended)\npython -m venv mlx-env\nsource mlx-env/bin/activate\npip install mlx-lm\n</code></pre> <p>Note: MLX backend is only available on macOS with Apple Silicon (M1, M2, M3, etc.)</p> <p>For vLLM backend:</p> <p>vLLM provides high-throughput distributed serving for LLMs. Install vLLM:</p> <pre><code># Install via pip (requires Python 3.8+, GPU required)\npip install vllm\n\n# Or in a virtual environment (recommended)\npython -m venv vllm-env\nsource vllm-env/bin/activate\npip install vllm\n\n# For production deployments, consider container-based installation\n</code></pre>"},{"location":"getting-started/installation/#installation-methods","title":"Installation Methods","text":""},{"location":"getting-started/installation/#option-1-download-binary-recommended","title":"Option 1: Download Binary (Recommended)","text":"<p>Download the latest release from the GitHub releases page:</p> <pre><code># Linux/macOS - Get latest version and download\nLATEST_VERSION=$(curl -s https://api.github.com/repos/lordmathis/llamactl/releases/latest | grep '\"tag_name\":' | sed -E 's/.*\"([^\"]+)\".*/\\1/')\ncurl -L https://github.com/lordmathis/llamactl/releases/download/${LATEST_VERSION}/llamactl-${LATEST_VERSION}-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m).tar.gz | tar -xz\nsudo mv llamactl /usr/local/bin/\n\n# Or download manually from:\n# https://github.com/lordmathis/llamactl/releases/latest\n\n# Windows - Download from releases page\n</code></pre>"},{"location":"getting-started/installation/#option-2-build-from-source","title":"Option 2: Build from Source","text":"<p>Requirements: - Go 1.24 or later - Node.js 22 or later - Git</p> <p>If you prefer to build from source:</p> <pre><code># Clone the repository\ngit clone https://github.com/lordmathis/llamactl.git\ncd llamactl\n\n# Build the web UI\ncd webui && npm ci && npm run build && cd ..\n\n# Build the application\ngo build -o llamactl ./cmd/server\n</code></pre>"},{"location":"getting-started/installation/#verification","title":"Verification","text":"<p>Verify your installation by checking the version:</p> <pre><code>llamactl --version\n</code></pre>"},{"location":"getting-started/installation/#next-steps","title":"Next Steps","text":"<p>Now that Llamactl is installed, 
continue to the Quick Start guide to get your first instance running!</p>"},{"location":"getting-started/quick-start/","title":"Quick Start","text":"<p>This guide will help you get Llamactl up and running in just a few minutes.</p>"},{"location":"getting-started/quick-start/#step-1-start-llamactl","title":"Step 1: Start Llamactl","text":"<p>Start the Llamactl server:</p> <pre><code>llamactl\n</code></pre> <p>By default, Llamactl will start on <code>http://localhost:8080</code>.</p>"},{"location":"getting-started/quick-start/#step-2-access-the-web-ui","title":"Step 2: Access the Web UI","text":"<p>Open your web browser and navigate to:</p> <pre><code>http://localhost:8080\n</code></pre> <p>Login with the management API key. By default it is generated during server startup. Copy it from the terminal output.</p> <p>You should see the Llamactl web interface.</p>"},{"location":"getting-started/quick-start/#step-3-create-your-first-instance","title":"Step 3: Create Your First Instance","text":"<ol> <li>Click the \"Add Instance\" button</li> <li>Fill in the instance configuration:</li> <li>Name: Give your instance a descriptive name</li> <li>Backend Type: Choose from llama.cpp, MLX, or vLLM</li> <li>Model: Model path or identifier for your chosen backend</li> <li> <p>Additional Options: Backend-specific parameters</p> </li> <li> <p>Click \"Create Instance\"</p> </li> </ol>"},{"location":"getting-started/quick-start/#step-4-start-your-instance","title":"Step 4: Start Your Instance","text":"<p>Once created, you can:</p> <ul> <li>Start the instance by clicking the start button</li> <li>Monitor its status in real-time</li> <li>View logs by clicking the logs button</li> <li>Stop the instance when needed</li> </ul>"},{"location":"getting-started/quick-start/#example-configurations","title":"Example Configurations","text":"<p>Here are basic example configurations for each backend:</p> <p>llama.cpp backend: <pre><code>{\n \"name\": \"llama2-7b\",\n \"backend_type\": \"llama_cpp\",\n \"backend_options\": {\n \"model\": \"/path/to/llama-2-7b-chat.gguf\",\n \"threads\": 4,\n \"ctx_size\": 2048,\n \"gpu_layers\": 32\n }\n}\n</code></pre></p> <p>MLX backend (macOS only): <pre><code>{\n \"name\": \"mistral-mlx\",\n \"backend_type\": \"mlx_lm\",\n \"backend_options\": {\n \"model\": \"mlx-community/Mistral-7B-Instruct-v0.3-4bit\",\n \"temp\": 0.7,\n \"max_tokens\": 2048\n }\n}\n</code></pre></p> <p>vLLM backend: <pre><code>{\n \"name\": \"dialogpt-vllm\",\n \"backend_type\": \"vllm\",\n \"backend_options\": {\n \"model\": \"microsoft/DialoGPT-medium\",\n \"tensor_parallel_size\": 2,\n \"gpu_memory_utilization\": 0.9\n }\n}\n</code></pre></p>"},{"location":"getting-started/quick-start/#docker-support","title":"Docker Support","text":"<p>Llamactl can run backends in Docker containers. To enable Docker for a backend, add a <code>docker</code> section to that backend in your YAML configuration file (e.g. 
<code>config.yaml</code>) as shown below:</p> <pre><code>backends:\n vllm:\n command: \"vllm\"\n args: [\"serve\"]\n docker:\n enabled: true\n image: \"vllm/vllm-openai:latest\"\n args: [\"run\", \"--rm\", \"--network\", \"host\", \"--gpus\", \"all\", \"--shm-size\", \"1g\"]\n</code></pre>"},{"location":"getting-started/quick-start/#using-the-api","title":"Using the API","text":"<p>You can also manage instances via the REST API:</p> <pre><code># List all instances\ncurl http://localhost:8080/api/instances\n\n# Create a new llama.cpp instance\ncurl -X POST http://localhost:8080/api/instances/my-model \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"backend_type\": \"llama_cpp\",\n \"backend_options\": {\n \"model\": \"/path/to/model.gguf\"\n }\n }'\n\n# Start an instance\ncurl -X POST http://localhost:8080/api/instances/my-model/start\n</code></pre>"},{"location":"getting-started/quick-start/#openai-compatible-api","title":"OpenAI Compatible API","text":"<p>Llamactl provides OpenAI-compatible endpoints, making it easy to integrate with existing OpenAI client libraries and tools.</p>"},{"location":"getting-started/quick-start/#chat-completions","title":"Chat Completions","text":"<p>Once you have an instance running, you can use it with the OpenAI-compatible chat completions endpoint:</p> <pre><code>curl -X POST http://localhost:8080/v1/chat/completions \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"my-model\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Hello! Can you help me write a Python function?\"\n }\n ],\n \"max_tokens\": 150,\n \"temperature\": 0.7\n }'\n</code></pre>"},{"location":"getting-started/quick-start/#using-with-python-openai-client","title":"Using with Python OpenAI Client","text":"<p>You can also use the official OpenAI Python client:</p> <pre><code>from openai import OpenAI\n\n# Point the client to your Llamactl server\nclient = OpenAI(\n base_url=\"http://localhost:8080/v1\",\n api_key=\"not-needed\" # Llamactl doesn't require API keys by default\n)\n\n# Create a chat completion\nresponse = client.chat.completions.create(\n model=\"my-model\", # Use the name of your instance\n messages=[\n {\"role\": \"user\", \"content\": \"Explain quantum computing in simple terms\"}\n ],\n max_tokens=200,\n temperature=0.7\n)\n\nprint(response.choices[0].message.content)\n</code></pre>"},{"location":"getting-started/quick-start/#list-available-models","title":"List Available Models","text":"<p>Get a list of running instances (models) in OpenAI-compatible format:</p> <pre><code>curl http://localhost:8080/v1/models\n</code></pre>"},{"location":"getting-started/quick-start/#next-steps","title":"Next Steps","text":"<ul> <li>Manage instances Managing Instances</li> <li>Explore the API Reference</li> <li>Configure advanced settings in the Configuration guide</li> </ul>"},{"location":"user-guide/api-reference/","title":"API Reference","text":"<p>Complete reference for the Llamactl REST API.</p>"},{"location":"user-guide/api-reference/#base-url","title":"Base URL","text":"<p>All API endpoints are relative to the base URL:</p> <pre><code>http://localhost:8080/api/v1\n</code></pre>"},{"location":"user-guide/api-reference/#authentication","title":"Authentication","text":"<p>Llamactl supports API key authentication. 
If authentication is enabled, include the API key in the Authorization header:</p> <pre><code>curl -H \"Authorization: Bearer <your-api-key>\" \\\n http://localhost:8080/api/v1/instances\n</code></pre> <p>The server supports two types of API keys: - Management API Keys: Required for instance management operations (CRUD operations on instances) - Inference API Keys: Required for OpenAI-compatible inference endpoints</p>"},{"location":"user-guide/api-reference/#system-endpoints","title":"System Endpoints","text":""},{"location":"user-guide/api-reference/#get-llamactl-version","title":"Get Llamactl Version","text":"<p>Get the version information of the llamactl server.</p> <pre><code>GET /api/v1/version\n</code></pre> <p>Response: <pre><code>Version: 1.0.0\nCommit: abc123\nBuild Time: 2024-01-15T10:00:00Z\n</code></pre></p>"},{"location":"user-guide/api-reference/#get-llama-server-help","title":"Get Llama Server Help","text":"<p>Get help text for the llama-server command.</p> <pre><code>GET /api/v1/server/help\n</code></pre> <p>Response: Plain text help output from <code>llama-server --help</code></p>"},{"location":"user-guide/api-reference/#get-llama-server-version","title":"Get Llama Server Version","text":"<p>Get version information of the llama-server binary.</p> <pre><code>GET /api/v1/server/version\n</code></pre> <p>Response: Plain text version output from <code>llama-server --version</code></p>"},{"location":"user-guide/api-reference/#list-available-devices","title":"List Available Devices","text":"<p>List available devices for llama-server.</p> <pre><code>GET /api/v1/server/devices\n</code></pre> <p>Response: Plain text device list from <code>llama-server --list-devices</code></p>"},{"location":"user-guide/api-reference/#instances","title":"Instances","text":""},{"location":"user-guide/api-reference/#list-all-instances","title":"List All Instances","text":"<p>Get a list of all instances.</p> <pre><code>GET /api/v1/instances\n</code></pre> <p>Response: <pre><code>[\n {\n \"name\": \"llama2-7b\",\n \"status\": \"running\",\n \"created\": 1705312200\n }\n]\n</code></pre></p>"},{"location":"user-guide/api-reference/#get-instance-details","title":"Get Instance Details","text":"<p>Get detailed information about a specific instance.</p> <pre><code>GET /api/v1/instances/{name}\n</code></pre> <p>Response: <pre><code>{\n \"name\": \"llama2-7b\",\n \"status\": \"running\",\n \"created\": 1705312200\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#create-instance","title":"Create Instance","text":"<p>Create and start a new instance.</p> <pre><code>POST /api/v1/instances/{name}\n</code></pre> <p>Request Body: JSON object with instance configuration. 
Common fields include:</p> <ul> <li><code>backend_type</code>: Backend type (<code>llama_cpp</code>, <code>mlx_lm</code>, or <code>vllm</code>)</li> <li><code>backend_options</code>: Backend-specific configuration</li> <li><code>auto_restart</code>: Enable automatic restart on failure</li> <li><code>max_restarts</code>: Maximum restart attempts</li> <li><code>restart_delay</code>: Delay between restarts in seconds</li> <li><code>on_demand_start</code>: Start instance when receiving requests</li> <li><code>idle_timeout</code>: Idle timeout in minutes</li> <li><code>environment</code>: Environment variables as key-value pairs</li> </ul> <p>See Managing Instances for complete configuration options.</p> <p>Response: <pre><code>{\n \"name\": \"llama2-7b\",\n \"status\": \"running\",\n \"created\": 1705312200\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#update-instance","title":"Update Instance","text":"<p>Update an existing instance configuration. See Managing Instances for available configuration options.</p> <pre><code>PUT /api/v1/instances/{name}\n</code></pre> <p>Request Body: JSON object with configuration fields to update.</p> <p>Response: <pre><code>{\n \"name\": \"llama2-7b\",\n \"status\": \"running\",\n \"created\": 1705312200\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#delete-instance","title":"Delete Instance","text":"<p>Stop and remove an instance.</p> <pre><code>DELETE /api/v1/instances/{name}\n</code></pre> <p>Response: <code>204 No Content</code></p>"},{"location":"user-guide/api-reference/#instance-operations","title":"Instance Operations","text":""},{"location":"user-guide/api-reference/#start-instance","title":"Start Instance","text":"<p>Start a stopped instance.</p> <pre><code>POST /api/v1/instances/{name}/start\n</code></pre> <p>Response: <pre><code>{\n \"name\": \"llama2-7b\",\n \"status\": \"running\",\n \"created\": 1705312200\n}\n</code></pre></p> <p>Error Responses: - <code>409 Conflict</code>: Maximum number of running instances reached - <code>500 Internal Server Error</code>: Failed to start instance</p>"},{"location":"user-guide/api-reference/#stop-instance","title":"Stop Instance","text":"<p>Stop a running instance.</p> <pre><code>POST /api/v1/instances/{name}/stop\n</code></pre> <p>Response: <pre><code>{\n \"name\": \"llama2-7b\",\n \"status\": \"stopped\",\n \"created\": 1705312200\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#restart-instance","title":"Restart Instance","text":"<p>Restart an instance (stop then start).</p> <pre><code>POST /api/v1/instances/{name}/restart\n</code></pre> <p>Response: <pre><code>{\n \"name\": \"llama2-7b\",\n \"status\": \"running\",\n \"created\": 1705312200\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#get-instance-logs","title":"Get Instance Logs","text":"<p>Retrieve instance logs.</p> <pre><code>GET /api/v1/instances/{name}/logs\n</code></pre> <p>Query Parameters: - <code>lines</code>: Number of lines to return (default: all lines, use -1 for all)</p> <p>Response: Plain text log output</p> <p>Example: <pre><code>curl \"http://localhost:8080/api/v1/instances/my-instance/logs?lines=100\"\n</code></pre></p>"},{"location":"user-guide/api-reference/#proxy-to-instance","title":"Proxy to Instance","text":"<p>Proxy HTTP requests directly to the llama-server instance.</p> <pre><code>GET /api/v1/instances/{name}/proxy/*\nPOST /api/v1/instances/{name}/proxy/*\n</code></pre> <p>This endpoint forwards all requests to the underlying llama-server instance running on its 
configured port. The proxy strips the <code>/api/v1/instances/{name}/proxy</code> prefix and forwards the remaining path to the instance.</p> <p>Example - Check Instance Health: <pre><code>curl -H \"Authorization: Bearer your-api-key\" \\\n http://localhost:8080/api/v1/instances/my-model/proxy/health\n</code></pre></p> <p>This forwards the request to <code>http://instance-host:instance-port/health</code> on the actual llama-server instance.</p> <p>Error Responses: - <code>503 Service Unavailable</code>: Instance is not running</p>"},{"location":"user-guide/api-reference/#openai-compatible-api","title":"OpenAI-Compatible API","text":"<p>Llamactl provides OpenAI-compatible endpoints for inference operations.</p>"},{"location":"user-guide/api-reference/#list-models","title":"List Models","text":"<p>List all instances in OpenAI-compatible format.</p> <pre><code>GET /v1/models\n</code></pre> <p>Response: <pre><code>{\n \"object\": \"list\",\n \"data\": [\n {\n \"id\": \"llama2-7b\",\n \"object\": \"model\",\n \"created\": 1705312200,\n \"owned_by\": \"llamactl\"\n }\n ]\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#chat-completions-completions-embeddings","title":"Chat Completions, Completions, Embeddings","text":"<p>All OpenAI-compatible inference endpoints are available:</p> <pre><code>POST /v1/chat/completions\nPOST /v1/completions\nPOST /v1/embeddings\nPOST /v1/rerank\nPOST /v1/reranking\n</code></pre> <p>Request Body: Standard OpenAI format with <code>model</code> field specifying the instance name</p> <p>Example: <pre><code>{\n \"model\": \"llama2-7b\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"Hello, how are you?\"\n }\n ]\n}\n</code></pre></p> <p>The server routes requests to the appropriate instance based on the <code>model</code> field in the request body. Instances with on-demand starting enabled will be automatically started if not running. 
For configuration details, see Managing Instances.</p> <p>Error Responses: - <code>400 Bad Request</code>: Invalid request body or missing instance name - <code>503 Service Unavailable</code>: Instance is not running and on-demand start is disabled - <code>409 Conflict</code>: Cannot start instance due to maximum instances limit</p>"},{"location":"user-guide/api-reference/#instance-status-values","title":"Instance Status Values","text":"<p>Instances can have the following status values: - <code>stopped</code>: Instance is not running - <code>running</code>: Instance is running and ready to accept requests - <code>failed</code>: Instance failed to start or crashed </p>"},{"location":"user-guide/api-reference/#error-responses","title":"Error Responses","text":"<p>All endpoints may return error responses in the following format:</p> <pre><code>{\n \"error\": \"Error message description\"\n}\n</code></pre>"},{"location":"user-guide/api-reference/#common-http-status-codes","title":"Common HTTP Status Codes","text":"<ul> <li><code>200</code>: Success</li> <li><code>201</code>: Created</li> <li><code>204</code>: No Content (successful deletion)</li> <li><code>400</code>: Bad Request (invalid parameters or request body)</li> <li><code>401</code>: Unauthorized (missing or invalid API key)</li> <li><code>403</code>: Forbidden (insufficient permissions)</li> <li><code>404</code>: Not Found (instance not found)</li> <li><code>409</code>: Conflict (instance already exists, max instances reached)</li> <li><code>500</code>: Internal Server Error</li> <li><code>503</code>: Service Unavailable (instance not running)</li> </ul>"},{"location":"user-guide/api-reference/#examples","title":"Examples","text":""},{"location":"user-guide/api-reference/#complete-instance-lifecycle","title":"Complete Instance Lifecycle","text":"<pre><code># Create and start instance\ncurl -X POST http://localhost:8080/api/v1/instances/my-model \\\n -H \"Content-Type: application/json\" \\\n -H \"Authorization: Bearer your-api-key\" \\\n -d '{\n \"backend_type\": \"llama_cpp\",\n \"backend_options\": {\n \"model\": \"/models/llama-2-7b.gguf\",\n \"gpu_layers\": 32\n },\n \"environment\": {\n \"CUDA_VISIBLE_DEVICES\": \"0\",\n \"OMP_NUM_THREADS\": \"8\"\n }\n }'\n\n# Check instance status\ncurl -H \"Authorization: Bearer your-api-key\" \\\n http://localhost:8080/api/v1/instances/my-model\n\n# Get instance logs\ncurl -H \"Authorization: Bearer your-api-key\" \\\n \"http://localhost:8080/api/v1/instances/my-model/logs?lines=50\"\n\n# Use OpenAI-compatible chat completions\ncurl -X POST http://localhost:8080/v1/chat/completions \\\n -H \"Content-Type: application/json\" \\\n -H \"Authorization: Bearer your-inference-api-key\" \\\n -d '{\n \"model\": \"my-model\",\n \"messages\": [\n {\"role\": \"user\", \"content\": \"Hello!\"}\n ],\n \"max_tokens\": 100\n }'\n\n# Stop instance\ncurl -X POST -H \"Authorization: Bearer your-api-key\" \\\n http://localhost:8080/api/v1/instances/my-model/stop\n\n# Delete instance\ncurl -X DELETE -H \"Authorization: Bearer your-api-key\" \\\n http://localhost:8080/api/v1/instances/my-model\n</code></pre>"},{"location":"user-guide/api-reference/#using-the-proxy-endpoint","title":"Using the Proxy Endpoint","text":"<p>You can also directly proxy requests to the llama-server instance:</p> <pre><code># Direct proxy to instance (bypasses OpenAI compatibility layer)\ncurl -X POST http://localhost:8080/api/v1/instances/my-model/proxy/completion \\\n -H \"Content-Type: application/json\" \\\n -H \"Authorization: 
Bearer your-api-key\" \\\n -d '{\n \"prompt\": \"Hello, world!\",\n \"n_predict\": 50\n }'\n</code></pre>"},{"location":"user-guide/api-reference/#backend-specific-endpoints","title":"Backend-Specific Endpoints","text":""},{"location":"user-guide/api-reference/#parse-commands","title":"Parse Commands","text":"<p>Llamactl provides endpoints to parse command strings from different backends into instance configuration options.</p>"},{"location":"user-guide/api-reference/#parse-llamacpp-command","title":"Parse Llama.cpp Command","text":"<p>Parse a llama-server command string into instance options.</p> <pre><code>POST /api/v1/backends/llama-cpp/parse-command\n</code></pre> <p>Request Body: <pre><code>{\n \"command\": \"llama-server -m /path/to/model.gguf -c 2048 --port 8080\"\n}\n</code></pre></p> <p>Response: <pre><code>{\n \"backend_type\": \"llama_cpp\",\n \"llama_server_options\": {\n \"model\": \"/path/to/model.gguf\",\n \"ctx_size\": 2048,\n \"port\": 8080\n }\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#parse-mlx-lm-command","title":"Parse MLX-LM Command","text":"<p>Parse an MLX-LM server command string into instance options.</p> <pre><code>POST /api/v1/backends/mlx/parse-command\n</code></pre> <p>Request Body: <pre><code>{\n \"command\": \"mlx_lm.server --model /path/to/model --port 8080\"\n}\n</code></pre></p> <p>Response: <pre><code>{\n \"backend_type\": \"mlx_lm\",\n \"mlx_server_options\": {\n \"model\": \"/path/to/model\",\n \"port\": 8080\n }\n}\n</code></pre></p>"},{"location":"user-guide/api-reference/#parse-vllm-command","title":"Parse vLLM Command","text":"<p>Parse a vLLM serve command string into instance options.</p> <pre><code>POST /api/v1/backends/vllm/parse-command\n</code></pre> <p>Request Body: <pre><code>{\n \"command\": \"vllm serve /path/to/model --port 8080\"\n}\n</code></pre></p> <p>Response: <pre><code>{\n \"backend_type\": \"vllm\",\n \"vllm_server_options\": {\n \"model\": \"/path/to/model\",\n \"port\": 8080\n }\n}\n</code></pre></p> <p>Error Responses for Parse Commands: - <code>400 Bad Request</code>: Invalid request body, empty command, or parse error - <code>500 Internal Server Error</code>: Encoding error</p>"},{"location":"user-guide/api-reference/#auto-generated-documentation","title":"Auto-Generated Documentation","text":"<p>The API documentation is automatically generated from code annotations using Swagger/OpenAPI. 
To regenerate the documentation:</p> <ol> <li>Install the swag tool: <code>go install github.com/swaggo/swag/cmd/swag@latest</code></li> <li>Generate docs: <code>swag init -g cmd/server/main.go -o apidocs</code></li> </ol>"},{"location":"user-guide/api-reference/#swagger-documentation","title":"Swagger Documentation","text":"<p>If swagger documentation is enabled in the server configuration, you can access the interactive API documentation at:</p> <pre><code>http://localhost:8080/swagger/\n</code></pre> <p>This provides a complete interactive interface for testing all API endpoints.</p>"},{"location":"user-guide/managing-instances/","title":"Managing Instances","text":"<p>Learn how to effectively manage your llama.cpp, MLX, and vLLM instances with Llamactl through both the Web UI and API.</p>"},{"location":"user-guide/managing-instances/#overview","title":"Overview","text":"<p>Llamactl provides two ways to manage instances:</p> <ul> <li>Web UI: Accessible at <code>http://localhost:8080</code> with an intuitive dashboard</li> <li>REST API: Programmatic access for automation and integration</li> </ul> <p></p>"},{"location":"user-guide/managing-instances/#authentication","title":"Authentication","text":"<p>If authentication is enabled: 1. Navigate to the web UI 2. Enter your credentials 3. Bearer token is stored for the session</p>"},{"location":"user-guide/managing-instances/#theme-support","title":"Theme Support","text":"<ul> <li>Switch between light and dark themes</li> <li>Setting is remembered across sessions</li> </ul>"},{"location":"user-guide/managing-instances/#instance-cards","title":"Instance Cards","text":"<p>Each instance is displayed as a card showing:</p> <ul> <li>Instance name</li> <li>Health status badge (unknown, ready, error, failed)</li> <li>Action buttons (start, stop, edit, logs, delete)</li> </ul>"},{"location":"user-guide/managing-instances/#create-instance","title":"Create Instance","text":""},{"location":"user-guide/managing-instances/#via-web-ui","title":"Via Web UI","text":"<ol> <li>Click the \"Create Instance\" button on the dashboard</li> <li>Enter a unique Name for your instance (only required field)</li> <li>Choose Backend Type:<ul> <li>llama.cpp: For GGUF models using llama-server</li> <li>MLX: For MLX-optimized models (macOS only)</li> <li>vLLM: For distributed serving and high-throughput inference</li> </ul> </li> <li>Configure model source:<ul> <li>For llama.cpp: GGUF model path or HuggingFace repo</li> <li>For MLX: MLX model path or identifier (e.g., <code>mlx-community/Mistral-7B-Instruct-v0.3-4bit</code>)</li> <li>For vLLM: HuggingFace model identifier (e.g., <code>microsoft/DialoGPT-medium</code>)</li> </ul> </li> <li>Configure optional instance management settings:<ul> <li>Auto Restart: Automatically restart instance on failure</li> <li>Max Restarts: Maximum number of restart attempts</li> <li>Restart Delay: Delay in seconds between restart attempts</li> <li>On Demand Start: Start instance when receiving a request to the OpenAI compatible endpoint</li> <li>Idle Timeout: Minutes before stopping idle instance (set to 0 to disable)</li> <li>Environment Variables: Set custom environment variables for the instance process</li> </ul> </li> <li>Configure backend-specific options:<ul> <li>llama.cpp: Threads, context size, GPU layers, port, etc.</li> <li>MLX: Temperature, top-p, adapter path, Python environment, etc.</li> <li>vLLM: Tensor parallel size, GPU memory utilization, quantization, etc.</li> </ul> </li> <li>Click \"Create\" to save the instance </li> 
</ol>"},{"location":"user-guide/managing-instances/#via-api","title":"Via API","text":"<pre><code># Create llama.cpp instance with local model file\ncurl -X POST http://localhost:8080/api/instances/my-llama-instance \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"backend_type\": \"llama_cpp\",\n \"backend_options\": {\n \"model\": \"/path/to/model.gguf\",\n \"threads\": 8,\n \"ctx_size\": 4096,\n \"gpu_layers\": 32\n }\n }'\n\n# Create MLX instance (macOS only)\ncurl -X POST http://localhost:8080/api/instances/my-mlx-instance \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"backend_type\": \"mlx_lm\",\n \"backend_options\": {\n \"model\": \"mlx-community/Mistral-7B-Instruct-v0.3-4bit\",\n \"temp\": 0.7,\n \"top_p\": 0.9,\n \"max_tokens\": 2048\n },\n \"auto_restart\": true,\n \"max_restarts\": 3\n }'\n\n# Create vLLM instance\ncurl -X POST http://localhost:8080/api/instances/my-vllm-instance \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"backend_type\": \"vllm\",\n \"backend_options\": {\n \"model\": \"microsoft/DialoGPT-medium\",\n \"tensor_parallel_size\": 2,\n \"gpu_memory_utilization\": 0.9\n },\n \"auto_restart\": true,\n \"on_demand_start\": true,\n \"environment\": {\n \"CUDA_VISIBLE_DEVICES\": \"0,1\",\n \"NCCL_DEBUG\": \"INFO\",\n \"PYTHONPATH\": \"/custom/path\"\n }\n }'\n\n# Create llama.cpp instance with HuggingFace model\ncurl -X POST http://localhost:8080/api/instances/gemma-3-27b \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"backend_type\": \"llama_cpp\",\n \"backend_options\": {\n \"hf_repo\": \"unsloth/gemma-3-27b-it-GGUF\",\n \"hf_file\": \"gemma-3-27b-it-GGUF.gguf\",\n \"gpu_layers\": 32\n }\n }'\n</code></pre>"},{"location":"user-guide/managing-instances/#start-instance","title":"Start Instance","text":""},{"location":"user-guide/managing-instances/#via-web-ui_1","title":"Via Web UI","text":"<ol> <li>Click the \"Start\" button on an instance card</li> <li>Watch the status change to \"Unknown\"</li> <li>Monitor progress in the logs</li> <li>Instance status changes to \"Ready\" when ready</li> </ol>"},{"location":"user-guide/managing-instances/#via-api_1","title":"Via API","text":"<pre><code>curl -X POST http://localhost:8080/api/instances/{name}/start\n</code></pre>"},{"location":"user-guide/managing-instances/#stop-instance","title":"Stop Instance","text":""},{"location":"user-guide/managing-instances/#via-web-ui_2","title":"Via Web UI","text":"<ol> <li>Click the \"Stop\" button on an instance card</li> <li>Instance gracefully shuts down</li> </ol>"},{"location":"user-guide/managing-instances/#via-api_2","title":"Via API","text":"<pre><code>curl -X POST http://localhost:8080/api/instances/{name}/stop\n</code></pre>"},{"location":"user-guide/managing-instances/#edit-instance","title":"Edit Instance","text":""},{"location":"user-guide/managing-instances/#via-web-ui_3","title":"Via Web UI","text":"<ol> <li>Click the \"Edit\" button on an instance card</li> <li>Modify settings in the configuration dialog</li> <li>Changes require instance restart to take effect</li> <li>Click \"Update & Restart\" to apply changes</li> </ol>"},{"location":"user-guide/managing-instances/#via-api_3","title":"Via API","text":"<p>Modify instance settings:</p> <pre><code>curl -X PUT http://localhost:8080/api/instances/{name} \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"backend_options\": {\n \"threads\": 8,\n \"context_size\": 4096\n }\n }'\n</code></pre> <p>Note</p> <p>Configuration changes require restarting the instance to take 
effect.</p>"},{"location":"user-guide/managing-instances/#view-logs","title":"View Logs","text":""},{"location":"user-guide/managing-instances/#via-web-ui_4","title":"Via Web UI","text":"<ol> <li>Click the \"Logs\" button on any instance card</li> <li>Real-time log viewer opens</li> </ol>"},{"location":"user-guide/managing-instances/#via-api_4","title":"Via API","text":"<p>Check instance status in real-time:</p> <pre><code># Get instance details\ncurl http://localhost:8080/api/instances/{name}/logs\n</code></pre>"},{"location":"user-guide/managing-instances/#delete-instance","title":"Delete Instance","text":""},{"location":"user-guide/managing-instances/#via-web-ui_5","title":"Via Web UI","text":"<ol> <li>Click the \"Delete\" button on an instance card</li> <li>Only stopped instances can be deleted</li> <li>Confirm deletion in the dialog</li> </ol>"},{"location":"user-guide/managing-instances/#via-api_5","title":"Via API","text":"<pre><code>curl -X DELETE http://localhost:8080/api/instances/{name}\n</code></pre>"},{"location":"user-guide/managing-instances/#instance-proxy","title":"Instance Proxy","text":"<p>Llamactl proxies all requests to the underlying backend instances (llama-server, MLX, or vLLM).</p> <pre><code># Get instance details\ncurl http://localhost:8080/api/instances/{name}/proxy/\n</code></pre> <p>All backends provide OpenAI-compatible endpoints. Check the respective documentation: - llama-server docs - MLX-LM docs - vLLM docs</p>"},{"location":"user-guide/managing-instances/#instance-health","title":"Instance Health","text":""},{"location":"user-guide/managing-instances/#via-web-ui_6","title":"Via Web UI","text":"<ol> <li>The health status badge is displayed on each instance card</li> </ol>"},{"location":"user-guide/managing-instances/#via-api_6","title":"Via API","text":"<p>Check the health status of your instances:</p> <pre><code>curl http://localhost:8080/api/instances/{name}/proxy/health\n</code></pre>"},{"location":"user-guide/troubleshooting/","title":"Troubleshooting","text":"<p>Issues specific to Llamactl deployment and operation.</p>"},{"location":"user-guide/troubleshooting/#configuration-issues","title":"Configuration Issues","text":""},{"location":"user-guide/troubleshooting/#invalid-configuration","title":"Invalid Configuration","text":"<p>Problem: Invalid configuration preventing startup</p> <p>Solutions: 1. Use minimal configuration: <pre><code>server:\n host: \"0.0.0.0\"\n port: 8080\ninstances:\n port_range: [8000, 9000]\n</code></pre></p> <ol> <li>Check data directory permissions: <pre><code># Ensure data directory is writable (default: ~/.local/share/llamactl)\nmkdir -p ~/.local/share/llamactl/{instances,logs}\n</code></pre></li> </ol>"},{"location":"user-guide/troubleshooting/#instance-management-issues","title":"Instance Management Issues","text":""},{"location":"user-guide/troubleshooting/#model-loading-failures","title":"Model Loading Failures","text":"<p>Problem: Instance fails to start with model loading errors</p> <p>Common Solutions: - llama-server not found: Ensure <code>llama-server</code> binary is in PATH - Wrong model format: Ensure model is in GGUF format - Insufficient memory: Use smaller model or reduce context size - Path issues: Use absolute paths to model files </p>"},{"location":"user-guide/troubleshooting/#memory-issues","title":"Memory Issues","text":"<p>Problem: Out of memory errors or system becomes unresponsive</p> <p>Solutions: 1. 
Reduce context size: <pre><code>{\n \"n_ctx\": 1024\n}\n</code></pre></p> <ol> <li>Use quantized models: </li> <li>Try Q4_K_M instead of higher precision models </li> <li>Use smaller model variants (7B instead of 13B) </li> </ol>"},{"location":"user-guide/troubleshooting/#gpu-configuration","title":"GPU Configuration","text":"<p>Problem: GPU not being used effectively</p> <p>Solutions: 1. Configure GPU layers: <pre><code>{\n \"n_gpu_layers\": 35\n}\n</code></pre></p>"},{"location":"user-guide/troubleshooting/#advanced-instance-issues","title":"Advanced Instance Issues","text":"<p>Problem: Complex model loading, performance, or compatibility issues</p> <p>Since llamactl uses <code>llama-server</code> under the hood, many instance-related issues are actually llama.cpp issues. For advanced troubleshooting:</p> <p>Resources: - llama.cpp Documentation: https://github.com/ggml/llama.cpp - llama.cpp Issues: https://github.com/ggml/llama.cpp/issues - llama.cpp Discussions: https://github.com/ggml/llama.cpp/discussions </p> <p>Testing directly with llama-server: <pre><code># Test your model and parameters directly with llama-server\nllama-server --model /path/to/model.gguf --port 8081 --n-gpu-layers 35\n</code></pre></p> <p>This helps determine if the issue is with llamactl or with the underlying llama.cpp/llama-server.</p>"},{"location":"user-guide/troubleshooting/#api-and-network-issues","title":"API and Network Issues","text":""},{"location":"user-guide/troubleshooting/#cors-errors","title":"CORS Errors","text":"<p>Problem: Web UI shows CORS errors in browser console</p> <p>Solutions: 1. Configure allowed origins: <pre><code>server:\n allowed_origins:\n - \"http://localhost:3000\"\n - \"https://yourdomain.com\"\n</code></pre></p>"},{"location":"user-guide/troubleshooting/#authentication-issues","title":"Authentication Issues","text":"<p>Problem: API requests failing with authentication errors</p> <p>Solutions: 1. Disable authentication temporarily: <pre><code>auth:\n require_management_auth: false\n require_inference_auth: false\n</code></pre></p> <ol> <li> <p>Configure API keys: <pre><code>auth:\n management_keys:\n - \"your-management-key\"\n inference_keys:\n - \"your-inference-key\"\n</code></pre></p> </li> <li> <p>Use correct Authorization header: <pre><code>curl -H \"Authorization: Bearer your-api-key\" \\\n http://localhost:8080/api/v1/instances\n</code></pre></p> </li> </ol>"},{"location":"user-guide/troubleshooting/#debugging-and-logs","title":"Debugging and Logs","text":""},{"location":"user-guide/troubleshooting/#viewing-instance-logs","title":"Viewing Instance Logs","text":"<pre><code># Get instance logs via API\ncurl http://localhost:8080/api/v1/instances/{name}/logs\n\n# Or check log files directly\ntail -f ~/.local/share/llamactl/logs/{instance-name}.log\n</code></pre>"},{"location":"user-guide/troubleshooting/#enable-debug-logging","title":"Enable Debug Logging","text":"<pre><code>export LLAMACTL_LOG_LEVEL=debug\nllamactl\n</code></pre>"},{"location":"user-guide/troubleshooting/#getting-help","title":"Getting Help","text":"<p>When reporting issues, include:</p> <ol> <li> <p>System information: <pre><code>llamactl --version\n</code></pre></p> </li> <li> <p>Configuration file (remove sensitive keys)</p> </li> <li> <p>Relevant log output</p> </li> <li> <p>Steps to reproduce the issue</p> </li> </ol>"}]} |