diff --git a/docs/images/create_instance.png b/docs/images/create_instance.png
new file mode 100644
index 0000000..c1ce856
Binary files /dev/null and b/docs/images/create_instance.png differ
diff --git a/docs/index.md b/docs/index.md
index 8dc6b1c..d3e7bb9 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -2,13 +2,13 @@ Welcome to the Llamactl documentation!
 
 **Management server and proxy for multiple llama.cpp instances with OpenAI-compatible API routing.**
 
-![Dashboard Screenshot](images/screenshot.png)
+![Dashboard Screenshot](images/dashboard.png)
 
 ## What is Llamactl?
 
 Llamactl is designed to simplify the deployment and management of llama-server instances. It provides a modern solution for running multiple large language models with centralized management.
 
-## Why llamactl?
+## Features
 
 🚀 **Multiple Model Serving**: Run different models simultaneously (7B for speed, 70B for quality)
 🔗 **OpenAI API Compatible**: Drop-in replacement - route requests by model name
@@ -19,19 +19,6 @@ Llamactl is designed to simplify the deployment and management of llama-server i
 💡 **On-Demand Instance Start**: Automatically launch instances upon receiving OpenAI-compatible API requests
 💾 **State Persistence**: Ensure instances remain intact across server restarts
 
-**Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
-**Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
-**Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management
-
-## Key Features
-
-- 🚀 **Easy Setup**: Quick installation and configuration
-- 🌐 **Web Interface**: Intuitive web UI for model management
-- 🔧 **REST API**: Full API access for automation
-- 📊 **Monitoring**: Real-time health and status monitoring
-- 🔒 **Security**: Authentication and access control
-- 📱 **Responsive**: Works on desktop and mobile devices
-
 ## Quick Links
 
 - [Installation Guide](getting-started/installation.md) - Get Llamactl up and running
diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md
index 0ee2171..90e4552 100644
--- a/docs/user-guide/managing-instances.md
+++ b/docs/user-guide/managing-instances.md
@@ -9,6 +9,8 @@ Llamactl provides two ways to manage instances:
 - **Web UI**: Accessible at `http://localhost:8080` with an intuitive dashboard
 - **REST API**: Programmatic access for automation and integration
 
+![Dashboard Screenshot](../images/dashboard.png)
+
 ### Authentication
 
 If authentication is enabled:
@@ -33,7 +35,9 @@ Each instance is displayed as a card showing:
 
 ### Via Web UI
 
-1. Click the **"Add Instance"** button on the dashboard
+![Create Instance Screenshot](../images/create_instance.png)
+
+1. Click the **"Create Instance"** button on the dashboard
 2. Enter a unique **Name** for your instance (only required field)
 3. Configure model source (choose one):
    - **Model Path**: Full path to your downloaded GGUF model file
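
The `managing-instances.md` changes above describe creating an instance through the Web UI and mention the REST API only in passing. The sketch below shows what the equivalent API call might look like; the `/api/v1/instances/{name}` endpoint, the payload fields, and the `Authorization` header are assumptions for illustration and are not confirmed by this diff.

```python
# Hypothetical sketch: create a llamactl instance via the REST API.
# Endpoint path, payload fields, and auth header are assumptions,
# not taken from the documentation changed in this diff.
import json
import urllib.request

BASE_URL = "http://localhost:8080"       # server address from the docs
API_KEY = "your-management-api-key"      # only needed if authentication is enabled

payload = {
    "model": "/path/to/model.gguf",      # mirrors the "Model Path" field in the Web UI form
}

req = urllib.request.Request(
    url=f"{BASE_URL}/api/v1/instances/my-instance",  # instance name in the URL (assumed)
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```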