Mirror of https://github.com/lordmathis/llamactl.git (synced 2025-11-05 16:44:22 +00:00)

Commit: Add create instance screenshot and update managing instances documentation
docs/images/create_instance.png — new binary file, not shown (size: 69 KiB)
@@ -2,13 +2,13 @@
Welcome to the Llamactl documentation! **Management server and proxy for multiple llama.cpp instances with OpenAI-compatible API routing.**

## What is Llamactl?
Llamactl is designed to simplify the deployment and management of llama-server instances. It provides a modern solution for running multiple large language models with centralized management.
-## Why llamactl?
+## Features
🚀 **Multiple Model Serving**: Run different models simultaneously (7B for speed, 70B for quality)
🔗 **OpenAI API Compatible**: Drop-in replacement - route requests by model name
@@ -19,19 +19,6 @@ Llamactl is designed to simplify the deployment and management of llama-server i
💡 **On-Demand Instance Start**: Automatically launch instances upon receiving OpenAI-compatible API requests
💾 **State Persistence**: Ensure instances remain intact across server restarts
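The model-name routing these features describe can be sketched as a small lookup: an OpenAI-compatible request carries a `model` field, and the proxy forwards it to whichever instance serves that model. This is only an illustrative sketch — the instance names, ports, and mapping below are assumptions, not llamactl's actual configuration or code.

```python
# Hypothetical sketch of OpenAI-compatible routing by model name:
# each model name maps to the llama-server instance that serves it.
# Names and ports are made up for illustration.
INSTANCES = {
    "llama-7b": "http://localhost:8081",   # small, fast instance
    "llama-70b": "http://localhost:8082",  # larger, higher-quality instance
}

def route(request: dict) -> str:
    """Pick the backend URL for an OpenAI-style chat completion request."""
    model = request.get("model")
    if model not in INSTANCES:
        raise ValueError(f"unknown model: {model!r}")
    return INSTANCES[model] + "/v1/chat/completions"

url = route({"model": "llama-7b", "messages": [{"role": "user", "content": "Hi"}]})
print(url)  # http://localhost:8081/v1/chat/completions
```

Because routing keys on the request's `model` field, existing OpenAI client code only needs its base URL pointed at the proxy — which is what "drop-in replacement" means here.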
-**Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
-**Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
-**Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management
-
-## Key Features
-
-- 🚀 **Easy Setup**: Quick installation and configuration
-- 🌐 **Web Interface**: Intuitive web UI for model management
-- 🔧 **REST API**: Full API access for automation
-- 📊 **Monitoring**: Real-time health and status monitoring
-- 🔒 **Security**: Authentication and access control
-- 📱 **Responsive**: Works on desktop and mobile devices
-
## Quick Links
- [Installation Guide](getting-started/installation.md) - Get Llamactl up and running
@@ -9,6 +9,8 @@ Llamactl provides two ways to manage instances:
- **Web UI**: Accessible at `http://localhost:8080` with an intuitive dashboard
- **REST API**: Programmatic access for automation and integration
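As a rough illustration of the REST API bullet above, a create-instance call could be prepared like this. The endpoint path and payload fields below are assumptions for the sketch, not llamactl's documented API — check the actual API reference before use.

```python
import json

# Hypothetical base URL, matching the Web UI address mentioned above.
API_BASE = "http://localhost:8080/api/v1"

def create_instance_request(name: str, model_path: str) -> tuple[str, bytes]:
    """Build the (url, json_body) pair for a create-instance call.

    The /instances/{name} path and the "model" field are illustrative
    assumptions about the API shape.
    """
    url = f"{API_BASE}/instances/{name}"
    body = json.dumps({"model": model_path}).encode("utf-8")
    return url, body

url, body = create_instance_request("llama-7b", "/models/llama-7b.Q4_K_M.gguf")
print(url)  # http://localhost:8080/api/v1/instances/llama-7b
```

The same pair could then be sent with any HTTP client; if authentication is enabled, the request would also need the appropriate credentials attached.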
+
+
### Authentication
If authentication is enabled:
@@ -33,7 +35,9 @@ Each instance is displayed as a card showing:
### Via Web UI
-1. Click the **"Add Instance"** button on the dashboard
+
+
+1. Click the **"Create Instance"** button on the dashboard
2. Enter a unique **Name** for your instance (only required field)
3. Configure model source (choose one):
- **Model Path**: Full path to your downloaded GGUF model file