mirror of https://github.com/lordmathis/llamactl.git
synced 2025-11-05 16:44:22 +00:00

Update README to include dashboard screenshot
@@ -2,7 +2,7 @@
 
-**Management server for multiple llama.cpp instances with OpenAI-compatible API routing.**
+**Management server and proxy for multiple llama.cpp instances with OpenAI-compatible API routing.**
 
 ## Why llamactl?
 
@@ -13,6 +13,8 @@
 📊 **Instance Monitoring**: Health checks, auto-restart, log management
 ⚡ **Persistent State**: Instances survive server restarts
 
 **Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
 **Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
 **Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management
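Since the README change emphasizes that llamactl acts as a proxy with OpenAI-compatible API routing, a minimal sketch of what a client call might look like is below. This is an illustration only: the host, port, endpoint path, model name, and API key are assumptions, not values taken from this commit.

```python
# Hypothetical sketch: an OpenAI-compatible server is typically reached with a
# plain HTTP POST to /v1/chat/completions. All concrete values here
# (URL, model name, key) are assumed for illustration.
import json
import urllib.request

LLAMACTL_URL = "http://localhost:8080/v1/chat/completions"  # assumed address

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request; the model name is what
    a router like llamactl would use to pick the target instance."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        LLAMACTL_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_request("my-llama-instance", "Hello!", "sk-example")
# urllib.request.urlopen(req)  # would send the request to a running server
```

Because the API shape is OpenAI-compatible, existing OpenAI client libraries should also work by overriding their base URL to point at the llamactl server.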
BIN docs/images/screenshot.png (new file, 47 KiB; binary file not shown)