Mirror of https://github.com/lordmathis/llamactl.git (synced 2025-11-05 16:44:22 +00:00)
Update README to include dashboard screenshot
@@ -2,7 +2,7 @@
 
-**Management server for multiple llama.cpp instances with OpenAI-compatible API routing.**
+**Management server and proxy for multiple llama.cpp instances with OpenAI-compatible API routing.**
 
 ## Why llamactl?
 
@@ -13,6 +13,8 @@
 📊 **Instance Monitoring**: Health checks, auto-restart, log management
 ⚡ **Persistent State**: Instances survive server restarts
 
+[dashboard screenshot image]
+
 **Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
 **Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
 **Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management
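The README tagline above advertises OpenAI-compatible API routing. As a minimal sketch of what that means for a client, the request body sent to such a proxy is ordinary OpenAI chat-completion JSON. The endpoint URL and instance name below are assumptions for illustration, not values documented by llamactl; whether llamactl selects the target instance via the `model` field is also an assumption here (it is a common convention for OpenAI-compatible proxies).

```python
import json

# Assumed llamactl address; the actual host/port depend on your deployment.
BASE_URL = "http://localhost:8080/v1/chat/completions"

# Hypothetical instance name; routing by the "model" field is an assumption.
payload = {
    "model": "my-llama-instance",
    "messages": [{"role": "user", "content": "Hello!"}],
}

# An OpenAI-compatible server accepts the standard chat-completion JSON body:
body = json.dumps(payload)
print(body)
```

Because the wire format is unchanged, any stock OpenAI client library can be pointed at the proxy simply by overriding its base URL.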