Replace main screenshot
@@ -15,7 +15,7 @@
💡 **On-Demand Instance Start**: Automatically launch instances upon receiving OpenAI-compatible API requests
💾 **State Persistence**: Ensure instances remain intact across server restarts

**Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances

**Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
BIN docs/images/dashboard.png
Binary file not shown. Size before: 47 KiB, after: 44 KiB.
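The on-demand start feature quoted in the hunk above means an ordinary OpenAI-compatible request is enough to trigger an instance launch. Below is a minimal sketch of such a request; the base URL, port, instance name `llama2-7b`, and Bearer-token auth are assumptions for illustration, not details taken from this commit.

```python
import requests

# Hypothetical llamactl endpoint; adjust host/port to your deployment.
BASE_URL = "http://localhost:8080/v1"
API_KEY = "your-inference-api-key"  # placeholder, only needed if auth is enabled

# A standard OpenAI-style chat completion request. With on-demand start
# configured, a stopped instance would be launched before the reply is served.
resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama2-7b",  # assumed instance name
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```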