diff --git a/README.md b/README.md
index 47c13a6..e944e5f 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 ![Build and Release](https://github.com/lordmathis/llamactl/actions/workflows/release.yaml/badge.svg)
 ![Go Tests](https://github.com/lordmathis/llamactl/actions/workflows/go_test.yaml/badge.svg)
 ![WebUI Tests](https://github.com/lordmathis/llamactl/actions/workflows/webui_test.yaml/badge.svg)
 
-**Management server for multiple llama.cpp instances with OpenAI-compatible API routing.**
+**Management server and proxy for multiple llama.cpp instances with OpenAI-compatible API routing.**
 
 ## Why llamactl?
@@ -13,6 +13,8 @@
 📊 **Instance Monitoring**: Health checks, auto-restart, log management
 ⚡ **Persistent State**: Instances survive server restarts
 
+![Dashboard Screenshot](docs/images/screenshot.png)
+
 **Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
 **Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
 **Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management
diff --git a/docs/images/screenshot.png b/docs/images/screenshot.png
new file mode 100644
index 0000000..1c77ed2
Binary files /dev/null and b/docs/images/screenshot.png differ