mirror of
https://github.com/lordmathis/llamactl.git
synced 2025-11-05 16:44:22 +00:00
Update documentation and add README synchronization
@@ -2,7 +2,7 @@
 
-**Unified management and routing for llama.cpp and MLX models with web dashboard.**
+**Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.**
 
 ## Features
 
@@ -12,7 +12,7 @@
 - **State Persistence**: Ensure instances remain intact across server restarts
 
 ### 🔗 Universal Compatibility
-- **OpenAI API Compatible**: Drop-in replacement - route requests by model name
+- **OpenAI API Compatible**: Drop-in replacement - route requests by instance name
 - **Multi-Backend Support**: Native support for llama.cpp, MLX (Apple Silicon optimized), and vLLM
 
 ### 🌐 User-Friendly Interface
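The README change above says requests are routed by instance name via an OpenAI-compatible endpoint. As a rough sketch only (the host, port, endpoint path, and instance name below are assumptions for illustration, not taken from this commit), such a request could be built like this:

```python
# Hypothetical sketch: build an OpenAI-compatible chat request where the
# "model" field carries a llamactl instance name instead of a model file.
# Host, port, path, and instance name are assumptions, not from the commit.
import json
import urllib.request

payload = {
    "model": "my-llama-instance",  # instance name, per the updated README
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# resp = urllib.request.urlopen(req)  # uncomment against a running server
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries should also work by pointing their base URL at the llamactl server.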