Clarify port and api key assignments
@@ -138,7 +138,7 @@ go build -o llamactl ./cmd/server
 1. Open http://localhost:8080
 2. Click "Create Instance"
 3. Choose backend type (llama.cpp, MLX, or vLLM)
-4. Configure your model and options
+4. Configure your model and options (ports and API keys are auto-assigned)
 5. Start the instance and use it with any OpenAI-compatible client
 
 ## Configuration
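
Step 5 of the quickstart leaves the client side open. As a minimal sketch (not taken from the llamactl docs), the Go program below posts a chat completion to a running instance through the standard OpenAI-compatible /v1/chat/completions route; the base URL, model name, and API key are placeholders for whatever the web UI shows after you create and start an instance.

```go
// Minimal sketch of step 5: calling a llamactl-managed instance with a plain
// HTTP client, assuming it exposes the standard OpenAI-compatible
// /v1/chat/completions route. The base URL, model name, and API key below
// are placeholders; substitute the auto-assigned values from the web UI.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Request body in the OpenAI chat-completions format.
	body, err := json.Marshal(map[string]any{
		"model": "my-instance", // placeholder instance/model name
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})
	if err != nil {
		panic(err)
	}

	req, err := http.NewRequest("POST",
		"http://localhost:8080/v1/chat/completions", // placeholder base URL
		bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Placeholder for the auto-assigned API key, if your instance requires one.
	req.Header.Set("Authorization", "Bearer YOUR_API_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Any existing OpenAI-compatible SDK should work the same way: point its base URL at the instance and pass the auto-assigned key as the API key.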