mirror of https://github.com/lordmathis/llamactl.git
synced 2025-11-06 17:14:28 +00:00
Merge pull request #11 from lordmathis/feat/state-persistance
feat: Persist instance configs across app restarts
README.md (75 lines changed)
@@ -11,10 +11,12 @@ A control server for managing multiple Llama Server instances with a web-based d
 - **Auto-restart**: Configurable automatic restart on instance failure
 - **Instance Monitoring**: Real-time health checks and status monitoring
 - **Log Management**: View, search, and download instance logs
+- **Data Persistence**: Persistent storage of instance state.
 - **REST API**: Full API for programmatic control
 - **OpenAI Compatible**: Route requests to instances by instance name
 - **Configuration Management**: Comprehensive llama-server parameter support
 - **System Information**: View llama-server version, devices, and help
+- **API Key Authentication**: Secure access with separate management and inference keys

 ## Prerequisites
@@ -79,41 +81,30 @@ go build -o llamactl ./cmd/server
 ## Configuration

 llamactl can be configured via configuration files or environment variables. Configuration is loaded in the following order of precedence:

 1. Hardcoded defaults
 2. Configuration file
 3. Environment variables
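For example, a setting supplied through the environment takes precedence over the same setting from a config file (a minimal sketch with illustrative values; the variable names follow the lists further below):

```bash
# Config file requests port 8080...
cat > llamactl.yaml <<'EOF'
server:
  port: 8080
EOF

# ...but the environment variable wins, so llamactl listens on 9090
LLAMACTL_PORT=9090 ./llamactl
```
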
 ### Configuration Files

-Configuration files are searched in the following locations:
+#### Configuration File Locations
+
+Configuration files are searched in the following locations (in order of precedence):

 **Linux/macOS:**
 - `./llamactl.yaml` or `./config.yaml` (current directory)
-- `~/.config/llamactl/config.yaml`
+- `$HOME/.config/llamactl/config.yaml`
 - `/etc/llamactl/config.yaml`

 **Windows:**
 - `./llamactl.yaml` or `./config.yaml` (current directory)
 - `%APPDATA%\llamactl\config.yaml`
+- `%USERPROFILE%\llamactl\config.yaml`
 - `%PROGRAMDATA%\llamactl\config.yaml`

-You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable
+You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable.
-## API Key Authentication
-
-llamactl now supports API Key authentication for both management and inference (OpenAI-compatible) endpoints. The are separate keys for management and inference APIs. Management keys grant full access; inference keys grant access to OpenAI-compatible endpoints
-
-**How to Use:**
-- Pass your API key in requests using one of:
-- `Authorization: Bearer <key>` header
-- `X-API-Key: <key>` header
-- `api_key=<key>` query parameter
-
-**Auto-generated keys**: If no keys are set and authentication is required, a key will be generated and printed to the terminal at startup. For production, set your own keys in config or environment variables.
-
 ### Configuration Options

@@ -137,8 +128,11 @@ server:
 ```yaml
 instances:
-  port_range: [8000, 9000] # Port range for instances
-  log_directory: "/tmp/llamactl" # Directory for instance logs
+  port_range: [8000, 9000] # Port range for instances (default: [8000, 9000])
+  data_dir: "~/.local/share/llamactl" # Directory for all llamactl data (default varies by OS)
+  configs_dir: "~/.local/share/llamactl/instances" # Directory for instance configs (default: data_dir/instances)
+  logs_dir: "~/.local/share/llamactl/logs" # Directory for instance logs (default: data_dir/logs)
+  auto_create_dirs: true # Automatically create data/config/logs directories (default: true)
   max_instances: -1 # Maximum instances (-1 = unlimited)
   llama_executable: "llama-server" # Path to llama-server executable
   default_auto_restart: true # Default auto-restart setting
@@ -148,14 +142,17 @@ instances:
 **Environment Variables:**
 - `LLAMACTL_INSTANCE_PORT_RANGE` - Port range (format: "8000-9000" or "8000,9000")
-- `LLAMACTL_LOG_DIR` - Log directory path
+- `LLAMACTL_DATA_DIRECTORY` - Data directory path
+- `LLAMACTL_INSTANCES_DIR` - Instance configs directory path
+- `LLAMACTL_LOGS_DIR` - Log directory path
+- `LLAMACTL_AUTO_CREATE_DATA_DIR` - Auto-create data/config/logs directories (true/false)
 - `LLAMACTL_MAX_INSTANCES` - Maximum number of instances
 - `LLAMACTL_LLAMA_EXECUTABLE` - Path to llama-server executable
 - `LLAMACTL_DEFAULT_AUTO_RESTART` - Default auto-restart setting (true/false)
 - `LLAMACTL_DEFAULT_MAX_RESTARTS` - Default maximum restarts
 - `LLAMACTL_DEFAULT_RESTART_DELAY` - Default restart delay in seconds
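For example, the new persistence directories can be configured entirely from the environment before starting the server (a sketch; the paths are illustrative, any writable directories work):

```bash
# Illustrative paths; any writable directories work
export LLAMACTL_DATA_DIRECTORY=/var/lib/llamactl
export LLAMACTL_INSTANCES_DIR=/var/lib/llamactl/instances
export LLAMACTL_LOGS_DIR=/var/log/llamactl
export LLAMACTL_AUTO_CREATE_DATA_DIR=true
./llamactl
```
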
-#### Auth Configuration
+#### Authentication Configuration

 ```yaml
 auth:
@@ -180,13 +177,16 @@ server:
 instances:
   port_range: [8001, 8100]
-  log_directory: "/var/log/llamactl"
+  data_dir: "/var/lib/llamactl"
+  configs_dir: "/var/lib/llamactl/instances"
+  logs_dir: "/var/log/llamactl"
+  auto_create_dirs: true
   max_instances: 10
   llama_executable: "/usr/local/bin/llama-server"
   default_auto_restart: true
   default_max_restarts: 5
   default_restart_delay: 10

 auth:
   require_inference_auth: true
   inference_keys: ["sk-inference-abc123"]
@@ -209,6 +209,22 @@ LLAMACTL_CONFIG_PATH=/path/to/config.yaml ./llamactl
 LLAMACTL_PORT=9090 LLAMACTL_LOG_DIR=/custom/logs ./llamactl
 ```

+### Authentication
+
+llamactl supports API Key authentication for both management and inference (OpenAI-compatible) endpoints. There are separate keys for management and inference APIs:
+
+- **Management keys** grant full access to instance management
+- **Inference keys** grant access to OpenAI-compatible endpoints
+- Management keys also work for inference endpoints (higher privilege)
+
+**How to Use:**
+Pass your API key in requests using one of:
+- `Authorization: Bearer <key>` header
+- `X-API-Key: <key>` header
+- `api_key=<key>` query parameter
+
+**Auto-generated keys**: If no keys are set and authentication is required, a key will be generated and printed to the terminal at startup. For production, set your own keys in config or environment variables.
+
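For example, listing instances with the `X-API-Key` header instead of a Bearer token (a sketch; the key value is illustrative):

```bash
# Illustrative key; use one of your configured management keys
curl -H "X-API-Key: sk-management-your-key" \
  http://localhost:8080/api/v1/instances
```
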
 ### Web Dashboard

 Open your browser and navigate to `http://localhost:8080` to access the web dashboard.
@@ -222,6 +238,7 @@ The REST API is available at `http://localhost:8080/api/v1`. See the Swagger doc
 ```bash
 curl -X POST http://localhost:8080/api/v1/instances/my-instance \
   -H "Content-Type: application/json" \
+  -H "Authorization: Bearer sk-management-your-key" \
   -d '{
     "model": "/path/to/model.gguf",
     "gpu_layers": 32,
@@ -232,17 +249,22 @@ curl -X POST http://localhost:8080/api/v1/instances/my-instance \
 #### List Instances

 ```bash
-curl http://localhost:8080/api/v1/instances
+curl -H "Authorization: Bearer sk-management-your-key" \
+  http://localhost:8080/api/v1/instances
 ```

 #### Start/Stop Instance

 ```bash
 # Start
-curl -X POST http://localhost:8080/api/v1/instances/my-instance/start
+curl -X POST \
+  -H "Authorization: Bearer sk-management-your-key" \
+  http://localhost:8080/api/v1/instances/my-instance/start

 # Stop
-curl -X POST http://localhost:8080/api/v1/instances/my-instance/stop
+curl -X POST \
+  -H "Authorization: Bearer sk-management-your-key" \
+  http://localhost:8080/api/v1/instances/my-instance/stop
 ```

 ### OpenAI Compatible Endpoints
@@ -252,6 +274,7 @@ Route requests to instances by including the instance name as the model paramete
 ```bash
 curl -X POST http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
+  -H "Authorization: Bearer sk-inference-your-key" \
   -d '{
     "model": "my-instance",
     "messages": [{"role": "user", "content": "Hello!"}]
@@ -17,17 +17,24 @@ import (
 // @basePath /api/v1
 func main() {

-    config, err := llamactl.LoadConfig("")
+    configPath := os.Getenv("LLAMACTL_CONFIG_PATH")
+    config, err := llamactl.LoadConfig(configPath)
     if err != nil {
         fmt.Printf("Error loading config: %v\n", err)
         fmt.Println("Using default configuration.")
     }

-    // Create the log directory if it doesn't exist
-    err = os.MkdirAll(config.Instances.LogDirectory, 0755)
-    if err != nil {
-        fmt.Printf("Error creating log directory: %v\n", err)
-        return
+    // Create the data directory if it doesn't exist
+    if config.Instances.AutoCreateDirs {
+        if err := os.MkdirAll(config.Instances.InstancesDir, 0755); err != nil {
+            fmt.Printf("Error creating config directory %s: %v\n", config.Instances.InstancesDir, err)
+            fmt.Println("Persistence will not be available.")
+        }
+
+        if err := os.MkdirAll(config.Instances.LogsDir, 0755); err != nil {
+            fmt.Printf("Error creating log directory %s: %v\n", config.Instances.LogsDir, err)
+            fmt.Println("Instance logs will not be available.")
+        }
     }

     // Initialize the instance manager
pkg/config.go (112 lines changed)
@@ -37,8 +37,17 @@ type InstancesConfig struct {
     // Port range for instances (e.g., 8000,9000)
     PortRange [2]int `yaml:"port_range"`

-    // Directory where instance logs will be stored
-    LogDirectory string `yaml:"log_directory"`
+    // Directory where all llamactl data will be stored (instances.json, logs, etc.)
+    DataDir string `yaml:"data_dir"`
+
+    // Instance config directory override
+    InstancesDir string `yaml:"configs_dir"`
+
+    // Logs directory override
+    LogsDir string `yaml:"logs_dir"`
+
+    // Automatically create the data directory if it doesn't exist
+    AutoCreateDirs bool `yaml:"auto_create_dirs"`

     // Maximum number of instances that can be created
     MaxInstances int `yaml:"max_instances"`
@@ -87,7 +96,10 @@ func LoadConfig(configPath string) (Config, error) {
         },
         Instances: InstancesConfig{
             PortRange: [2]int{8000, 9000},
-            LogDirectory: "/tmp/llamactl",
+            DataDir: getDefaultDataDirectory(),
+            InstancesDir: filepath.Join(getDefaultDataDirectory(), "instances"),
+            LogsDir: filepath.Join(getDefaultDataDirectory(), "logs"),
+            AutoCreateDirs: true,
             MaxInstances: -1, // -1 means unlimited
             LlamaExecutable: "llama-server",
             DefaultAutoRestart: true,
@@ -157,15 +169,28 @@ func loadEnvVars(cfg *Config) {
         }
     }

+    // Data config
+    if dataDir := os.Getenv("LLAMACTL_DATA_DIRECTORY"); dataDir != "" {
+        cfg.Instances.DataDir = dataDir
+    }
+    if instancesDir := os.Getenv("LLAMACTL_INSTANCES_DIR"); instancesDir != "" {
+        cfg.Instances.InstancesDir = instancesDir
+    }
+    if logsDir := os.Getenv("LLAMACTL_LOGS_DIR"); logsDir != "" {
+        cfg.Instances.LogsDir = logsDir
+    }
+    if autoCreate := os.Getenv("LLAMACTL_AUTO_CREATE_DATA_DIR"); autoCreate != "" {
+        if b, err := strconv.ParseBool(autoCreate); err == nil {
+            cfg.Instances.AutoCreateDirs = b
+        }
+    }
+
     // Instance config
     if portRange := os.Getenv("LLAMACTL_INSTANCE_PORT_RANGE"); portRange != "" {
         if ports := ParsePortRange(portRange); ports != [2]int{0, 0} {
             cfg.Instances.PortRange = ports
         }
     }
-    if logDir := os.Getenv("LLAMACTL_LOG_DIR"); logDir != "" {
-        cfg.Instances.LogDirectory = logDir
-    }
     if maxInstances := os.Getenv("LLAMACTL_MAX_INSTANCES"); maxInstances != "" {
         if m, err := strconv.Atoi(maxInstances); err == nil {
             cfg.Instances.MaxInstances = m
@@ -231,64 +256,63 @@ func ParsePortRange(s string) [2]int {
     return [2]int{0, 0} // Invalid format
 }

+// getDefaultDataDirectory returns platform-specific default data directory
+func getDefaultDataDirectory() string {
+    switch runtime.GOOS {
+    case "windows":
+        // Try PROGRAMDATA first (system-wide), fallback to LOCALAPPDATA (user)
+        if programData := os.Getenv("PROGRAMDATA"); programData != "" {
+            return filepath.Join(programData, "llamactl")
+        }
+        if localAppData := os.Getenv("LOCALAPPDATA"); localAppData != "" {
+            return filepath.Join(localAppData, "llamactl")
+        }
+        return "C:\\ProgramData\\llamactl" // Final fallback
+
+    case "darwin":
+        // For macOS, use user's Application Support directory
+        if homeDir, _ := os.UserHomeDir(); homeDir != "" {
+            return filepath.Join(homeDir, "Library", "Application Support", "llamactl")
+        }
+        return "/usr/local/var/llamactl" // Fallback
+
+    default:
+        // Linux and other Unix-like systems
+        if homeDir, _ := os.UserHomeDir(); homeDir != "" {
+            return filepath.Join(homeDir, ".local", "share", "llamactl")
+        }
+        return "/var/lib/llamactl" // Final fallback
+    }
+}
+
 // getDefaultConfigLocations returns platform-specific config file locations
 func getDefaultConfigLocations() []string {
     var locations []string
-
-    // Current directory (cross-platform)
-    locations = append(locations,
-        "./llamactl.yaml",
-        "./config.yaml",
-    )

     homeDir, _ := os.UserHomeDir()

     switch runtime.GOOS {
     case "windows":
-        // Windows: Use APPDATA and ProgramData
+        // Windows: Use APPDATA if available, else user home, fallback to ProgramData
         if appData := os.Getenv("APPDATA"); appData != "" {
             locations = append(locations, filepath.Join(appData, "llamactl", "config.yaml"))
-        }
-        if programData := os.Getenv("PROGRAMDATA"); programData != "" {
-            locations = append(locations, filepath.Join(programData, "llamactl", "config.yaml"))
-        }
-        // Fallback to user home
-        if homeDir != "" {
+        } else if homeDir != "" {
             locations = append(locations, filepath.Join(homeDir, "llamactl", "config.yaml"))
         }
+        locations = append(locations, filepath.Join(os.Getenv("PROGRAMDATA"), "llamactl", "config.yaml"))

     case "darwin":
-        // macOS: Use proper Application Support directories
+        // macOS: Use Application Support in user home, fallback to /Library/Application Support
         if homeDir != "" {
-            locations = append(locations,
-                filepath.Join(homeDir, "Library", "Application Support", "llamactl", "config.yaml"),
-                filepath.Join(homeDir, ".config", "llamactl", "config.yaml"), // XDG fallback
-            )
+            locations = append(locations, filepath.Join(homeDir, "Library", "Application Support", "llamactl", "config.yaml"))
         }
         locations = append(locations, "/Library/Application Support/llamactl/config.yaml")
-        locations = append(locations, "/etc/llamactl/config.yaml") // Unix fallback

     default:
-        // User config: $XDG_CONFIG_HOME/llamactl/config.yaml or ~/.config/llamactl/config.yaml
-        configHome := os.Getenv("XDG_CONFIG_HOME")
-        if configHome == "" && homeDir != "" {
-            configHome = filepath.Join(homeDir, ".config")
-        }
-        if configHome != "" {
-            locations = append(locations, filepath.Join(configHome, "llamactl", "config.yaml"))
-        }
-
-        // System config: /etc/llamactl/config.yaml
+        // Linux/Unix: Use ~/.config/llamactl/config.yaml, fallback to /etc/llamactl/config.yaml
+        if homeDir != "" {
+            locations = append(locations, filepath.Join(homeDir, ".config", "llamactl", "config.yaml"))
+        }
         locations = append(locations, "/etc/llamactl/config.yaml")
-
-        // Additional system locations
-        if xdgConfigDirs := os.Getenv("XDG_CONFIG_DIRS"); xdgConfigDirs != "" {
-            for dir := range strings.SplitSeq(xdgConfigDirs, ":") {
-                if dir != "" {
-                    locations = append(locations, filepath.Join(dir, "llamactl", "config.yaml"))
-                }
-            }
-        }
     }

     return locations
@@ -22,12 +22,24 @@ func TestLoadConfig_Defaults(t *testing.T) {
     if cfg.Server.Port != 8080 {
         t.Errorf("Expected default port to be 8080, got %d", cfg.Server.Port)
     }

+    homedir, err := os.UserHomeDir()
+    if err != nil {
+        t.Fatalf("Failed to get user home directory: %v", err)
+    }
+
+    if cfg.Instances.InstancesDir != filepath.Join(homedir, ".local", "share", "llamactl", "instances") {
+        t.Errorf("Expected default instances directory '%s', got %q", filepath.Join(homedir, ".local", "share", "llamactl", "instances"), cfg.Instances.InstancesDir)
+    }
+    if cfg.Instances.LogsDir != filepath.Join(homedir, ".local", "share", "llamactl", "logs") {
+        t.Errorf("Expected default logs directory '%s', got %q", filepath.Join(homedir, ".local", "share", "llamactl", "logs"), cfg.Instances.LogsDir)
+    }
+    if !cfg.Instances.AutoCreateDirs {
+        t.Error("Expected default instances auto-create to be true")
+    }
     if cfg.Instances.PortRange != [2]int{8000, 9000} {
         t.Errorf("Expected default port range [8000, 9000], got %v", cfg.Instances.PortRange)
     }
-    if cfg.Instances.LogDirectory != "/tmp/llamactl" {
-        t.Errorf("Expected default log directory '/tmp/llamactl', got %q", cfg.Instances.LogDirectory)
-    }
     if cfg.Instances.MaxInstances != -1 {
         t.Errorf("Expected default max instances -1, got %d", cfg.Instances.MaxInstances)
     }
@@ -56,7 +68,7 @@ server:
   port: 9090
 instances:
   port_range: [7000, 8000]
-  log_directory: "/custom/logs"
+  logs_dir: "/custom/logs"
   max_instances: 5
   llama_executable: "/usr/bin/llama-server"
   default_auto_restart: false
@@ -84,8 +96,8 @@ instances:
     if cfg.Instances.PortRange != [2]int{7000, 8000} {
         t.Errorf("Expected port range [7000, 8000], got %v", cfg.Instances.PortRange)
     }
-    if cfg.Instances.LogDirectory != "/custom/logs" {
-        t.Errorf("Expected log directory '/custom/logs', got %q", cfg.Instances.LogDirectory)
+    if cfg.Instances.LogsDir != "/custom/logs" {
+        t.Errorf("Expected logs directory '/custom/logs', got %q", cfg.Instances.LogsDir)
     }
     if cfg.Instances.MaxInstances != 5 {
         t.Errorf("Expected max instances 5, got %d", cfg.Instances.MaxInstances)
@@ -110,7 +122,7 @@ func TestLoadConfig_EnvironmentOverrides(t *testing.T) {
         "LLAMACTL_HOST": "0.0.0.0",
         "LLAMACTL_PORT": "3000",
         "LLAMACTL_INSTANCE_PORT_RANGE": "5000-6000",
-        "LLAMACTL_LOG_DIR": "/env/logs",
+        "LLAMACTL_LOGS_DIR": "/env/logs",
         "LLAMACTL_MAX_INSTANCES": "20",
         "LLAMACTL_LLAMA_EXECUTABLE": "/env/llama-server",
         "LLAMACTL_DEFAULT_AUTO_RESTART": "false",
@@ -139,8 +151,8 @@ func TestLoadConfig_EnvironmentOverrides(t *testing.T) {
     if cfg.Instances.PortRange != [2]int{5000, 6000} {
         t.Errorf("Expected port range [5000, 6000], got %v", cfg.Instances.PortRange)
     }
-    if cfg.Instances.LogDirectory != "/env/logs" {
-        t.Errorf("Expected log directory '/env/logs', got %q", cfg.Instances.LogDirectory)
+    if cfg.Instances.LogsDir != "/env/logs" {
+        t.Errorf("Expected logs directory '/env/logs', got %q", cfg.Instances.LogsDir)
     }
     if cfg.Instances.MaxInstances != 20 {
         t.Errorf("Expected max instances 20, got %d", cfg.Instances.MaxInstances)
@@ -149,7 +149,7 @@ func NewInstance(name string, globalSettings *InstancesConfig, options *CreateIn
     // Apply defaults
     applyDefaultOptions(optionsCopy, globalSettings)

     // Create the instance logger
-    logger := NewInstanceLogger(name, globalSettings.LogDirectory)
+    logger := NewInstanceLogger(name, globalSettings.LogsDir)

     return &Instance{
         Name: name,
@@ -235,10 +235,12 @@ func (i *Instance) MarshalJSON() ([]byte, error) {
         Name string `json:"name"`
         Options *CreateInstanceOptions `json:"options,omitempty"`
         Running bool `json:"running"`
+        Created int64 `json:"created,omitempty"`
     }{
         Name: i.Name,
         Options: i.options,
         Running: i.Running,
+        Created: i.Created,
     }

     return json.Marshal(temp)
@@ -251,6 +253,7 @@ func (i *Instance) UnmarshalJSON(data []byte) error {
         Name string `json:"name"`
         Options *CreateInstanceOptions `json:"options,omitempty"`
         Running bool `json:"running"`
+        Created int64 `json:"created,omitempty"`
     }{}

     if err := json.Unmarshal(data, &temp); err != nil {
@@ -260,6 +263,7 @@ func (i *Instance) UnmarshalJSON(data []byte) error {
     // Set the fields
     i.Name = temp.Name
     i.Running = temp.Running
+    i.Created = temp.Created

     // Handle options with validation but no defaults
     if temp.Options != nil {
@@ -9,7 +9,7 @@ import (
 func TestNewInstance(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         DefaultAutoRestart: true,
         DefaultMaxRestarts: 3,
         DefaultRestartDelay: 5,

@@ -54,7 +54,7 @@ func TestNewInstance(t *testing.T) {

 func TestNewInstance_WithRestartOptions(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         DefaultAutoRestart: true,
         DefaultMaxRestarts: 3,
         DefaultRestartDelay: 5,

@@ -91,7 +91,7 @@ func TestNewInstance_WithRestartOptions(t *testing.T) {

 func TestNewInstance_ValidationAndDefaults(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         DefaultAutoRestart: true,
         DefaultMaxRestarts: 3,
         DefaultRestartDelay: 5,

@@ -123,7 +123,7 @@ func TestNewInstance_ValidationAndDefaults(t *testing.T) {

 func TestSetOptions(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         DefaultAutoRestart: true,
         DefaultMaxRestarts: 3,
         DefaultRestartDelay: 5,

@@ -164,7 +164,7 @@ func TestSetOptions(t *testing.T) {

 func TestSetOptions_NilOptions(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         DefaultAutoRestart: true,
         DefaultMaxRestarts: 3,
         DefaultRestartDelay: 5,

@@ -191,7 +191,7 @@ func TestSetOptions_NilOptions(t *testing.T) {

 func TestGetProxy(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
     }

     options := &llamactl.CreateInstanceOptions{

@@ -224,7 +224,7 @@ func TestGetProxy(t *testing.T) {

 func TestMarshalJSON(t *testing.T) {
     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         DefaultAutoRestart: true,
         DefaultMaxRestarts: 3,
         DefaultRestartDelay: 5,

@@ -406,7 +406,7 @@ func TestCreateInstanceOptionsValidation(t *testing.T) {
     }

     globalSettings := &llamactl.InstancesConfig{
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
     }

     for _, tt := range tests {
pkg/manager.go (183 lines changed)
@@ -1,7 +1,12 @@
 package llamactl

 import (
+    "encoding/json"
     "fmt"
+    "log"
+    "os"
+    "path/filepath"
+    "strings"
     "sync"
 )
@@ -28,11 +33,17 @@ type instanceManager struct {
 // NewInstanceManager creates a new instance of InstanceManager.
 func NewInstanceManager(instancesConfig InstancesConfig) InstanceManager {
-    return &instanceManager{
+    im := &instanceManager{
         instances: make(map[string]*Instance),
         ports: make(map[int]bool),
         instancesConfig: instancesConfig,
     }
+
+    // Load existing instances from disk
+    if err := im.loadInstances(); err != nil {
+        log.Printf("Error loading instances: %v", err)
+    }
+    return im
 }

 // ListInstances returns a list of all instances managed by the instance manager.
@@ -95,6 +106,10 @@ func (im *instanceManager) CreateInstance(name string, options *CreateInstanceOp
     im.instances[instance.Name] = instance
     im.ports[options.Port] = true

+    if err := im.persistInstance(instance); err != nil {
+        return nil, fmt.Errorf("failed to persist instance %s: %w", name, err)
+    }
+
     return instance, nil
 }
@@ -150,6 +165,12 @@ func (im *instanceManager) UpdateInstance(name string, options *CreateInstanceOp
         }
     }

+    im.mu.Lock()
+    defer im.mu.Unlock()
+    if err := im.persistInstance(instance); err != nil {
+        return nil, fmt.Errorf("failed to persist updated instance %s: %w", name, err)
+    }
+
     return instance, nil
 }
@@ -158,17 +179,24 @@ func (im *instanceManager) DeleteInstance(name string) error {
     im.mu.Lock()
     defer im.mu.Unlock()

-    _, exists := im.instances[name]
+    instance, exists := im.instances[name]
     if !exists {
         return fmt.Errorf("instance with name %s not found", name)
     }

-    if im.instances[name].Running {
+    if instance.Running {
         return fmt.Errorf("instance with name %s is still running, stop it before deleting", name)
     }

-    delete(im.ports, im.instances[name].options.Port)
+    delete(im.ports, instance.options.Port)
     delete(im.instances, name)

+    // Delete the instance's config file if persistence is enabled
+    instancePath := filepath.Join(im.instancesConfig.InstancesDir, instance.Name+".json")
+    if err := os.Remove(instancePath); err != nil && !os.IsNotExist(err) {
+        return fmt.Errorf("failed to delete config file for instance %s: %w", instance.Name, err)
+    }
+
     return nil
 }
@@ -190,6 +218,13 @@ func (im *instanceManager) StartInstance(name string) (*Instance, error) {
         return nil, fmt.Errorf("failed to start instance %s: %w", name, err)
     }

+    im.mu.Lock()
+    defer im.mu.Unlock()
+    err := im.persistInstance(instance)
+    if err != nil {
+        return nil, fmt.Errorf("failed to persist instance %s: %w", name, err)
+    }
+
     return instance, nil
 }
@@ -210,6 +245,13 @@ func (im *instanceManager) StopInstance(name string) (*Instance, error) {
         return nil, fmt.Errorf("failed to stop instance %s: %w", name, err)
     }

+    im.mu.Lock()
+    defer im.mu.Unlock()
+    err := im.persistInstance(instance)
+    if err != nil {
+        return nil, fmt.Errorf("failed to persist instance %s: %w", name, err)
+    }
+
     return instance, nil
 }
@@ -249,6 +291,35 @@ func (im *instanceManager) getNextAvailablePort() (int, error) {
     return 0, fmt.Errorf("no available ports in the specified range")
 }

+// persistInstance saves an instance to its JSON file
+func (im *instanceManager) persistInstance(instance *Instance) error {
+    if im.instancesConfig.InstancesDir == "" {
+        return nil // Persistence disabled
+    }
+
+    instancePath := filepath.Join(im.instancesConfig.InstancesDir, instance.Name+".json")
+    tempPath := instancePath + ".tmp"
+
+    // Serialize instance to JSON
+    jsonData, err := json.MarshalIndent(instance, "", " ")
+    if err != nil {
+        return fmt.Errorf("failed to marshal instance %s: %w", instance.Name, err)
+    }
+
+    // Write to temporary file first
+    if err := os.WriteFile(tempPath, jsonData, 0644); err != nil {
+        return fmt.Errorf("failed to write temp file for instance %s: %w", instance.Name, err)
+    }
+
+    // Atomic rename
+    if err := os.Rename(tempPath, instancePath); err != nil {
+        os.Remove(tempPath) // Clean up temp file
+        return fmt.Errorf("failed to rename temp file for instance %s: %w", instance.Name, err)
+    }
+
+    return nil
+}
+
 func (im *instanceManager) Shutdown() {
     im.mu.Lock()
     defer im.mu.Unlock()
@@ -275,3 +346,107 @@ func (im *instanceManager) Shutdown() {
     wg.Wait()
     fmt.Println("All instances stopped.")
 }

+// loadInstances restores all instances from disk
+func (im *instanceManager) loadInstances() error {
+    if im.instancesConfig.InstancesDir == "" {
+        return nil // Persistence disabled
+    }
+
+    // Check if instances directory exists
+    if _, err := os.Stat(im.instancesConfig.InstancesDir); os.IsNotExist(err) {
+        return nil // No instances directory, start fresh
+    }
+
+    // Read all JSON files from instances directory
+    files, err := os.ReadDir(im.instancesConfig.InstancesDir)
+    if err != nil {
+        return fmt.Errorf("failed to read instances directory: %w", err)
+    }
+
+    loadedCount := 0
+    for _, file := range files {
+        if file.IsDir() || !strings.HasSuffix(file.Name(), ".json") {
+            continue
+        }
+
+        instanceName := strings.TrimSuffix(file.Name(), ".json")
+        instancePath := filepath.Join(im.instancesConfig.InstancesDir, file.Name())
+
+        if err := im.loadInstance(instanceName, instancePath); err != nil {
+            log.Printf("Failed to load instance %s: %v", instanceName, err)
+            continue
+        }
+
+        loadedCount++
+    }
+
+    if loadedCount > 0 {
+        log.Printf("Loaded %d instances from persistence", loadedCount)
+        // Auto-start instances that have auto-restart enabled
+        go im.autoStartInstances()
+    }
+
+    return nil
+}
+
+// loadInstance loads a single instance from its JSON file
+func (im *instanceManager) loadInstance(name, path string) error {
+    data, err := os.ReadFile(path)
+    if err != nil {
+        return fmt.Errorf("failed to read instance file: %w", err)
+    }
+
+    var persistedInstance Instance
+    if err := json.Unmarshal(data, &persistedInstance); err != nil {
+        return fmt.Errorf("failed to unmarshal instance: %w", err)
+    }
+
+    // Validate the instance name matches the filename
+    if persistedInstance.Name != name {
+        return fmt.Errorf("instance name mismatch: file=%s, instance.Name=%s", name, persistedInstance.Name)
+    }
+
+    // Create new instance using NewInstance (handles validation, defaults, setup)
+    instance := NewInstance(name, &im.instancesConfig, persistedInstance.GetOptions())
+
+    // Restore persisted fields that NewInstance doesn't set
+    instance.Created = persistedInstance.Created
+    instance.Running = persistedInstance.Running
+
+    // Check for port conflicts and add to maps
+    if instance.GetOptions() != nil && instance.GetOptions().Port > 0 {
+        port := instance.GetOptions().Port
+        if im.ports[port] {
+            return fmt.Errorf("port conflict: instance %s wants port %d which is already in use", name, port)
+        }
+        im.ports[port] = true
+    }
+
+    im.instances[name] = instance
+    return nil
+}
+
+// autoStartInstances starts instances that were running when persisted and have auto-restart enabled
+func (im *instanceManager) autoStartInstances() {
+    im.mu.RLock()
+    var instancesToStart []*Instance
+    for _, instance := range im.instances {
+        if instance.Running && // Was running when persisted
+            instance.GetOptions() != nil &&
+            instance.GetOptions().AutoRestart != nil &&
+            *instance.GetOptions().AutoRestart {
+            instancesToStart = append(instancesToStart, instance)
+        }
+    }
+    im.mu.RUnlock()
+
+    for _, instance := range instancesToStart {
+        log.Printf("Auto-starting instance %s", instance.Name)
+        // Reset running state before starting (since Start() expects stopped instance)
+        instance.Running = false
+        if err := instance.Start(); err != nil {
+            log.Printf("Failed to auto-start instance %s: %v", instance.Name, err)
+        }
+    }
+}
@@ -1,6 +1,10 @@
 package llamactl_test

 import (
+    "encoding/json"
+    "os"
+    "path/filepath"
+    "reflect"
     "strings"
     "testing"
@@ -10,7 +14,7 @@ import (
 func TestNewInstanceManager(t *testing.T) {
     config := llamactl.InstancesConfig{
         PortRange: [2]int{8000, 9000},
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         MaxInstances: 5,
         LlamaExecutable: "llama-server",
         DefaultAutoRestart: true,
@@ -486,11 +490,396 @@ func TestUpdateInstance_NotFound(t *testing.T) {
     }
 }

+func TestPersistence_InstancePersistedOnCreation(t *testing.T) {
+    // Create temporary directory for persistence
+    tempDir := t.TempDir()
+
+    config := llamactl.InstancesConfig{
+        PortRange: [2]int{8000, 9000},
+        InstancesDir: tempDir,
+        MaxInstances: 10,
+    }
+    manager := llamactl.NewInstanceManager(config)
+
+    options := &llamactl.CreateInstanceOptions{
+        LlamaServerOptions: llamactl.LlamaServerOptions{
+            Model: "/path/to/model.gguf",
+            Port: 8080,
+        },
+    }
+
+    // Create instance
+    _, err := manager.CreateInstance("test-instance", options)
+    if err != nil {
+        t.Fatalf("CreateInstance failed: %v", err)
+    }
+
+    // Check that JSON file was created
+    expectedPath := filepath.Join(tempDir, "test-instance.json")
+    if _, err := os.Stat(expectedPath); os.IsNotExist(err) {
+        t.Errorf("Expected persistence file %s to exist", expectedPath)
+    }
+
+    // Verify file contains correct data
+    data, err := os.ReadFile(expectedPath)
+    if err != nil {
+        t.Fatalf("Failed to read persistence file: %v", err)
+    }
+
+    var persistedInstance map[string]interface{}
+    if err := json.Unmarshal(data, &persistedInstance); err != nil {
+        t.Fatalf("Failed to unmarshal persisted data: %v", err)
+    }
+
+    if persistedInstance["name"] != "test-instance" {
+        t.Errorf("Expected name 'test-instance', got %v", persistedInstance["name"])
+    }
+}
+
+func TestPersistence_InstancePersistedOnUpdate(t *testing.T) {
+    tempDir := t.TempDir()
+
+    config := llamactl.InstancesConfig{
+        PortRange: [2]int{8000, 9000},
+        InstancesDir: tempDir,
+        MaxInstances: 10,
+    }
+    manager := llamactl.NewInstanceManager(config)
+
+    // Create instance
+    options := &llamactl.CreateInstanceOptions{
+        LlamaServerOptions: llamactl.LlamaServerOptions{
+            Model: "/path/to/model.gguf",
+            Port: 8080,
+        },
+    }
+    _, err := manager.CreateInstance("test-instance", options)
+    if err != nil {
+        t.Fatalf("CreateInstance failed: %v", err)
+    }
+
+    // Update instance
+    newOptions := &llamactl.CreateInstanceOptions{
+        LlamaServerOptions: llamactl.LlamaServerOptions{
+            Model: "/path/to/new-model.gguf",
+            Port: 8081,
+        },
+    }
+    _, err = manager.UpdateInstance("test-instance", newOptions)
+    if err != nil {
+        t.Fatalf("UpdateInstance failed: %v", err)
+    }
+
+    // Verify persistence file was updated
+    expectedPath := filepath.Join(tempDir, "test-instance.json")
+    data, err := os.ReadFile(expectedPath)
+    if err != nil {
+        t.Fatalf("Failed to read persistence file: %v", err)
+    }
+
+    var persistedInstance map[string]interface{}
+    if err := json.Unmarshal(data, &persistedInstance); err != nil {
+        t.Fatalf("Failed to unmarshal persisted data: %v", err)
+    }
+
+    // Check that the options were updated
+    options_data, ok := persistedInstance["options"].(map[string]interface{})
+    if !ok {
+        t.Fatal("Expected options to be present in persisted data")
+    }
+
+    if options_data["model"] != "/path/to/new-model.gguf" {
+        t.Errorf("Expected updated model '/path/to/new-model.gguf', got %v", options_data["model"])
+    }
+}
+
+func TestPersistence_InstanceFileDeletedOnDeletion(t *testing.T) {
+    tempDir := t.TempDir()
+
+    config := llamactl.InstancesConfig{
+        PortRange: [2]int{8000, 9000},
+        InstancesDir: tempDir,
+        MaxInstances: 10,
+    }
+    manager := llamactl.NewInstanceManager(config)
+
+    // Create instance
+    options := &llamactl.CreateInstanceOptions{
+        LlamaServerOptions: llamactl.LlamaServerOptions{
+            Model: "/path/to/model.gguf",
+        },
+    }
+    _, err := manager.CreateInstance("test-instance", options)
+    if err != nil {
+        t.Fatalf("CreateInstance failed: %v", err)
+    }
+
+    expectedPath := filepath.Join(tempDir, "test-instance.json")
+
+    // Verify file exists
+    if _, err := os.Stat(expectedPath); os.IsNotExist(err) {
+        t.Fatal("Expected persistence file to exist before deletion")
+    }
+
+    // Delete instance
+    err = manager.DeleteInstance("test-instance")
+    if err != nil {
+        t.Fatalf("DeleteInstance failed: %v", err)
+    }
+
+    // Verify file was deleted
+    if _, err := os.Stat(expectedPath); !os.IsNotExist(err) {
+        t.Error("Expected persistence file to be deleted")
+    }
+}
+
+func TestPersistence_InstancesLoadedFromDisk(t *testing.T) {
+    tempDir := t.TempDir()
+
+    // Create JSON files manually (simulating previous run)
+    instance1JSON := `{
+        "name": "instance1",
+        "running": false,
+        "options": {
+            "model": "/path/to/model1.gguf",
+            "port": 8080
+        }
+    }`
+
+    instance2JSON := `{
+        "name": "instance2",
+        "running": false,
+        "options": {
+            "model": "/path/to/model2.gguf",
+            "port": 8081
+        }
+    }`
+
+    // Write JSON files
+    err := os.WriteFile(filepath.Join(tempDir, "instance1.json"), []byte(instance1JSON), 0644)
+    if err != nil {
+        t.Fatalf("Failed to write test JSON file: %v", err)
+    }
+
+    err = os.WriteFile(filepath.Join(tempDir, "instance2.json"), []byte(instance2JSON), 0644)
+    if err != nil {
+        t.Fatalf("Failed to write test JSON file: %v", err)
+    }
+
+    // Create manager - should load instances from disk
+    config := llamactl.InstancesConfig{
+        PortRange: [2]int{8000, 9000},
+        InstancesDir: tempDir,
+        MaxInstances: 10,
+    }
+    manager := llamactl.NewInstanceManager(config)
+
+    // Verify instances were loaded
+    instances, err := manager.ListInstances()
+    if err != nil {
+        t.Fatalf("ListInstances failed: %v", err)
+    }
+
+    if len(instances) != 2 {
+        t.Fatalf("Expected 2 loaded instances, got %d", len(instances))
+    }
+
+    // Check instances by name
+    instancesByName := make(map[string]*llamactl.Instance)
+    for _, instance := range instances {
+        instancesByName[instance.Name] = instance
+    }
+
+    instance1, exists := instancesByName["instance1"]
+    if !exists {
+        t.Error("Expected instance1 to be loaded")
+    } else {
+        if instance1.GetOptions().Model != "/path/to/model1.gguf" {
+            t.Errorf("Expected model '/path/to/model1.gguf', got %q", instance1.GetOptions().Model)
+        }
+        if instance1.GetOptions().Port != 8080 {
+            t.Errorf("Expected port 8080, got %d", instance1.GetOptions().Port)
+        }
+    }
+
+    instance2, exists := instancesByName["instance2"]
+    if !exists {
+        t.Error("Expected instance2 to be loaded")
+    } else {
+        if instance2.GetOptions().Model != "/path/to/model2.gguf" {
+            t.Errorf("Expected model '/path/to/model2.gguf', got %q", instance2.GetOptions().Model)
+        }
+        if instance2.GetOptions().Port != 8081 {
+            t.Errorf("Expected port 8081, got %d", instance2.GetOptions().Port)
+        }
+    }
+}
+
+func TestPersistence_PortMapPopulatedFromLoadedInstances(t *testing.T) {
+    tempDir := t.TempDir()
+
+    // Create JSON file with specific port
+    instanceJSON := `{
+        "name": "test-instance",
+        "running": false,
+        "options": {
+            "model": "/path/to/model.gguf",
+            "port": 8080
+        }
+    }`
+
+    err := os.WriteFile(filepath.Join(tempDir, "test-instance.json"), []byte(instanceJSON), 0644)
+    if err != nil {
+        t.Fatalf("Failed to write test JSON file: %v", err)
+    }
+
+    // Create manager - should load instance and register port
+    config := llamactl.InstancesConfig{
+        PortRange: [2]int{8000, 9000},
+        InstancesDir: tempDir,
+        MaxInstances: 10,
+    }
+    manager := llamactl.NewInstanceManager(config)
+
+    // Try to create new instance with same port - should fail due to conflict
+    options := &llamactl.CreateInstanceOptions{
+        LlamaServerOptions: llamactl.LlamaServerOptions{
+            Model: "/path/to/model2.gguf",
+            Port: 8080, // Same port as loaded instance
+        },
+    }
+
+    _, err = manager.CreateInstance("new-instance", options)
+    if err == nil {
+        t.Error("Expected error for port conflict with loaded instance")
+    }
+    if !strings.Contains(err.Error(), "port") || !strings.Contains(err.Error(), "in use") {
+        t.Errorf("Expected port conflict error, got: %v", err)
+    }
+}
+
+func TestPersistence_CompleteInstanceDataRoundTrip(t *testing.T) {
+    tempDir := t.TempDir()
+
+    config := llamactl.InstancesConfig{
+        PortRange: [2]int{8000, 9000},
+        InstancesDir: tempDir,
+        MaxInstances: 10,
+        DefaultAutoRestart: true,
+        DefaultMaxRestarts: 3,
+        DefaultRestartDelay: 5,
+    }
+
+    // Create first manager and instance with comprehensive options
+    manager1 := llamactl.NewInstanceManager(config)
+
+    autoRestart := false
+    maxRestarts := 10
+    restartDelay := 30
+
+    originalOptions := &llamactl.CreateInstanceOptions{
+        AutoRestart: &autoRestart,
+        MaxRestarts: &maxRestarts,
+        RestartDelay: &restartDelay,
+        LlamaServerOptions: llamactl.LlamaServerOptions{
+            Model: "/path/to/model.gguf",
+            Port: 8080,
+            Host: "localhost",
+            CtxSize: 4096,
+            GPULayers: 32,
+            Temperature: 0.7,
+            TopK: 40,
+            TopP: 0.9,
+            Verbose: true,
+            FlashAttn: false,
+            Lora: []string{"adapter1.bin", "adapter2.bin"},
+            HFRepo: "microsoft/DialoGPT-medium",
+        },
+    }
+
+    originalInstance, err := manager1.CreateInstance("roundtrip-test", originalOptions)
+    if err != nil {
+        t.Fatalf("CreateInstance failed: %v", err)
+    }
+
+    // Create second manager (simulating restart) - should load the instance
+    manager2 := llamactl.NewInstanceManager(config)
+
+    loadedInstance, err := manager2.GetInstance("roundtrip-test")
+    if err != nil {
+        t.Fatalf("GetInstance failed after reload: %v", err)
+    }
+
+    // Compare all data
+    if loadedInstance.Name != originalInstance.Name {
+        t.Errorf("Name mismatch: original=%q, loaded=%q", originalInstance.Name, loadedInstance.Name)
+    }
+
+    originalOpts := originalInstance.GetOptions()
+    loadedOpts := loadedInstance.GetOptions()
+
+    // Compare restart options
+    if *loadedOpts.AutoRestart != *originalOpts.AutoRestart {
+        t.Errorf("AutoRestart mismatch: original=%v, loaded=%v", *originalOpts.AutoRestart, *loadedOpts.AutoRestart)
+    }
+    if *loadedOpts.MaxRestarts != *originalOpts.MaxRestarts {
+        t.Errorf("MaxRestarts mismatch: original=%v, loaded=%v", *originalOpts.MaxRestarts, *loadedOpts.MaxRestarts)
+    }
+    if *loadedOpts.RestartDelay != *originalOpts.RestartDelay {
+        t.Errorf("RestartDelay mismatch: original=%v, loaded=%v", *originalOpts.RestartDelay, *loadedOpts.RestartDelay)
+    }
+
+    // Compare llama server options
+    if loadedOpts.Model != originalOpts.Model {
+        t.Errorf("Model mismatch: original=%q, loaded=%q", originalOpts.Model, loadedOpts.Model)
+    }
+    if loadedOpts.Port != originalOpts.Port {
+        t.Errorf("Port mismatch: original=%d, loaded=%d", originalOpts.Port, loadedOpts.Port)
+    }
+    if loadedOpts.Host != originalOpts.Host {
+        t.Errorf("Host mismatch: original=%q, loaded=%q", originalOpts.Host, loadedOpts.Host)
+    }
+    if loadedOpts.CtxSize != originalOpts.CtxSize {
+        t.Errorf("CtxSize mismatch: original=%d, loaded=%d", originalOpts.CtxSize, loadedOpts.CtxSize)
+    }
+    if loadedOpts.GPULayers != originalOpts.GPULayers {
+        t.Errorf("GPULayers mismatch: original=%d, loaded=%d", originalOpts.GPULayers, loadedOpts.GPULayers)
+    }
+    if loadedOpts.Temperature != originalOpts.Temperature {
+        t.Errorf("Temperature mismatch: original=%f, loaded=%f", originalOpts.Temperature, loadedOpts.Temperature)
+    }
+    if loadedOpts.TopK != originalOpts.TopK {
+        t.Errorf("TopK mismatch: original=%d, loaded=%d", originalOpts.TopK, loadedOpts.TopK)
+    }
+    if loadedOpts.TopP != originalOpts.TopP {
+        t.Errorf("TopP mismatch: original=%f, loaded=%f", originalOpts.TopP, loadedOpts.TopP)
+    }
+    if loadedOpts.Verbose != originalOpts.Verbose {
+        t.Errorf("Verbose mismatch: original=%v, loaded=%v", originalOpts.Verbose, loadedOpts.Verbose)
+    }
+    if loadedOpts.FlashAttn != originalOpts.FlashAttn {
+        t.Errorf("FlashAttn mismatch: original=%v, loaded=%v", originalOpts.FlashAttn, loadedOpts.FlashAttn)
+    }
+    if loadedOpts.HFRepo != originalOpts.HFRepo {
+        t.Errorf("HFRepo mismatch: original=%q, loaded=%q", originalOpts.HFRepo, loadedOpts.HFRepo)
+    }
+
+    // Compare slice fields
+    if !reflect.DeepEqual(loadedOpts.Lora, originalOpts.Lora) {
+        t.Errorf("Lora mismatch: original=%v, loaded=%v", originalOpts.Lora, loadedOpts.Lora)
+    }
+
+    // Verify created timestamp is preserved
+    if loadedInstance.Created != originalInstance.Created {
+        t.Errorf("Created timestamp mismatch: original=%d, loaded=%d", originalInstance.Created, loadedInstance.Created)
+    }
+}
+
 // Helper function to create a test manager with standard config
 func createTestManager() llamactl.InstanceManager {
     config := llamactl.InstancesConfig{
         PortRange: [2]int{8000, 9000},
-        LogDirectory: "/tmp/test",
+        LogsDir: "/tmp/test",
         MaxInstances: 10,
         LlamaExecutable: "llama-server",
         DefaultAutoRestart: true,