From bd31c03f4a44794f5a61307f363fcf3984d3eec5 Mon Sep 17 00:00:00 2001 From: LordMathis Date: Sun, 31 Aug 2025 14:27:00 +0200 Subject: [PATCH 01/13] Create initial documentation structure --- .github/workflows/docs.yml | 65 +++ CONTRIBUTING.md | 44 ++ docs-requirements.txt | 4 + docs/advanced/backends.md | 316 +++++++++++++++ docs/advanced/monitoring.md | 420 +++++++++++++++++++ docs/advanced/troubleshooting.md | 560 ++++++++++++++++++++++++++ docs/development/building.md | 464 +++++++++++++++++++++ docs/development/contributing.md | 373 +++++++++++++++++ docs/getting-started/configuration.md | 154 +++++++ docs/getting-started/installation.md | 55 +++ docs/getting-started/quick-start.md | 86 ++++ docs/index.md | 41 ++ docs/user-guide/api-reference.md | 470 +++++++++++++++++++++ docs/user-guide/managing-instances.md | 171 ++++++++ docs/user-guide/web-ui.md | 216 ++++++++++ mkdocs.yml | 75 ++++ 16 files changed, 3514 insertions(+) create mode 100644 .github/workflows/docs.yml create mode 100644 docs-requirements.txt create mode 100644 docs/advanced/backends.md create mode 100644 docs/advanced/monitoring.md create mode 100644 docs/advanced/troubleshooting.md create mode 100644 docs/development/building.md create mode 100644 docs/development/contributing.md create mode 100644 docs/getting-started/configuration.md create mode 100644 docs/getting-started/installation.md create mode 100644 docs/getting-started/quick-start.md create mode 100644 docs/index.md create mode 100644 docs/user-guide/api-reference.md create mode 100644 docs/user-guide/managing-instances.md create mode 100644 docs/user-guide/web-ui.md create mode 100644 mkdocs.yml diff --git a/.github/workflows/docs.yml b/.github/workflows/docs.yml new file mode 100644 index 0000000..8df96b6 --- /dev/null +++ b/.github/workflows/docs.yml @@ -0,0 +1,65 @@ +name: Build and Deploy Documentation + +on: + push: + branches: [ main ] + paths: + - 'docs/**' + - 'mkdocs.yml' + - 'docs-requirements.txt' + - '.github/workflows/docs.yml' + pull_request: + branches: [ main ] + paths: + - 'docs/**' + - 'mkdocs.yml' + - 'docs-requirements.txt' + +permissions: + contents: read + pages: write + id-token: write + +concurrency: + group: "pages" + cancel-in-progress: false + +jobs: + build: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + with: + fetch-depth: 0 # Needed for git-revision-date-localized plugin + + - name: Setup Python + uses: actions/setup-python@v4 + with: + python-version: '3.11' + + - name: Install dependencies + run: | + pip install -r docs-requirements.txt + + - name: Build documentation + run: | + mkdocs build --strict + + - name: Upload documentation artifact + if: github.ref == 'refs/heads/main' + uses: actions/upload-pages-artifact@v3 + with: + path: ./site + + deploy: + if: github.ref == 'refs/heads/main' + environment: + name: github-pages + url: ${{ steps.deployment.outputs.page_url }} + runs-on: ubuntu-latest + needs: build + steps: + - name: Deploy to GitHub Pages + id: deployment + uses: actions/deploy-pages@v4 diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 78c8613..1f4a50e 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -129,6 +129,50 @@ Use this format for pull request titles: - Use meaningful component and variable names - Prefer functional components over class components +## Documentation Development + +This project uses MkDocs for documentation. 
When working on documentation: + +### Setup Documentation Environment + +```bash +# Install documentation dependencies +pip install -r docs-requirements.txt +``` + +### Development Workflow + +```bash +# Serve documentation locally for development +mkdocs serve +``` +The documentation will be available at http://localhost:8000 + +```bash +# Build static documentation site +mkdocs build +``` +The built site will be in the `site/` directory. + +### Documentation Structure + +- `docs/` - Documentation content (Markdown files) +- `mkdocs.yml` - MkDocs configuration +- `docs-requirements.txt` - Python dependencies for documentation + +### Adding New Documentation + +When adding new documentation: + +1. Create Markdown files in the appropriate `docs/` subdirectory +2. Update the navigation in `mkdocs.yml` +3. Test locally with `mkdocs serve` +4. Submit a pull request + +### Documentation Deployment + +Documentation is automatically built and deployed to GitHub Pages when changes are pushed to the main branch. + ## Getting Help - Check existing [issues](https://github.com/lordmathis/llamactl/issues) diff --git a/docs-requirements.txt b/docs-requirements.txt new file mode 100644 index 0000000..256e652 --- /dev/null +++ b/docs-requirements.txt @@ -0,0 +1,4 @@ +mkdocs-material==9.5.3 +mkdocs==1.5.3 +pymdown-extensions==10.7 +mkdocs-git-revision-date-localized-plugin==1.2.4 diff --git a/docs/advanced/backends.md b/docs/advanced/backends.md new file mode 100644 index 0000000..e2542ea --- /dev/null +++ b/docs/advanced/backends.md @@ -0,0 +1,316 @@ +# Backends + +LlamaCtl supports multiple backends for running large language models. This guide covers the available backends and their configuration. + +## Llama.cpp Backend + +The primary backend for LlamaCtl, providing robust support for GGUF models. 
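+
+Under the hood, llamactl does not run inference itself; it launches and supervises `llama-server` processes. As a rough sketch (the exact flags llamactl passes are an implementation detail, and the model path and values here are placeholders), an instance configured with `threads: 4`, `context_size: 2048`, and `gpu_layers: 35` corresponds to a command along these lines:
+
+```bash
+# Illustrative only: a llama-server invocation roughly equivalent to the
+# instance options described in this section (paths and values are examples)
+llama-server \
+  --model /models/llama-2-7b.gguf \
+  --port 8081 \
+  --threads 4 \
+  --ctx-size 2048 \
+  --n-gpu-layers 35
+```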
+ +### Features + +- **GGUF Support**: Native support for GGUF model format +- **GPU Acceleration**: CUDA, OpenCL, and Metal support +- **Memory Optimization**: Efficient memory usage and mapping +- **Multi-threading**: Configurable CPU thread utilization +- **Quantization**: Support for various quantization levels + +### Configuration + +```yaml +backends: + llamacpp: + binary_path: "/usr/local/bin/llama-server" + default_options: + threads: 4 + context_size: 2048 + batch_size: 512 + gpu: + enabled: true + layers: 35 +``` + +### Supported Options + +| Option | Description | Default | +|--------|-------------|---------| +| `threads` | Number of CPU threads | 4 | +| `context_size` | Context window size | 2048 | +| `batch_size` | Batch size for processing | 512 | +| `gpu_layers` | Layers to offload to GPU | 0 | +| `memory_lock` | Lock model in memory | false | +| `no_mmap` | Disable memory mapping | false | +| `rope_freq_base` | RoPE frequency base | 10000 | +| `rope_freq_scale` | RoPE frequency scale | 1.0 | + +### GPU Acceleration + +#### CUDA Setup + +```bash +# Install CUDA toolkit +sudo apt update +sudo apt install nvidia-cuda-toolkit + +# Verify CUDA installation +nvcc --version +nvidia-smi +``` + +#### Configuration for GPU + +```json +{ + "name": "gpu-accelerated", + "model_path": "/models/llama-2-13b.gguf", + "port": 8081, + "options": { + "gpu_layers": 35, + "threads": 2, + "context_size": 4096 + } +} +``` + +### Performance Tuning + +#### Memory Optimization + +```yaml +# For limited memory systems +options: + context_size: 1024 + batch_size: 256 + no_mmap: true + memory_lock: false + +# For high-memory systems +options: + context_size: 8192 + batch_size: 1024 + memory_lock: true + no_mmap: false +``` + +#### CPU Optimization + +```yaml +# Match thread count to CPU cores +# For 8-core CPU: +options: + threads: 6 # Leave 2 cores for system + +# For high-performance CPUs: +options: + threads: 16 + batch_size: 1024 +``` + +## Future Backends + +LlamaCtl is designed to support multiple backends. 
Planned additions: + +### vLLM Backend + +High-performance inference engine optimized for serving: + +- **Features**: Fast inference, batching, streaming +- **Models**: Supports various model formats +- **Scaling**: Horizontal scaling support + +### TensorRT-LLM Backend + +NVIDIA's optimized inference engine: + +- **Features**: Maximum GPU performance +- **Models**: Optimized for NVIDIA GPUs +- **Deployment**: Production-ready inference + +### Ollama Backend + +Integration with Ollama for easy model management: + +- **Features**: Simplified model downloading +- **Models**: Large model library +- **Integration**: Seamless model switching + +## Backend Selection + +### Automatic Detection + +LlamaCtl can automatically detect the best backend: + +```yaml +backends: + auto_detect: true + preference_order: + - "llamacpp" + - "vllm" + - "tensorrt" +``` + +### Manual Selection + +Force a specific backend for an instance: + +```json +{ + "name": "manual-backend", + "backend": "llamacpp", + "model_path": "/models/model.gguf", + "port": 8081 +} +``` + +## Backend-Specific Features + +### Llama.cpp Features + +#### Model Formats + +- **GGUF**: Primary format, best compatibility +- **GGML**: Legacy format (limited support) + +#### Quantization Levels + +- `Q2_K`: Smallest size, lower quality +- `Q4_K_M`: Balanced size and quality +- `Q5_K_M`: Higher quality, larger size +- `Q6_K`: Near-original quality +- `Q8_0`: Minimal loss, largest size + +#### Advanced Options + +```yaml +advanced: + rope_scaling: + type: "linear" + factor: 2.0 + attention: + flash_attention: true + grouped_query: true +``` + +## Monitoring Backend Performance + +### Metrics Collection + +Monitor backend-specific metrics: + +```bash +# Get backend statistics +curl http://localhost:8080/api/instances/my-instance/backend/stats +``` + +**Response:** +```json +{ + "backend": "llamacpp", + "version": "b1234", + "metrics": { + "tokens_per_second": 15.2, + "memory_usage": 4294967296, + "gpu_utilization": 85.5, + "context_usage": 75.0 + } +} +``` + +### Performance Optimization + +#### Benchmark Different Configurations + +```bash +# Test various thread counts +for threads in 2 4 8 16; do + echo "Testing $threads threads" + curl -X PUT http://localhost:8080/api/instances/benchmark \ + -d "{\"options\": {\"threads\": $threads}}" + # Run performance test +done +``` + +#### Memory Usage Optimization + +```bash +# Monitor memory usage +watch -n 1 'curl -s http://localhost:8080/api/instances/my-instance/stats | jq .memory_usage' +``` + +## Troubleshooting Backends + +### Common Llama.cpp Issues + +**Model won't load:** +```bash +# Check model file +file /path/to/model.gguf + +# Verify format +llama-server --model /path/to/model.gguf --dry-run +``` + +**GPU not detected:** +```bash +# Check CUDA installation +nvidia-smi + +# Verify llama.cpp GPU support +llama-server --help | grep -i gpu +``` + +**Performance issues:** +```bash +# Check system resources +htop +nvidia-smi + +# Verify configuration +curl http://localhost:8080/api/instances/my-instance/config +``` + +## Custom Backend Development + +### Backend Interface + +Implement the backend interface for custom backends: + +```go +type Backend interface { + Start(config InstanceConfig) error + Stop(instance *Instance) error + Health(instance *Instance) (*HealthStatus, error) + Stats(instance *Instance) (*Stats, error) +} +``` + +### Registration + +Register your custom backend: + +```go +func init() { + backends.Register("custom", &CustomBackend{}) +} +``` + +## Best Practices + +### 
Production Deployments + +1. **Resource allocation**: Plan for peak usage +2. **Backend selection**: Choose based on requirements +3. **Monitoring**: Set up comprehensive monitoring +4. **Fallback**: Configure backup backends + +### Development + +1. **Rapid iteration**: Use smaller models +2. **Resource monitoring**: Track usage patterns +3. **Configuration testing**: Validate settings +4. **Performance profiling**: Optimize bottlenecks + +## Next Steps + +- Learn about [Monitoring](monitoring.md) backend performance +- Explore [Troubleshooting](troubleshooting.md) guides +- Set up [Production Monitoring](monitoring.md) diff --git a/docs/advanced/monitoring.md b/docs/advanced/monitoring.md new file mode 100644 index 0000000..7c3c76e --- /dev/null +++ b/docs/advanced/monitoring.md @@ -0,0 +1,420 @@ +# Monitoring + +Comprehensive monitoring setup for LlamaCtl in production environments. + +## Overview + +Effective monitoring of LlamaCtl involves tracking: + +- Instance health and performance +- System resource usage +- API response times +- Error rates and alerts + +## Built-in Monitoring + +### Health Checks + +LlamaCtl provides built-in health monitoring: + +```bash +# Check overall system health +curl http://localhost:8080/api/system/health + +# Check specific instance health +curl http://localhost:8080/api/instances/{name}/health +``` + +### Metrics Endpoint + +Access Prometheus-compatible metrics: + +```bash +curl http://localhost:8080/metrics +``` + +**Available Metrics:** +- `llamactl_instances_total`: Total number of instances +- `llamactl_instances_running`: Number of running instances +- `llamactl_instance_memory_bytes`: Instance memory usage +- `llamactl_instance_cpu_percent`: Instance CPU usage +- `llamactl_api_requests_total`: Total API requests +- `llamactl_api_request_duration_seconds`: API response times + +## Prometheus Integration + +### Configuration + +Add LlamaCtl as a Prometheus target: + +```yaml +# prometheus.yml +scrape_configs: + - job_name: 'llamactl' + static_configs: + - targets: ['localhost:8080'] + metrics_path: '/metrics' + scrape_interval: 15s +``` + +### Custom Metrics + +Enable additional metrics in LlamaCtl: + +```yaml +# config.yaml +monitoring: + enabled: true + prometheus: + enabled: true + path: "/metrics" + metrics: + - instance_stats + - api_performance + - system_resources +``` + +## Grafana Dashboards + +### LlamaCtl Dashboard + +Import the official Grafana dashboard: + +1. Download dashboard JSON from releases +2. Import into Grafana +3. 
Configure Prometheus data source + +### Key Panels + +**Instance Overview:** +- Instance count and status +- Resource usage per instance +- Health status indicators + +**Performance Metrics:** +- API response times +- Tokens per second +- Memory usage trends + +**System Resources:** +- CPU and memory utilization +- Disk I/O and network usage +- GPU utilization (if applicable) + +### Custom Queries + +**Instance Uptime:** +```promql +(time() - llamactl_instance_start_time_seconds) / 3600 +``` + +**Memory Usage Percentage:** +```promql +(llamactl_instance_memory_bytes / llamactl_system_memory_total_bytes) * 100 +``` + +**API Error Rate:** +```promql +rate(llamactl_api_requests_total{status=~"4.."}[5m]) / rate(llamactl_api_requests_total[5m]) * 100 +``` + +## Alerting + +### Prometheus Alerts + +Configure alerts for critical conditions: + +```yaml +# alerts.yml +groups: + - name: llamactl + rules: + - alert: InstanceDown + expr: llamactl_instance_up == 0 + for: 1m + labels: + severity: critical + annotations: + summary: "LlamaCtl instance {{ $labels.instance_name }} is down" + + - alert: HighMemoryUsage + expr: llamactl_instance_memory_percent > 90 + for: 5m + labels: + severity: warning + annotations: + summary: "High memory usage on {{ $labels.instance_name }}" + + - alert: APIHighLatency + expr: histogram_quantile(0.95, rate(llamactl_api_request_duration_seconds_bucket[5m])) > 2 + for: 2m + labels: + severity: warning + annotations: + summary: "High API latency detected" +``` + +### Notification Channels + +Configure alert notifications: + +**Slack Integration:** +```yaml +# alertmanager.yml +route: + group_by: ['alertname'] + receiver: 'slack' + +receivers: + - name: 'slack' + slack_configs: + - api_url: 'https://hooks.slack.com/services/...' + channel: '#alerts' + title: 'LlamaCtl Alert' + text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}' +``` + +## Log Management + +### Centralized Logging + +Configure log aggregation: + +```yaml +# config.yaml +logging: + level: "info" + output: "json" + destinations: + - type: "file" + path: "/var/log/llamactl/app.log" + - type: "syslog" + facility: "local0" + - type: "elasticsearch" + url: "http://elasticsearch:9200" +``` + +### Log Analysis + +Use ELK stack for log analysis: + +**Elasticsearch Index Template:** +```json +{ + "index_patterns": ["llamactl-*"], + "mappings": { + "properties": { + "timestamp": {"type": "date"}, + "level": {"type": "keyword"}, + "message": {"type": "text"}, + "instance": {"type": "keyword"}, + "component": {"type": "keyword"} + } + } +} +``` + +**Kibana Visualizations:** +- Log volume over time +- Error rate by instance +- Performance trends +- Resource usage patterns + +## Application Performance Monitoring + +### OpenTelemetry Integration + +Enable distributed tracing: + +```yaml +# config.yaml +telemetry: + enabled: true + otlp: + endpoint: "http://jaeger:14268/api/traces" + sampling_rate: 0.1 +``` + +### Custom Spans + +Add custom tracing to track operations: + +```go +ctx, span := tracer.Start(ctx, "instance.start") +defer span.End() + +// Track instance startup time +span.SetAttributes( + attribute.String("instance.name", name), + attribute.String("model.path", modelPath), +) +``` + +## Health Check Configuration + +### Readiness Probes + +Configure Kubernetes readiness probes: + +```yaml +readinessProbe: + httpGet: + path: /api/health + port: 8080 + initialDelaySeconds: 30 + periodSeconds: 10 +``` + +### Liveness Probes + +Configure liveness probes: + +```yaml +livenessProbe: + httpGet: + path: 
/api/health/live + port: 8080 + initialDelaySeconds: 60 + periodSeconds: 30 +``` + +### Custom Health Checks + +Implement custom health checks: + +```go +func (h *HealthHandler) CustomCheck(ctx context.Context) error { + // Check database connectivity + if err := h.db.Ping(); err != nil { + return fmt.Errorf("database unreachable: %w", err) + } + + // Check instance responsiveness + for _, instance := range h.instances { + if !instance.IsHealthy() { + return fmt.Errorf("instance %s unhealthy", instance.Name) + } + } + + return nil +} +``` + +## Performance Profiling + +### pprof Integration + +Enable Go profiling: + +```yaml +# config.yaml +debug: + pprof_enabled: true + pprof_port: 6060 +``` + +Access profiling endpoints: +```bash +# CPU profile +go tool pprof http://localhost:6060/debug/pprof/profile + +# Memory profile +go tool pprof http://localhost:6060/debug/pprof/heap + +# Goroutine profile +go tool pprof http://localhost:6060/debug/pprof/goroutine +``` + +### Continuous Profiling + +Set up continuous profiling with Pyroscope: + +```yaml +# config.yaml +profiling: + enabled: true + pyroscope: + server_address: "http://pyroscope:4040" + application_name: "llamactl" +``` + +## Security Monitoring + +### Audit Logging + +Enable security audit logs: + +```yaml +# config.yaml +audit: + enabled: true + log_file: "/var/log/llamactl/audit.log" + events: + - "auth.login" + - "auth.logout" + - "instance.create" + - "instance.delete" + - "config.update" +``` + +### Rate Limiting Monitoring + +Track rate limiting metrics: + +```bash +# Monitor rate limit hits +curl http://localhost:8080/metrics | grep rate_limit +``` + +## Troubleshooting Monitoring + +### Common Issues + +**Metrics not appearing:** +1. Check Prometheus configuration +2. Verify network connectivity +3. Review LlamaCtl logs for errors + +**High memory usage:** +1. Check for memory leaks in profiles +2. Monitor garbage collection metrics +3. Review instance configurations + +**Alert fatigue:** +1. Tune alert thresholds +2. Implement alert severity levels +3. Use alert routing and suppression + +### Debug Tools + +**Monitoring health:** +```bash +# Check monitoring endpoints +curl -v http://localhost:8080/metrics +curl -v http://localhost:8080/api/health + +# Review logs +tail -f /var/log/llamactl/app.log +``` + +## Best Practices + +### Production Monitoring + +1. **Comprehensive coverage**: Monitor all critical components +2. **Appropriate alerting**: Balance sensitivity and noise +3. **Regular review**: Analyze trends and patterns +4. **Documentation**: Maintain runbooks for alerts + +### Performance Optimization + +1. **Baseline establishment**: Know normal operating parameters +2. **Trend analysis**: Identify performance degradation early +3. **Capacity planning**: Monitor resource growth trends +4. **Optimization cycles**: Regular performance tuning + +## Next Steps + +- Set up [Troubleshooting](troubleshooting.md) procedures +- Learn about [Backend optimization](backends.md) +- Configure [Production deployment](../development/building.md) diff --git a/docs/advanced/troubleshooting.md b/docs/advanced/troubleshooting.md new file mode 100644 index 0000000..58b85a7 --- /dev/null +++ b/docs/advanced/troubleshooting.md @@ -0,0 +1,560 @@ +# Troubleshooting + +Common issues and solutions for LlamaCtl deployment and operation. + +## Installation Issues + +### Binary Not Found + +**Problem:** `llamactl: command not found` + +**Solutions:** +1. Verify the binary is in your PATH: + ```bash + echo $PATH + which llamactl + ``` + +2. 
Add to PATH or use full path:
+   ```bash
+   export PATH=$PATH:/path/to/llamactl
+   # or
+   /full/path/to/llamactl
+   ```
+
+3. Check binary permissions:
+   ```bash
+   chmod +x llamactl
+   ```
+
+### Permission Denied
+
+**Problem:** Permission errors when starting LlamaCtl
+
+**Solutions:**
+1. Check file permissions:
+   ```bash
+   ls -la llamactl
+   chmod +x llamactl
+   ```
+
+2. Verify directory permissions:
+   ```bash
+   # Check models directory
+   ls -la /path/to/models/
+
+   # Check logs directory
+   sudo mkdir -p /var/log/llamactl
+   sudo chown $USER:$USER /var/log/llamactl
+   ```
+
+3. Run with appropriate user:
+   ```bash
+   # Don't run as root unless necessary
+   sudo -u llamactl ./llamactl
+   ```
+
+## Startup Issues
+
+### Port Already in Use
+
+**Problem:** `bind: address already in use`
+
+**Solutions:**
+1. Find the process using the port:
+   ```bash
+   sudo netstat -tulpn | grep :8080
+   # or
+   sudo lsof -i :8080
+   ```
+
+2. Kill the conflicting process:
+   ```bash
+   sudo kill -9 <PID>
+   ```
+
+3. Use a different port:
+   ```bash
+   llamactl --port 8081
+   ```
+
+### Configuration Errors
+
+**Problem:** Invalid configuration preventing startup
+
+**Solutions:**
+1. Validate configuration file:
+   ```bash
+   llamactl --config /path/to/config.yaml --validate
+   ```
+
+2. Check YAML syntax:
+   ```bash
+   yamllint config.yaml
+   ```
+
+3. Use minimal configuration:
+   ```yaml
+   server:
+     host: "localhost"
+     port: 8080
+   ```
+
+## Instance Management Issues
+
+### Model Loading Failures
+
+**Problem:** Instance fails to start with model loading errors
+
+**Diagnostic Steps:**
+1. Check that the model file exists:
+   ```bash
+   ls -la /path/to/model.gguf
+   file /path/to/model.gguf
+   ```
+
+2. Verify model format:
+   ```bash
+   # Check if it's a valid GGUF file
+   hexdump -C /path/to/model.gguf | head -5
+   ```
+
+3. Test with llama.cpp directly:
+   ```bash
+   llama-server --model /path/to/model.gguf --port 8081
+   ```
+
+**Common Solutions:**
+- **Corrupted model:** Re-download the model file
+- **Wrong format:** Ensure the model is in GGUF format
+- **Insufficient memory:** Reduce the context size or use a smaller model
+- **Path issues:** Use absolute paths and check file permissions
+
+### Memory Issues
+
+**Problem:** Out of memory errors or system becomes unresponsive
+
+**Diagnostic Steps:**
+1. Check system memory:
+   ```bash
+   free -h
+   cat /proc/meminfo
+   ```
+
+2. Monitor memory usage:
+   ```bash
+   top -p $(pgrep llamactl)
+   ```
+
+3. Check instance memory requirements:
+   ```bash
+   curl http://localhost:8080/api/instances/{name}/stats
+   ```
+
+**Solutions:**
+1. **Reduce context size:**
+   ```json
+   {
+     "options": {
+       "context_size": 1024
+     }
+   }
+   ```
+
+2. **Enable memory mapping:**
+   ```json
+   {
+     "options": {
+       "no_mmap": false
+     }
+   }
+   ```
+
+3. **Use quantized models:**
+   - Try Q4_K_M instead of higher precision models
+   - Use smaller model variants (7B instead of 13B)
+
+### GPU Issues
+
+**Problem:** GPU not detected or not being used
+
+**Diagnostic Steps:**
+1. Check GPU availability:
+   ```bash
+   nvidia-smi
+   ```
+
+2. Verify CUDA installation:
+   ```bash
+   nvcc --version
+   ```
+
+3. Check llama.cpp GPU support:
+   ```bash
+   llama-server --help | grep -i gpu
+   ```
+
+**Solutions:**
+1. **Install CUDA drivers:**
+   ```bash
+   sudo apt update
+   sudo apt install nvidia-driver-470 nvidia-cuda-toolkit
+   ```
+
+2. **Rebuild llama.cpp with GPU support:**
+   ```bash
+   cmake -DGGML_CUDA=ON ..   # older llama.cpp releases used -DLLAMA_CUBLAS=ON
+   make
+   ```
+
+3. 
**Configure GPU layers:** + ```json + { + "options": { + "gpu_layers": 35 + } + } + ``` + +## Performance Issues + +### Slow Response Times + +**Problem:** API responses are slow or timeouts occur + +**Diagnostic Steps:** +1. Check API response times: + ```bash + time curl http://localhost:8080/api/instances + ``` + +2. Monitor system resources: + ```bash + htop + iotop + ``` + +3. Check instance logs: + ```bash + curl http://localhost:8080/api/instances/{name}/logs + ``` + +**Solutions:** +1. **Optimize thread count:** + ```json + { + "options": { + "threads": 6 + } + } + ``` + +2. **Adjust batch size:** + ```json + { + "options": { + "batch_size": 512 + } + } + ``` + +3. **Enable GPU acceleration:** + ```json + { + "options": { + "gpu_layers": 35 + } + } + ``` + +### High CPU Usage + +**Problem:** LlamaCtl consuming excessive CPU + +**Diagnostic Steps:** +1. Identify CPU-intensive processes: + ```bash + top -p $(pgrep -f llamactl) + ``` + +2. Check thread allocation: + ```bash + curl http://localhost:8080/api/instances/{name}/config + ``` + +**Solutions:** +1. **Reduce thread count:** + ```json + { + "options": { + "threads": 4 + } + } + ``` + +2. **Limit concurrent instances:** + ```yaml + limits: + max_instances: 3 + ``` + +## Network Issues + +### Connection Refused + +**Problem:** Cannot connect to LlamaCtl web interface + +**Diagnostic Steps:** +1. Check if service is running: + ```bash + ps aux | grep llamactl + ``` + +2. Verify port binding: + ```bash + netstat -tulpn | grep :8080 + ``` + +3. Test local connectivity: + ```bash + curl http://localhost:8080/api/health + ``` + +**Solutions:** +1. **Check firewall settings:** + ```bash + sudo ufw status + sudo ufw allow 8080 + ``` + +2. **Bind to correct interface:** + ```yaml + server: + host: "0.0.0.0" # Instead of "localhost" + port: 8080 + ``` + +### CORS Errors + +**Problem:** Web UI shows CORS errors in browser console + +**Solutions:** +1. **Enable CORS in configuration:** + ```yaml + server: + cors_enabled: true + cors_origins: + - "http://localhost:3000" + - "https://yourdomain.com" + ``` + +2. **Use reverse proxy:** + ```nginx + server { + listen 80; + location / { + proxy_pass http://localhost:8080; + proxy_set_header Host $host; + proxy_set_header X-Real-IP $remote_addr; + } + } + ``` + +## Database Issues + +### Startup Database Errors + +**Problem:** Database connection failures on startup + +**Diagnostic Steps:** +1. Check database service: + ```bash + systemctl status postgresql + # or + systemctl status mysql + ``` + +2. Test database connectivity: + ```bash + psql -h localhost -U llamactl -d llamactl + ``` + +**Solutions:** +1. **Start database service:** + ```bash + sudo systemctl start postgresql + sudo systemctl enable postgresql + ``` + +2. **Create database and user:** + ```sql + CREATE DATABASE llamactl; + CREATE USER llamactl WITH PASSWORD 'password'; + GRANT ALL PRIVILEGES ON DATABASE llamactl TO llamactl; + ``` + +## Web UI Issues + +### Blank Page or Loading Issues + +**Problem:** Web UI doesn't load or shows blank page + +**Diagnostic Steps:** +1. Check browser console for errors (F12) +2. Verify API connectivity: + ```bash + curl http://localhost:8080/api/system/status + ``` + +3. Check static file serving: + ```bash + curl http://localhost:8080/ + ``` + +**Solutions:** +1. **Clear browser cache** +2. **Try different browser** +3. **Check for JavaScript errors in console** +4. 
**Verify API endpoint accessibility** + +### Authentication Issues + +**Problem:** Unable to login or authentication failures + +**Diagnostic Steps:** +1. Check authentication configuration: + ```bash + curl http://localhost:8080/api/config | jq .auth + ``` + +2. Verify user credentials: + ```bash + # Test login endpoint + curl -X POST http://localhost:8080/api/auth/login \ + -H "Content-Type: application/json" \ + -d '{"username":"admin","password":"password"}' + ``` + +**Solutions:** +1. **Reset admin password:** + ```bash + llamactl --reset-admin-password + ``` + +2. **Disable authentication temporarily:** + ```yaml + auth: + enabled: false + ``` + +## Log Analysis + +### Enable Debug Logging + +For detailed troubleshooting, enable debug logging: + +```yaml +logging: + level: "debug" + output: "/var/log/llamactl/debug.log" +``` + +### Key Log Patterns + +Look for these patterns in logs: + +**Startup issues:** +``` +ERRO Failed to start server +ERRO Database connection failed +ERRO Port binding failed +``` + +**Instance issues:** +``` +ERRO Failed to start instance +ERRO Model loading failed +ERRO Process crashed +``` + +**Performance issues:** +``` +WARN High memory usage detected +WARN Request timeout +WARN Resource limit exceeded +``` + +## Getting Help + +### Collecting Information + +When seeking help, provide: + +1. **System information:** + ```bash + uname -a + llamactl --version + ``` + +2. **Configuration:** + ```bash + llamactl --config-dump + ``` + +3. **Logs:** + ```bash + tail -100 /var/log/llamactl/app.log + ``` + +4. **Error details:** + - Exact error messages + - Steps to reproduce + - Environment details + +### Support Channels + +- **GitHub Issues:** Report bugs and feature requests +- **Documentation:** Check this documentation first +- **Community:** Join discussions in GitHub Discussions + +## Preventive Measures + +### Health Monitoring + +Set up monitoring to catch issues early: + +```bash +# Regular health checks +*/5 * * * * curl -f http://localhost:8080/api/health || alert +``` + +### Resource Monitoring + +Monitor system resources: + +```bash +# Disk space monitoring +df -h /var/log/llamactl/ +df -h /path/to/models/ + +# Memory monitoring +free -h +``` + +### Backup Configuration + +Regular configuration backups: + +```bash +# Backup configuration +cp ~/.llamactl/config.yaml ~/.llamactl/config.yaml.backup + +# Backup instance configurations +curl http://localhost:8080/api/instances > instances-backup.json +``` + +## Next Steps + +- Set up [Monitoring](monitoring.md) to prevent issues +- Learn about [Advanced Configuration](backends.md) +- Review [Best Practices](../development/contributing.md) diff --git a/docs/development/building.md b/docs/development/building.md new file mode 100644 index 0000000..6215854 --- /dev/null +++ b/docs/development/building.md @@ -0,0 +1,464 @@ +# Building from Source + +This guide covers building LlamaCtl from source code for development and production deployment. 
+ +## Prerequisites + +### Required Tools + +- **Go 1.24+**: Download from [golang.org](https://golang.org/dl/) +- **Node.js 22+**: Download from [nodejs.org](https://nodejs.org/) +- **Git**: For cloning the repository +- **Make**: For build automation (optional) + +### System Requirements + +- **Memory**: 4GB+ RAM for building +- **Disk**: 2GB+ free space +- **OS**: Linux, macOS, or Windows + +## Quick Build + +### Clone and Build + +```bash +# Clone the repository +git clone https://github.com/lordmathis/llamactl.git +cd llamactl + +# Build the application +go build -o llamactl cmd/server/main.go +``` + +### Run + +```bash +./llamactl +``` + +## Development Build + +### Setup Development Environment + +```bash +# Clone repository +git clone https://github.com/lordmathis/llamactl.git +cd llamactl + +# Install Go dependencies +go mod download + +# Install frontend dependencies +cd webui +npm ci +cd .. +``` + +### Build Components + +```bash +# Build backend only +go build -o llamactl cmd/server/main.go + +# Build frontend only +cd webui +npm run build +cd .. + +# Build everything +make build +``` + +### Development Server + +```bash +# Run backend in development mode +go run cmd/server/main.go --dev + +# Run frontend dev server (separate terminal) +cd webui +npm run dev +``` + +## Production Build + +### Optimized Build + +```bash +# Build with optimizations +go build -ldflags="-s -w" -o llamactl cmd/server/main.go + +# Or use the Makefile +make build-prod +``` + +### Build Flags + +Common build flags for production: + +```bash +go build \ + -ldflags="-s -w -X main.version=1.0.0 -X main.buildTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \ + -trimpath \ + -o llamactl \ + cmd/server/main.go +``` + +**Flag explanations:** +- `-s`: Strip symbol table +- `-w`: Strip debug information +- `-X`: Set variable values at build time +- `-trimpath`: Remove absolute paths from binary + +## Cross-Platform Building + +### Build for Multiple Platforms + +```bash +# Linux AMD64 +GOOS=linux GOARCH=amd64 go build -o llamactl-linux-amd64 cmd/server/main.go + +# Linux ARM64 +GOOS=linux GOARCH=arm64 go build -o llamactl-linux-arm64 cmd/server/main.go + +# macOS AMD64 +GOOS=darwin GOARCH=amd64 go build -o llamactl-darwin-amd64 cmd/server/main.go + +# macOS ARM64 (Apple Silicon) +GOOS=darwin GOARCH=arm64 go build -o llamactl-darwin-arm64 cmd/server/main.go + +# Windows AMD64 +GOOS=windows GOARCH=amd64 go build -o llamactl-windows-amd64.exe cmd/server/main.go +``` + +### Automated Cross-Building + +Use the provided Makefile: + +```bash +# Build all platforms +make build-all + +# Build specific platform +make build-linux +make build-darwin +make build-windows +``` + +## Build with Docker + +### Development Container + +```dockerfile +# Dockerfile.dev +FROM golang:1.24-alpine AS builder + +WORKDIR /app +COPY go.mod go.sum ./ +RUN go mod download + +COPY . . +RUN go build -o llamactl cmd/server/main.go + +FROM alpine:latest +RUN apk --no-cache add ca-certificates +WORKDIR /root/ +COPY --from=builder /app/llamactl . + +EXPOSE 8080 +CMD ["./llamactl"] +``` + +```bash +# Build development image +docker build -f Dockerfile.dev -t llamactl:dev . + +# Run container +docker run -p 8080:8080 llamactl:dev +``` + +### Production Container + +```dockerfile +# Dockerfile +FROM node:22-alpine AS frontend-builder + +WORKDIR /app/webui +COPY webui/package*.json ./ +RUN npm ci + +COPY webui/ ./ +RUN npm run build + +FROM golang:1.24-alpine AS backend-builder + +WORKDIR /app +COPY go.mod go.sum ./ +RUN go mod download + +COPY . . 
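+# Overlay the frontend bundle compiled in the first stage; this replaces any
+# stale webui/dist brought in by the source copy above (assumes the Go server
+# serves or embeds webui/dist).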
+COPY --from=frontend-builder /app/webui/dist ./webui/dist + +RUN CGO_ENABLED=0 GOOS=linux go build \ + -ldflags="-s -w" \ + -o llamactl \ + cmd/server/main.go + +FROM alpine:latest + +RUN apk --no-cache add ca-certificates tzdata +RUN adduser -D -s /bin/sh llamactl + +WORKDIR /home/llamactl +COPY --from=backend-builder /app/llamactl . +RUN chown llamactl:llamactl llamactl + +USER llamactl +EXPOSE 8080 + +CMD ["./llamactl"] +``` + +## Advanced Build Options + +### Static Linking + +For deployments without external dependencies: + +```bash +CGO_ENABLED=0 go build \ + -ldflags="-s -w -extldflags '-static'" \ + -o llamactl-static \ + cmd/server/main.go +``` + +### Debug Build + +Build with debug information: + +```bash +go build -gcflags="all=-N -l" -o llamactl-debug cmd/server/main.go +``` + +### Race Detection Build + +Build with race detection (development only): + +```bash +go build -race -o llamactl-race cmd/server/main.go +``` + +## Build Automation + +### Makefile + +```makefile +# Makefile +VERSION := $(shell git describe --tags --always --dirty) +BUILD_TIME := $(shell date -u +%Y-%m-%dT%H:%M:%SZ) +LDFLAGS := -s -w -X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME) + +.PHONY: build clean test install + +build: + @echo "Building LlamaCtl..." + @cd webui && npm run build + @go build -ldflags="$(LDFLAGS)" -o llamactl cmd/server/main.go + +build-prod: + @echo "Building production binary..." + @cd webui && npm run build + @CGO_ENABLED=0 go build -ldflags="$(LDFLAGS)" -trimpath -o llamactl cmd/server/main.go + +build-all: build-linux build-darwin build-windows + +build-linux: + @GOOS=linux GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-linux-amd64 cmd/server/main.go + @GOOS=linux GOARCH=arm64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-linux-arm64 cmd/server/main.go + +build-darwin: + @GOOS=darwin GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-darwin-amd64 cmd/server/main.go + @GOOS=darwin GOARCH=arm64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-darwin-arm64 cmd/server/main.go + +build-windows: + @GOOS=windows GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-windows-amd64.exe cmd/server/main.go + +test: + @go test ./... + +clean: + @rm -f llamactl llamactl-* + @rm -rf dist/ + +install: build + @cp llamactl $(GOPATH)/bin/llamactl +``` + +### GitHub Actions + +```yaml +# .github/workflows/build.yml +name: Build + +on: + push: + branches: [ main ] + pull_request: + branches: [ main ] + +jobs: + test: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - name: Set up Go + uses: actions/setup-go@v4 + with: + go-version: '1.24' + + - name: Set up Node.js + uses: actions/setup-node@v4 + with: + node-version: '22' + + - name: Install dependencies + run: | + go mod download + cd webui && npm ci + + - name: Run tests + run: | + go test ./... 
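+          # Frontend unit tests run after the Go suite in the same step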
+ cd webui && npm test + + - name: Build + run: make build + + build: + needs: test + runs-on: ubuntu-latest + if: github.ref == 'refs/heads/main' + + steps: + - uses: actions/checkout@v4 + + - name: Set up Go + uses: actions/setup-go@v4 + with: + go-version: '1.24' + + - name: Set up Node.js + uses: actions/setup-node@v4 + with: + node-version: '22' + + - name: Build all platforms + run: make build-all + + - name: Upload artifacts + uses: actions/upload-artifact@v4 + with: + name: binaries + path: dist/ +``` + +## Build Troubleshooting + +### Common Issues + +**Go version mismatch:** +```bash +# Check Go version +go version + +# Update Go +# Download from https://golang.org/dl/ +``` + +**Node.js issues:** +```bash +# Clear npm cache +npm cache clean --force + +# Remove node_modules and reinstall +rm -rf webui/node_modules +cd webui && npm ci +``` + +**Build failures:** +```bash +# Clean and rebuild +make clean +go mod tidy +make build +``` + +### Performance Issues + +**Slow builds:** +```bash +# Use build cache +export GOCACHE=$(go env GOCACHE) + +# Parallel builds +export GOMAXPROCS=$(nproc) +``` + +**Large binary size:** +```bash +# Use UPX compression +upx --best llamactl + +# Analyze binary size +go tool nm -size llamactl | head -20 +``` + +## Deployment + +### System Service + +Create a systemd service: + +```ini +# /etc/systemd/system/llamactl.service +[Unit] +Description=LlamaCtl Server +After=network.target + +[Service] +Type=simple +User=llamactl +Group=llamactl +ExecStart=/usr/local/bin/llamactl +Restart=always +RestartSec=5 + +[Install] +WantedBy=multi-user.target +``` + +```bash +# Enable and start service +sudo systemctl enable llamactl +sudo systemctl start llamactl +``` + +### Configuration + +```bash +# Create configuration directory +sudo mkdir -p /etc/llamactl + +# Copy configuration +sudo cp config.yaml /etc/llamactl/ + +# Set permissions +sudo chown -R llamactl:llamactl /etc/llamactl +``` + +## Next Steps + +- Configure [Installation](../getting-started/installation.md) +- Set up [Configuration](../getting-started/configuration.md) +- Learn about [Contributing](contributing.md) diff --git a/docs/development/contributing.md b/docs/development/contributing.md new file mode 100644 index 0000000..c2c146f --- /dev/null +++ b/docs/development/contributing.md @@ -0,0 +1,373 @@ +# Contributing + +Thank you for your interest in contributing to LlamaCtl! This guide will help you get started with development and contribution. + +## Development Setup + +### Prerequisites + +- Go 1.24 or later +- Node.js 22 or later +- `llama-server` executable (from [llama.cpp](https://github.com/ggml-org/llama.cpp)) +- Git + +### Getting Started + +1. **Fork and Clone** + ```bash + # Fork the repository on GitHub, then clone your fork + git clone https://github.com/yourusername/llamactl.git + cd llamactl + + # Add upstream remote + git remote add upstream https://github.com/lordmathis/llamactl.git + ``` + +2. **Install Dependencies** + ```bash + # Go dependencies + go mod download + + # Frontend dependencies + cd webui && npm ci && cd .. + ``` + +3. **Run Development Environment** + ```bash + # Start backend server + go run ./cmd/server + ``` + + In a separate terminal: + ```bash + # Start frontend dev server + cd webui && npm run dev + ``` + +## Development Workflow + +### Setting Up Your Environment + +1. **Configuration** + Create a development configuration file: + ```yaml + # dev-config.yaml + server: + host: "localhost" + port: 8080 + logging: + level: "debug" + ``` + +2. 
**Test Data**
+   Set up test models and instances for development.
+
+### Making Changes
+
+1. **Create a Branch**
+   ```bash
+   git checkout -b feature/your-feature-name
+   ```
+
+2. **Development Commands**
+   ```bash
+   # Backend
+   go test ./... -v                    # Run tests
+   go test -race ./... -v              # Run with race detector
+   go fmt ./... && go vet ./...        # Format and vet code
+   go build ./cmd/server               # Build binary
+
+   # Frontend (from webui/ directory)
+   npm run test                        # Run tests
+   npm run lint                        # Lint code
+   npm run type-check                  # TypeScript check
+   npm run build                       # Build for production
+   ```
+
+3. **Code Quality**
+   ```bash
+   # Run all checks before committing
+   make lint
+   make test
+   make build
+   ```
+
+## Project Structure
+
+### Backend (Go)
+
+```
+cmd/
+├── server/          # Main application entry point
+pkg/
+├── backends/        # Model backend implementations
+├── config/          # Configuration management
+├── instance/        # Instance lifecycle management
+├── manager/         # Instance manager
+├── server/          # HTTP server and routes
+├── testutil/        # Test utilities
+└── validation/      # Input validation
+```
+
+### Frontend (React/TypeScript)
+
+```
+webui/src/
+├── components/      # React components
+├── contexts/        # React contexts
+├── hooks/           # Custom hooks
+├── lib/             # Utility libraries
+├── schemas/         # Zod schemas
+└── types/           # TypeScript types
+```
+
+## Coding Standards
+
+### Go Code
+
+- Follow standard Go formatting (`gofmt`)
+- Use `go vet` and address all warnings
+- Write comprehensive tests for new functionality
+- Include documentation comments for exported functions
+- Use meaningful variable and function names
+
+Example:
+```go
+// CreateInstance creates a new model instance with the given configuration.
+// It validates the configuration and ensures the instance name is unique.
+func (m *Manager) CreateInstance(ctx context.Context, config InstanceConfig) (*Instance, error) {
+    if err := config.Validate(); err != nil {
+        return nil, fmt.Errorf("invalid configuration: %w", err)
+    }
+
+    // Implementation...
+}
+```
+
+### TypeScript/React Code
+
+- Use TypeScript strict mode
+- Follow React best practices
+- Use functional components with hooks
+- Implement proper error boundaries
+- Write unit tests for components
+
+Example:
+```typescript
+interface InstanceCardProps {
+  instance: Instance;
+  onStart: (name: string) => Promise<void>;
+  onStop: (name: string) => Promise<void>;
+}
+
+export const InstanceCard: React.FC<InstanceCardProps> = ({
+  instance,
+  onStart,
+  onStop,
+}) => {
+  // Implementation...
+};
+```
+
+## Testing
+
+### Backend Tests
+
+```bash
+# Run all tests
+go test ./...
+
+# Run tests with coverage
+go test ./... -coverprofile=coverage.out
+go tool cover -html=coverage.out
+
+# Run specific package tests
+go test ./pkg/manager -v
+
+# Run with race detection
+go test -race ./...
+```
+
+### Frontend Tests
+
+```bash
+cd webui
+
+# Run unit tests
+npm run test
+
+# Run tests with coverage
+npm run test:coverage
+
+# Run E2E tests
+npm run test:e2e
+```
+
+### Integration Tests
+
+```bash
+# Run integration tests (requires llama-server)
+go test ./... -tags=integration
+```
+
+## Pull Request Process
+
+### Before Submitting
+
+1. **Update your branch**
+   ```bash
+   git fetch upstream
+   git rebase upstream/main
+   ```
+
+2. **Run all tests**
+   ```bash
+   make test-all
+   ```
+
+3. **Update documentation** if needed
+
+4. 
**Write clear commit messages** + ``` + feat: add instance health monitoring + + - Implement health check endpoint + - Add periodic health monitoring + - Update API documentation + + Fixes #123 + ``` + +### Submitting a PR + +1. **Push your branch** + ```bash + git push origin feature/your-feature-name + ``` + +2. **Create Pull Request** + - Use the PR template + - Provide clear description + - Link related issues + - Add screenshots for UI changes + +3. **PR Review Process** + - Automated checks must pass + - Code review by maintainers + - Address feedback promptly + - Keep PR scope focused + +## Issue Guidelines + +### Reporting Bugs + +Use the bug report template and include: + +- Steps to reproduce +- Expected vs actual behavior +- Environment details (OS, Go version, etc.) +- Relevant logs or error messages +- Minimal reproduction case + +### Feature Requests + +Use the feature request template and include: + +- Clear description of the problem +- Proposed solution +- Alternative solutions considered +- Implementation complexity estimate + +### Security Issues + +For security vulnerabilities: +- Do NOT create public issues +- Email security@llamactl.dev +- Provide detailed description +- Allow time for fix before disclosure + +## Development Best Practices + +### API Design + +- Follow REST principles +- Use consistent naming conventions +- Provide comprehensive error messages +- Include proper HTTP status codes +- Document all endpoints + +### Error Handling + +```go +// Wrap errors with context +if err := instance.Start(); err != nil { + return fmt.Errorf("failed to start instance %s: %w", instance.Name, err) +} + +// Use structured logging +log.WithFields(log.Fields{ + "instance": instance.Name, + "error": err, +}).Error("Failed to start instance") +``` + +### Configuration + +- Use environment variables for deployment +- Provide sensible defaults +- Validate configuration on startup +- Support configuration file reloading + +### Performance + +- Profile code for bottlenecks +- Use efficient data structures +- Implement proper caching +- Monitor resource usage + +## Release Process + +### Version Management + +- Use semantic versioning (SemVer) +- Tag releases properly +- Maintain CHANGELOG.md +- Create release notes + +### Building Releases + +```bash +# Build all platforms +make build-all + +# Create release package +make package +``` + +## Getting Help + +### Communication Channels + +- **GitHub Issues**: Bug reports and feature requests +- **GitHub Discussions**: General questions and ideas +- **Code Review**: PR comments and feedback + +### Development Questions + +When asking for help: + +1. Check existing documentation +2. Search previous issues +3. Provide minimal reproduction case +4. Include relevant environment details + +## Recognition + +Contributors are recognized in: + +- CONTRIBUTORS.md file +- Release notes +- Documentation credits +- Annual contributor highlights + +Thank you for contributing to LlamaCtl! diff --git a/docs/getting-started/configuration.md b/docs/getting-started/configuration.md new file mode 100644 index 0000000..6c8ae7f --- /dev/null +++ b/docs/getting-started/configuration.md @@ -0,0 +1,154 @@ +# Configuration + +LlamaCtl can be configured through various methods to suit your needs. 
+ +## Configuration File + +Create a configuration file at `~/.llamactl/config.yaml`: + +```yaml +# Server configuration +server: + host: "0.0.0.0" + port: 8080 + cors_enabled: true + +# Authentication (optional) +auth: + enabled: false + # When enabled, configure your authentication method + # jwt_secret: "your-secret-key" + +# Default instance settings +defaults: + backend: "llamacpp" + timeout: 300 + log_level: "info" + +# Paths +paths: + models_dir: "/path/to/your/models" + logs_dir: "/var/log/llamactl" + data_dir: "/var/lib/llamactl" + +# Instance limits +limits: + max_instances: 10 + max_memory_per_instance: "8GB" +``` + +## Environment Variables + +You can also configure LlamaCtl using environment variables: + +```bash +# Server settings +export LLAMACTL_HOST=0.0.0.0 +export LLAMACTL_PORT=8080 + +# Paths +export LLAMACTL_MODELS_DIR=/path/to/models +export LLAMACTL_LOGS_DIR=/var/log/llamactl + +# Limits +export LLAMACTL_MAX_INSTANCES=5 +``` + +## Command Line Options + +View all available command line options: + +```bash +llamactl --help +``` + +Common options: + +```bash +# Specify config file +llamactl --config /path/to/config.yaml + +# Set log level +llamactl --log-level debug + +# Run on different port +llamactl --port 9090 +``` + +## Instance Configuration + +When creating instances, you can specify various options: + +### Basic Options + +- `name`: Unique identifier for the instance +- `model_path`: Path to the GGUF model file +- `port`: Port for the instance to listen on + +### Advanced Options + +- `threads`: Number of CPU threads to use +- `context_size`: Context window size +- `batch_size`: Batch size for processing +- `gpu_layers`: Number of layers to offload to GPU +- `memory_lock`: Lock model in memory +- `no_mmap`: Disable memory mapping + +### Example Instance Configuration + +```json +{ + "name": "production-model", + "model_path": "/models/llama-2-13b-chat.gguf", + "port": 8081, + "options": { + "threads": 8, + "context_size": 4096, + "batch_size": 512, + "gpu_layers": 35, + "memory_lock": true + } +} +``` + +## Security Configuration + +### Enable Authentication + +To enable authentication, update your config file: + +```yaml +auth: + enabled: true + jwt_secret: "your-very-secure-secret-key" + token_expiry: "24h" +``` + +### HTTPS Configuration + +For production deployments, configure HTTPS: + +```yaml +server: + tls: + enabled: true + cert_file: "/path/to/cert.pem" + key_file: "/path/to/key.pem" +``` + +## Logging Configuration + +Configure logging levels and outputs: + +```yaml +logging: + level: "info" # debug, info, warn, error + format: "json" # json or text + output: "/var/log/llamactl/app.log" +``` + +## Next Steps + +- Learn about [Managing Instances](../user-guide/managing-instances.md) +- Explore [Advanced Configuration](../advanced/monitoring.md) +- Set up [Monitoring](../advanced/monitoring.md) diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md new file mode 100644 index 0000000..7a71629 --- /dev/null +++ b/docs/getting-started/installation.md @@ -0,0 +1,55 @@ +# Installation + +This guide will walk you through installing LlamaCtl on your system. 
+
+## Prerequisites
+
+Before installing LlamaCtl, ensure you have:
+
+- Go 1.24 or later (needed only if you build from source)
+- Node.js 22 or later (needed only if you build the web UI from source)
+- Git
+- Sufficient disk space for your models
+
+## Installation Methods
+
+### Option 1: Download Binary (Recommended)
+
+Download the latest release from our [GitHub releases page](https://github.com/lordmathis/llamactl/releases):
+
+```bash
+# Download for Linux
+curl -L https://github.com/lordmathis/llamactl/releases/latest/download/llamactl-linux-amd64 -o llamactl
+
+# Make executable
+chmod +x llamactl
+
+# Move to PATH (optional)
+sudo mv llamactl /usr/local/bin/
+```
+
+### Option 2: Build from Source
+
+If you prefer to build from source:
+
+```bash
+# Clone the repository
+git clone https://github.com/lordmathis/llamactl.git
+cd llamactl
+
+# Build the application
+go build -o llamactl cmd/server/main.go
+```
+
+For detailed build instructions, see the [Building from Source](../development/building.md) guide.
+
+## Verification
+
+Verify your installation by checking the version:
+
+```bash
+llamactl --version
+```
+
+## Next Steps
+
+Now that LlamaCtl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running!
diff --git a/docs/getting-started/quick-start.md b/docs/getting-started/quick-start.md
new file mode 100644
index 0000000..2d77e2e
--- /dev/null
+++ b/docs/getting-started/quick-start.md
@@ -0,0 +1,86 @@
+# Quick Start
+
+This guide will help you get LlamaCtl up and running in just a few minutes.
+
+## Step 1: Start LlamaCtl
+
+Start the LlamaCtl server:
+
+```bash
+llamactl
+```
+
+By default, LlamaCtl will start on `http://localhost:8080`.
+
+## Step 2: Access the Web UI
+
+Open your web browser and navigate to:
+
+```
+http://localhost:8080
+```
+
+You should see the LlamaCtl web interface.
+
+## Step 3: Create Your First Instance
+
+1. Click the "Add Instance" button
+2. Fill in the instance configuration:
+   - **Name**: Give your instance a descriptive name
+   - **Model Path**: Path to your Llama.cpp model file
+   - **Port**: Port for the instance to run on
+   - **Additional Options**: Any extra Llama.cpp parameters
+
+3. Click "Create Instance"
+
+## Step 4: Start Your Instance
+
+Once created, you can:
+
+- **Start** the instance by clicking the start button
+- **Monitor** its status in real-time
+- **View logs** by clicking the logs button
+- **Stop** the instance when needed
+
+## Example Configuration
+
+Here's a basic example configuration for a Llama 2 model:
+
+```json
+{
+  "name": "llama2-7b",
+  "model_path": "/path/to/llama-2-7b-chat.gguf",
+  "port": 8081,
+  "options": {
+    "threads": 4,
+    "context_size": 2048
+  }
+}
+```
+
+## Using the API
+
+You can also manage instances via the REST API:
+
+```bash
+# List all instances
+curl http://localhost:8080/api/instances
+
+# Create a new instance
+curl -X POST http://localhost:8080/api/instances \
+  -H "Content-Type: application/json" \
+  -d '{
+    "name": "my-model",
+    "model_path": "/path/to/model.gguf",
+    "port": 8081
+  }'
+
+# Start an instance
+curl -X POST http://localhost:8080/api/instances/my-model/start
+```
+
+## Next Steps
+
+- Learn more about the [Web UI](../user-guide/web-ui.md)
+- Explore the [API Reference](../user-guide/api-reference.md)
+- Configure advanced settings in the [Configuration](configuration.md) guide
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..f1fd69f
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,41 @@
+# LlamaCtl Documentation
+
+Welcome to the LlamaCtl documentation! 
LlamaCtl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models.
+
+## What is LlamaCtl?
+
+LlamaCtl is designed to simplify the deployment and management of Llama.cpp instances. It provides:
+
+- **Instance Management**: Start, stop, and monitor multiple Llama.cpp instances
+- **Web UI**: User-friendly interface for managing your models
+- **REST API**: Programmatic access to all functionality
+- **Health Monitoring**: Real-time status and health checks
+- **Configuration Management**: Easy setup and configuration options
+
+## Key Features
+
+- 🚀 **Easy Setup**: Quick installation and configuration
+- 🌐 **Web Interface**: Intuitive web UI for model management
+- 🔧 **REST API**: Full API access for automation
+- 📊 **Monitoring**: Real-time health and status monitoring
+- 🔒 **Security**: Authentication and access control
+- 📱 **Responsive**: Works on desktop and mobile devices
+
+## Quick Links
+
+- [Installation Guide](getting-started/installation.md) - Get LlamaCtl up and running
+- [Quick Start](getting-started/quick-start.md) - Your first steps with LlamaCtl
+- [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface
+- [API Reference](user-guide/api-reference.md) - Complete API documentation
+
+## Getting Help
+
+If you need help or have questions:
+
+- Check the [Troubleshooting](advanced/troubleshooting.md) guide
+- Visit our [GitHub repository](https://github.com/lordmathis/llamactl)
+- Read the [Contributing guide](development/contributing.md) to help improve LlamaCtl
+
+---
+
+Ready to get started? Head over to the [Installation Guide](getting-started/installation.md)!
diff --git a/docs/user-guide/api-reference.md b/docs/user-guide/api-reference.md
new file mode 100644
index 0000000..813aa06
--- /dev/null
+++ b/docs/user-guide/api-reference.md
@@ -0,0 +1,470 @@
+# API Reference
+
+Complete reference for the LlamaCtl REST API.
+
+## Base URL
+
+All API endpoints are relative to the base URL:
+
+```
+http://localhost:8080/api
+```
+
+## Authentication
+
+If authentication is enabled, include the JWT token in the Authorization header:
+
+```bash
+curl -H "Authorization: Bearer <token>" \
+  http://localhost:8080/api/instances
+```
+
+## Instances
+
+### List All Instances
+
+Get a list of all instances.
+
+```http
+GET /api/instances
+```
+
+**Response:**
+```json
+{
+  "instances": [
+    {
+      "name": "llama2-7b",
+      "status": "running",
+      "model_path": "/models/llama-2-7b.gguf",
+      "port": 8081,
+      "created_at": "2024-01-15T10:30:00Z",
+      "updated_at": "2024-01-15T12:45:00Z"
+    }
+  ]
+}
+```
+
+### Get Instance Details
+
+Get detailed information about a specific instance.
+
+```http
+GET /api/instances/{name}
+```
+
+**Response:**
+```json
+{
+  "name": "llama2-7b",
+  "status": "running",
+  "model_path": "/models/llama-2-7b.gguf",
+  "port": 8081,
+  "pid": 12345,
+  "options": {
+    "threads": 4,
+    "context_size": 2048,
+    "gpu_layers": 0
+  },
+  "stats": {
+    "memory_usage": 4294967296,
+    "cpu_usage": 25.5,
+    "uptime": 3600
+  },
+  "created_at": "2024-01-15T10:30:00Z",
+  "updated_at": "2024-01-15T12:45:00Z"
+}
+```
+
+### Create Instance
+
+Create a new instance. 
+ +```http +POST /api/instances +``` + +**Request Body:** +```json +{ + "name": "my-instance", + "model_path": "/path/to/model.gguf", + "port": 8081, + "options": { + "threads": 4, + "context_size": 2048, + "gpu_layers": 0 + } +} +``` + +**Response:** +```json +{ + "message": "Instance created successfully", + "instance": { + "name": "my-instance", + "status": "stopped", + "model_path": "/path/to/model.gguf", + "port": 8081, + "created_at": "2024-01-15T14:30:00Z" + } +} +``` + +### Update Instance + +Update an existing instance configuration. + +```http +PUT /api/instances/{name} +``` + +**Request Body:** +```json +{ + "options": { + "threads": 8, + "context_size": 4096 + } +} +``` + +### Delete Instance + +Delete an instance (must be stopped first). + +```http +DELETE /api/instances/{name} +``` + +**Response:** +```json +{ + "message": "Instance deleted successfully" +} +``` + +## Instance Operations + +### Start Instance + +Start a stopped instance. + +```http +POST /api/instances/{name}/start +``` + +**Response:** +```json +{ + "message": "Instance start initiated", + "status": "starting" +} +``` + +### Stop Instance + +Stop a running instance. + +```http +POST /api/instances/{name}/stop +``` + +**Request Body (Optional):** +```json +{ + "force": false, + "timeout": 30 +} +``` + +**Response:** +```json +{ + "message": "Instance stop initiated", + "status": "stopping" +} +``` + +### Restart Instance + +Restart an instance (stop then start). + +```http +POST /api/instances/{name}/restart +``` + +### Get Instance Health + +Check instance health status. + +```http +GET /api/instances/{name}/health +``` + +**Response:** +```json +{ + "status": "healthy", + "checks": { + "process": "running", + "port": "open", + "response": "ok" + }, + "last_check": "2024-01-15T14:30:00Z" +} +``` + +### Get Instance Logs + +Retrieve instance logs. + +```http +GET /api/instances/{name}/logs +``` + +**Query Parameters:** +- `lines`: Number of lines to return (default: 100) +- `follow`: Stream logs (boolean) +- `level`: Filter by log level (debug, info, warn, error) + +**Response:** +```json +{ + "logs": [ + { + "timestamp": "2024-01-15T14:30:00Z", + "level": "info", + "message": "Model loaded successfully" + } + ] +} +``` + +## Batch Operations + +### Start All Instances + +Start all stopped instances. + +```http +POST /api/instances/start-all +``` + +### Stop All Instances + +Stop all running instances. + +```http +POST /api/instances/stop-all +``` + +## System Information + +### Get System Status + +Get overall system status and metrics. + +```http +GET /api/system/status +``` + +**Response:** +```json +{ + "version": "1.0.0", + "uptime": 86400, + "instances": { + "total": 5, + "running": 3, + "stopped": 2 + }, + "resources": { + "cpu_usage": 45.2, + "memory_usage": 8589934592, + "memory_total": 17179869184, + "disk_usage": 75.5 + } +} +``` + +### Get System Information + +Get detailed system information. + +```http +GET /api/system/info +``` + +**Response:** +```json +{ + "hostname": "server-01", + "os": "linux", + "arch": "amd64", + "cpu_count": 8, + "memory_total": 17179869184, + "version": "1.0.0", + "build_time": "2024-01-15T10:00:00Z" +} +``` + +## Configuration + +### Get Configuration + +Get current LlamaCtl configuration. + +```http +GET /api/config +``` + +### Update Configuration + +Update LlamaCtl configuration (requires restart). + +```http +PUT /api/config +``` + +## Authentication + +### Login + +Authenticate and receive a JWT token. 
+ +```http +POST /api/auth/login +``` + +**Request Body:** +```json +{ + "username": "admin", + "password": "password" +} +``` + +**Response:** +```json +{ + "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...", + "expires_at": "2024-01-16T14:30:00Z" +} +``` + +### Refresh Token + +Refresh an existing JWT token. + +```http +POST /api/auth/refresh +``` + +## Error Responses + +All endpoints may return error responses in the following format: + +```json +{ + "error": "Error message", + "code": "ERROR_CODE", + "details": "Additional error details" +} +``` + +### Common HTTP Status Codes + +- `200`: Success +- `201`: Created +- `400`: Bad Request +- `401`: Unauthorized +- `403`: Forbidden +- `404`: Not Found +- `409`: Conflict (e.g., instance already exists) +- `500`: Internal Server Error + +## WebSocket API + +### Real-time Updates + +Connect to WebSocket for real-time updates: + +```javascript +const ws = new WebSocket('ws://localhost:8080/api/ws'); + +ws.onmessage = function(event) { + const data = JSON.parse(event.data); + console.log('Update:', data); +}; +``` + +**Message Types:** +- `instance_status_changed`: Instance status updates +- `instance_stats_updated`: Resource usage updates +- `system_alert`: System-level alerts + +## Rate Limiting + +API requests are rate limited to: +- **100 requests per minute** for regular endpoints +- **10 requests per minute** for resource-intensive operations + +Rate limit headers are included in responses: +- `X-RateLimit-Limit`: Request limit +- `X-RateLimit-Remaining`: Remaining requests +- `X-RateLimit-Reset`: Reset time (Unix timestamp) + +## SDKs and Libraries + +### Go Client + +```go +import "github.com/lordmathis/llamactl-go-client" + +client := llamactl.NewClient("http://localhost:8080") +instances, err := client.ListInstances() +``` + +### Python Client + +```python +from llamactl import Client + +client = Client("http://localhost:8080") +instances = client.list_instances() +``` + +## Examples + +### Complete Instance Lifecycle + +```bash +# Create instance +curl -X POST http://localhost:8080/api/instances \ + -H "Content-Type: application/json" \ + -d '{ + "name": "example", + "model_path": "/models/example.gguf", + "port": 8081 + }' + +# Start instance +curl -X POST http://localhost:8080/api/instances/example/start + +# Check status +curl http://localhost:8080/api/instances/example + +# Stop instance +curl -X POST http://localhost:8080/api/instances/example/stop + +# Delete instance +curl -X DELETE http://localhost:8080/api/instances/example +``` + +## Next Steps + +- Learn about [Managing Instances](managing-instances.md) in detail +- Explore [Advanced Configuration](../advanced/backends.md) +- Set up [Monitoring](../advanced/monitoring.md) for production use diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md new file mode 100644 index 0000000..fcb3455 --- /dev/null +++ b/docs/user-guide/managing-instances.md @@ -0,0 +1,171 @@ +# Managing Instances + +Learn how to effectively manage your Llama.cpp instances with LlamaCtl. + +## Instance Lifecycle + +### Creating Instances + +Instances can be created through the Web UI or API: + +#### Via Web UI +1. Click "Add Instance" button +2. Fill in the configuration form +3. 
Click "Create" + +#### Via API +```bash +curl -X POST http://localhost:8080/api/instances \ + -H "Content-Type: application/json" \ + -d '{ + "name": "my-instance", + "model_path": "/path/to/model.gguf", + "port": 8081 + }' +``` + +### Starting and Stopping + +#### Start an Instance +```bash +# Via API +curl -X POST http://localhost:8080/api/instances/{name}/start + +# The instance will begin loading the model +``` + +#### Stop an Instance +```bash +# Via API +curl -X POST http://localhost:8080/api/instances/{name}/stop + +# Graceful shutdown with configurable timeout +``` + +### Monitoring Status + +Check instance status in real-time: + +```bash +# Get instance details +curl http://localhost:8080/api/instances/{name} + +# Get health status +curl http://localhost:8080/api/instances/{name}/health +``` + +## Instance States + +Instances can be in one of several states: + +- **Stopped**: Instance is not running +- **Starting**: Instance is initializing and loading the model +- **Running**: Instance is active and ready to serve requests +- **Stopping**: Instance is shutting down gracefully +- **Error**: Instance encountered an error + +## Configuration Management + +### Updating Instance Configuration + +Modify instance settings: + +```bash +curl -X PUT http://localhost:8080/api/instances/{name} \ + -H "Content-Type: application/json" \ + -d '{ + "options": { + "threads": 8, + "context_size": 4096 + } + }' +``` + +!!! note + Configuration changes require restarting the instance to take effect. + +### Viewing Configuration + +```bash +# Get current configuration +curl http://localhost:8080/api/instances/{name}/config +``` + +## Resource Management + +### Memory Usage + +Monitor memory consumption: + +```bash +# Get resource usage +curl http://localhost:8080/api/instances/{name}/stats +``` + +### CPU and GPU Usage + +Track performance metrics: + +- CPU thread utilization +- GPU memory usage (if applicable) +- Request processing times + +## Troubleshooting Common Issues + +### Instance Won't Start + +1. **Check model path**: Ensure the model file exists and is readable +2. **Port conflicts**: Verify the port isn't already in use +3. **Resource limits**: Check available memory and CPU +4. **Permissions**: Ensure proper file system permissions + +### Performance Issues + +1. **Adjust thread count**: Match to your CPU cores +2. **Optimize context size**: Balance memory usage and capability +3. **GPU offloading**: Use `gpu_layers` for GPU acceleration +4. **Batch size tuning**: Optimize for your workload + +### Memory Problems + +1. **Reduce context size**: Lower memory requirements +2. **Disable memory mapping**: Use `no_mmap` option +3. **Enable memory locking**: Use `memory_lock` for performance +4. **Monitor system resources**: Check available RAM + +## Best Practices + +### Production Deployments + +1. **Resource allocation**: Plan memory and CPU requirements +2. **Health monitoring**: Set up regular health checks +3. **Graceful shutdowns**: Use proper stop procedures +4. **Backup configurations**: Save instance configurations +5. **Log management**: Configure appropriate logging levels + +### Development Environments + +1. **Resource sharing**: Use smaller models for development +2. **Quick iterations**: Optimize for fast startup times +3. 
**Debug logging**: Enable detailed logging for troubleshooting + +## Batch Operations + +### Managing Multiple Instances + +```bash +# Start all instances +curl -X POST http://localhost:8080/api/instances/start-all + +# Stop all instances +curl -X POST http://localhost:8080/api/instances/stop-all + +# Get status of all instances +curl http://localhost:8080/api/instances +``` + +## Next Steps + +- Learn about the [Web UI](web-ui.md) interface +- Explore the complete [API Reference](api-reference.md) +- Set up [Monitoring](../advanced/monitoring.md) for production use diff --git a/docs/user-guide/web-ui.md b/docs/user-guide/web-ui.md new file mode 100644 index 0000000..5207556 --- /dev/null +++ b/docs/user-guide/web-ui.md @@ -0,0 +1,216 @@ +# Web UI Guide + +The LlamaCtl Web UI provides an intuitive interface for managing your Llama.cpp instances. + +## Overview + +The web interface is accessible at `http://localhost:8080` (or your configured host/port) and provides: + +- Instance management dashboard +- Real-time status monitoring +- Configuration management +- Log viewing +- System information + +## Dashboard + +### Instance Cards + +Each instance is displayed as a card showing: + +- **Instance name** and status indicator +- **Model information** (name, size) +- **Current state** (stopped, starting, running, error) +- **Resource usage** (memory, CPU) +- **Action buttons** (start, stop, configure, logs) + +### Status Indicators + +- 🟢 **Green**: Instance is running and healthy +- 🟡 **Yellow**: Instance is starting or stopping +- 🔴 **Red**: Instance has encountered an error +- ⚪ **Gray**: Instance is stopped + +## Creating Instances + +### Add Instance Dialog + +1. Click the **"Add Instance"** button +2. Fill in the required fields: + - **Name**: Unique identifier for your instance + - **Model Path**: Full path to your GGUF model file + - **Port**: Port number for the instance + +3. Configure optional settings: + - **Threads**: Number of CPU threads + - **Context Size**: Context window size + - **GPU Layers**: Layers to offload to GPU + - **Additional Options**: Advanced Llama.cpp parameters + +4. Click **"Create"** to save the instance + +### Model Path Helper + +Use the file browser to select model files: + +- Navigate to your models directory +- Select the `.gguf` file +- Path is automatically filled in the form + +## Managing Instances + +### Starting Instances + +1. Click the **"Start"** button on an instance card +2. Watch the status change to "Starting" +3. Monitor progress in the logs +4. Instance becomes "Running" when ready + +### Stopping Instances + +1. Click the **"Stop"** button +2. Instance gracefully shuts down +3. Status changes to "Stopped" + +### Viewing Logs + +1. Click the **"Logs"** button on any instance +2. Real-time log viewer opens +3. Filter by log level (Debug, Info, Warning, Error) +4. Search through log entries +5. Download logs for offline analysis + +## Configuration Management + +### Editing Instance Settings + +1. Click the **"Configure"** button +2. Modify settings in the configuration dialog +3. Changes require instance restart to take effect +4. 
Click **"Save"** to apply changes + +### Advanced Options + +Access advanced Llama.cpp options: + +```yaml +# Example advanced configuration +options: + rope_freq_base: 10000 + rope_freq_scale: 1.0 + yarn_ext_factor: -1.0 + yarn_attn_factor: 1.0 + yarn_beta_fast: 32.0 + yarn_beta_slow: 1.0 +``` + +## System Information + +### Health Dashboard + +Monitor overall system health: + +- **System Resources**: CPU, memory, disk usage +- **Instance Summary**: Running/stopped instance counts +- **Performance Metrics**: Request rates, response times + +### Resource Usage + +Track resource consumption: + +- Per-instance memory usage +- CPU utilization +- GPU memory (if applicable) +- Network I/O + +## User Interface Features + +### Theme Support + +Switch between light and dark themes: + +1. Click the theme toggle button +2. Setting is remembered across sessions + +### Responsive Design + +The UI adapts to different screen sizes: + +- **Desktop**: Full-featured dashboard +- **Tablet**: Condensed layout +- **Mobile**: Stack-based navigation + +### Keyboard Shortcuts + +- `Ctrl+N`: Create new instance +- `Ctrl+R`: Refresh dashboard +- `Ctrl+L`: Open logs for selected instance +- `Esc`: Close dialogs + +## Authentication + +### Login + +If authentication is enabled: + +1. Navigate to the web UI +2. Enter your credentials +3. JWT token is stored for the session +4. Automatic logout on token expiry + +### Session Management + +- Sessions persist across browser restarts +- Logout clears authentication tokens +- Configurable session timeout + +## Troubleshooting + +### Common UI Issues + +**Page won't load:** +- Check if LlamaCtl server is running +- Verify the correct URL and port +- Check browser console for errors + +**Instance won't start from UI:** +- Verify model path is correct +- Check for port conflicts +- Review instance logs for errors + +**Real-time updates not working:** +- Check WebSocket connection +- Verify firewall settings +- Try refreshing the page + +### Browser Compatibility + +Supported browsers: +- Chrome/Chromium 90+ +- Firefox 88+ +- Safari 14+ +- Edge 90+ + +## Mobile Access + +### Responsive Features + +On mobile devices: + +- Touch-friendly interface +- Swipe gestures for navigation +- Optimized button sizes +- Condensed information display + +### Limitations + +Some features may be limited on mobile: +- Log viewing (use horizontal scrolling) +- Complex configuration forms +- File browser functionality + +## Next Steps + +- Learn about [API Reference](api-reference.md) for programmatic access +- Set up [Monitoring](../advanced/monitoring.md) for production use +- Explore [Advanced Configuration](../advanced/backends.md) options diff --git a/mkdocs.yml b/mkdocs.yml new file mode 100644 index 0000000..f23c70e --- /dev/null +++ b/mkdocs.yml @@ -0,0 +1,75 @@ +site_name: LlamaCtl Documentation +site_description: User documentation for LlamaCtl - A management tool for Llama.cpp instances +site_author: LlamaCtl Team +site_url: https://llamactl.org + +repo_name: lordmathis/llamactl +repo_url: https://github.com/lordmathis/llamactl + +theme: + name: material + palette: + # Palette toggle for light mode + - scheme: default + primary: indigo + accent: indigo + toggle: + icon: material/brightness-7 + name: Switch to dark mode + # Palette toggle for dark mode + - scheme: slate + primary: indigo + accent: indigo + toggle: + icon: material/brightness-4 + name: Switch to light mode + features: + - navigation.tabs + - navigation.sections + - navigation.expand + - navigation.top + - 
search.highlight + - search.share + - content.code.copy + +markdown_extensions: + - pymdownx.highlight: + anchor_linenums: true + - pymdownx.inlinehilite + - pymdownx.snippets + - pymdownx.superfences + - admonition + - pymdownx.details + - pymdownx.tabbed: + alternate_style: true + - attr_list + - md_in_html + - toc: + permalink: true + +nav: + - Home: index.md + - Getting Started: + - Installation: getting-started/installation.md + - Quick Start: getting-started/quick-start.md + - Configuration: getting-started/configuration.md + - User Guide: + - Managing Instances: user-guide/managing-instances.md + - Web UI: user-guide/web-ui.md + - API Reference: user-guide/api-reference.md + - Advanced: + - Backends: advanced/backends.md + - Monitoring: advanced/monitoring.md + - Troubleshooting: advanced/troubleshooting.md + - Development: + - Contributing: development/contributing.md + - Building from Source: development/building.md + +plugins: + - search + - git-revision-date-localized + +extra: + social: + - icon: fontawesome/brands/github + link: https://github.com/lordmathis/llamactl From 0b264c8015ed2d4520b530812f4df84b65e683ee Mon Sep 17 00:00:00 2001 From: LordMathis Date: Sun, 31 Aug 2025 14:28:17 +0200 Subject: [PATCH 02/13] Fix typos and consistent naming for Llamactl across documentation --- docs/advanced/backends.md | 8 ++++---- docs/advanced/monitoring.md | 18 +++++++++--------- docs/advanced/troubleshooting.md | 8 ++++---- docs/development/building.md | 6 +++--- docs/development/contributing.md | 4 ++-- docs/getting-started/configuration.md | 4 ++-- docs/getting-started/installation.md | 6 +++--- docs/getting-started/quick-start.md | 10 +++++----- docs/index.md | 14 +++++++------- docs/user-guide/api-reference.md | 6 +++--- docs/user-guide/managing-instances.md | 2 +- docs/user-guide/web-ui.md | 4 ++-- 12 files changed, 45 insertions(+), 45 deletions(-) diff --git a/docs/advanced/backends.md b/docs/advanced/backends.md index e2542ea..0491bc4 100644 --- a/docs/advanced/backends.md +++ b/docs/advanced/backends.md @@ -1,10 +1,10 @@ # Backends -LlamaCtl supports multiple backends for running large language models. This guide covers the available backends and their configuration. +Llamactl supports multiple backends for running large language models. This guide covers the available backends and their configuration. ## Llama.cpp Backend -The primary backend for LlamaCtl, providing robust support for GGUF models. +The primary backend for Llamactl, providing robust support for GGUF models. ### Features @@ -107,7 +107,7 @@ options: ## Future Backends -LlamaCtl is designed to support multiple backends. Planned additions: +Llamactl is designed to support multiple backends. Planned additions: ### vLLM Backend @@ -137,7 +137,7 @@ Integration with Ollama for easy model management: ### Automatic Detection -LlamaCtl can automatically detect the best backend: +Llamactl can automatically detect the best backend: ```yaml backends: diff --git a/docs/advanced/monitoring.md b/docs/advanced/monitoring.md index 7c3c76e..71051e6 100644 --- a/docs/advanced/monitoring.md +++ b/docs/advanced/monitoring.md @@ -1,10 +1,10 @@ # Monitoring -Comprehensive monitoring setup for LlamaCtl in production environments. +Comprehensive monitoring setup for Llamactl in production environments. 
## Overview -Effective monitoring of LlamaCtl involves tracking: +Effective monitoring of Llamactl involves tracking: - Instance health and performance - System resource usage @@ -15,7 +15,7 @@ Effective monitoring of LlamaCtl involves tracking: ### Health Checks -LlamaCtl provides built-in health monitoring: +Llamactl provides built-in health monitoring: ```bash # Check overall system health @@ -45,7 +45,7 @@ curl http://localhost:8080/metrics ### Configuration -Add LlamaCtl as a Prometheus target: +Add Llamactl as a Prometheus target: ```yaml # prometheus.yml @@ -59,7 +59,7 @@ scrape_configs: ### Custom Metrics -Enable additional metrics in LlamaCtl: +Enable additional metrics in Llamactl: ```yaml # config.yaml @@ -76,7 +76,7 @@ monitoring: ## Grafana Dashboards -### LlamaCtl Dashboard +### Llamactl Dashboard Import the official Grafana dashboard: @@ -135,7 +135,7 @@ groups: labels: severity: critical annotations: - summary: "LlamaCtl instance {{ $labels.instance_name }} is down" + summary: "Llamactl instance {{ $labels.instance_name }} is down" - alert: HighMemoryUsage expr: llamactl_instance_memory_percent > 90 @@ -170,7 +170,7 @@ receivers: slack_configs: - api_url: 'https://hooks.slack.com/services/...' channel: '#alerts' - title: 'LlamaCtl Alert' + title: 'Llamactl Alert' text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}' ``` @@ -373,7 +373,7 @@ curl http://localhost:8080/metrics | grep rate_limit **Metrics not appearing:** 1. Check Prometheus configuration 2. Verify network connectivity -3. Review LlamaCtl logs for errors +3. Review Llamactl logs for errors **High memory usage:** 1. Check for memory leaks in profiles diff --git a/docs/advanced/troubleshooting.md b/docs/advanced/troubleshooting.md index 58b85a7..1d070d3 100644 --- a/docs/advanced/troubleshooting.md +++ b/docs/advanced/troubleshooting.md @@ -1,6 +1,6 @@ # Troubleshooting -Common issues and solutions for LlamaCtl deployment and operation. +Common issues and solutions for Llamactl deployment and operation. ## Installation Issues @@ -29,7 +29,7 @@ Common issues and solutions for LlamaCtl deployment and operation. ### Permission Denied -**Problem:** Permission errors when starting LlamaCtl +**Problem:** Permission errors when starting Llamactl **Solutions:** 1. Check file permissions: @@ -269,7 +269,7 @@ Common issues and solutions for LlamaCtl deployment and operation. ### High CPU Usage -**Problem:** LlamaCtl consuming excessive CPU +**Problem:** Llamactl consuming excessive CPU **Diagnostic Steps:** 1. Identify CPU-intensive processes: @@ -302,7 +302,7 @@ Common issues and solutions for LlamaCtl deployment and operation. ### Connection Refused -**Problem:** Cannot connect to LlamaCtl web interface +**Problem:** Cannot connect to Llamactl web interface **Diagnostic Steps:** 1. Check if service is running: diff --git a/docs/development/building.md b/docs/development/building.md index 6215854..a102915 100644 --- a/docs/development/building.md +++ b/docs/development/building.md @@ -1,6 +1,6 @@ # Building from Source -This guide covers building LlamaCtl from source code for development and production deployment. +This guide covers building Llamactl from source code for development and production deployment. ## Prerequisites @@ -261,7 +261,7 @@ LDFLAGS := -s -w -X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME) .PHONY: build clean test install build: - @echo "Building LlamaCtl..." + @echo "Building Llamactl..." 
@cd webui && npm run build @go build -ldflags="$(LDFLAGS)" -o llamactl cmd/server/main.go @@ -423,7 +423,7 @@ Create a systemd service: ```ini # /etc/systemd/system/llamactl.service [Unit] -Description=LlamaCtl Server +Description=Llamactl Server After=network.target [Service] diff --git a/docs/development/contributing.md b/docs/development/contributing.md index c2c146f..3b27d90 100644 --- a/docs/development/contributing.md +++ b/docs/development/contributing.md @@ -1,6 +1,6 @@ # Contributing -Thank you for your interest in contributing to LlamaCtl! This guide will help you get started with development and contribution. +Thank you for your interest in contributing to Llamactl! This guide will help you get started with development and contribution. ## Development Setup @@ -370,4 +370,4 @@ Contributors are recognized in: - Documentation credits - Annual contributor highlights -Thank you for contributing to LlamaCtl! +Thank you for contributing to Llamactl! diff --git a/docs/getting-started/configuration.md b/docs/getting-started/configuration.md index 6c8ae7f..e9ba2d3 100644 --- a/docs/getting-started/configuration.md +++ b/docs/getting-started/configuration.md @@ -1,6 +1,6 @@ # Configuration -LlamaCtl can be configured through various methods to suit your needs. +Llamactl can be configured through various methods to suit your needs. ## Configuration File @@ -39,7 +39,7 @@ limits: ## Environment Variables -You can also configure LlamaCtl using environment variables: +You can also configure Llamactl using environment variables: ```bash # Server settings diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md index 7a71629..9be575e 100644 --- a/docs/getting-started/installation.md +++ b/docs/getting-started/installation.md @@ -1,10 +1,10 @@ # Installation -This guide will walk you through installing LlamaCtl on your system. +This guide will walk you through installing Llamactl on your system. ## Prerequisites -Before installing LlamaCtl, ensure you have: +Before installing Llamactl, ensure you have: - Go 1.19 or later - Git @@ -52,4 +52,4 @@ llamactl --version ## Next Steps -Now that LlamaCtl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running! +Now that Llamactl is installed, continue to the [Quick Start](quick-start.md) guide to get your first instance running! diff --git a/docs/getting-started/quick-start.md b/docs/getting-started/quick-start.md index 2d77e2e..a882b10 100644 --- a/docs/getting-started/quick-start.md +++ b/docs/getting-started/quick-start.md @@ -1,16 +1,16 @@ # Quick Start -This guide will help you get LlamaCtl up and running in just a few minutes. +This guide will help you get Llamactl up and running in just a few minutes. -## Step 1: Start LlamaCtl +## Step 1: Start Llamactl -Start the LlamaCtl server: +Start the Llamactl server: ```bash llamactl ``` -By default, LlamaCtl will start on `http://localhost:8080`. +By default, Llamactl will start on `http://localhost:8080`. ## Step 2: Access the Web UI @@ -20,7 +20,7 @@ Open your web browser and navigate to: http://localhost:8080 ``` -You should see the LlamaCtl web interface. +You should see the Llamactl web interface. ## Step 3: Create Your First Instance diff --git a/docs/index.md b/docs/index.md index f1fd69f..b45cae2 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,10 +1,10 @@ -# LlamaCtl Documentation +# Llamactl Documentation -Welcome to the LlamaCtl documentation! 
LlamaCtl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models. +Welcome to the Llamactl documentation! Llamactl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models. -## What is LlamaCtl? +## What is Llamactl? -LlamaCtl is designed to simplify the deployment and management of Llama.cpp instances. It provides: +Llamactl is designed to simplify the deployment and management of Llama.cpp instances. It provides: - **Instance Management**: Start, stop, and monitor multiple Llama.cpp instances - **Web UI**: User-friendly interface for managing your models @@ -23,8 +23,8 @@ LlamaCtl is designed to simplify the deployment and management of Llama.cpp inst ## Quick Links -- [Installation Guide](getting-started/installation.md) - Get LlamaCtl up and running -- [Quick Start](getting-started/quick-start.md) - Your first steps with LlamaCtl +- [Installation Guide](getting-started/installation.md) - Get Llamactl up and running +- [Quick Start](getting-started/quick-start.md) - Your first steps with Llamactl - [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface - [API Reference](user-guide/api-reference.md) - Complete API documentation @@ -34,7 +34,7 @@ If you need help or have questions: - Check the [Troubleshooting](advanced/troubleshooting.md) guide - Visit our [GitHub repository](https://github.com/lordmathis/llamactl) -- Read the [Contributing guide](development/contributing.md) to help improve LlamaCtl +- Read the [Contributing guide](development/contributing.md) to help improve Llamactl --- diff --git a/docs/user-guide/api-reference.md b/docs/user-guide/api-reference.md index 813aa06..56c274d 100644 --- a/docs/user-guide/api-reference.md +++ b/docs/user-guide/api-reference.md @@ -1,6 +1,6 @@ # API Reference -Complete reference for the LlamaCtl REST API. +Complete reference for the Llamactl REST API. ## Base URL @@ -314,7 +314,7 @@ GET /api/system/info ### Get Configuration -Get current LlamaCtl configuration. +Get current Llamactl configuration. ```http GET /api/config @@ -322,7 +322,7 @@ GET /api/config ### Update Configuration -Update LlamaCtl configuration (requires restart). +Update Llamactl configuration (requires restart). ```http PUT /api/config diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md index fcb3455..fa1102e 100644 --- a/docs/user-guide/managing-instances.md +++ b/docs/user-guide/managing-instances.md @@ -1,6 +1,6 @@ # Managing Instances -Learn how to effectively manage your Llama.cpp instances with LlamaCtl. +Learn how to effectively manage your Llama.cpp instances with Llamactl. ## Instance Lifecycle diff --git a/docs/user-guide/web-ui.md b/docs/user-guide/web-ui.md index 5207556..9b1ba29 100644 --- a/docs/user-guide/web-ui.md +++ b/docs/user-guide/web-ui.md @@ -1,6 +1,6 @@ # Web UI Guide -The LlamaCtl Web UI provides an intuitive interface for managing your Llama.cpp instances. +The Llamactl Web UI provides an intuitive interface for managing your Llama.cpp instances. 
 ## Overview
 
@@ -169,7 +169,7 @@ If authentication is enabled:
 ### Common UI Issues
 
 **Page won't load:**
-- Check if LlamaCtl server is running
+- Check if Llamactl server is running
 - Verify the correct URL and port
 - Check browser console for errors
 
From b51974bbf72e1cca64ceb93a5c157fe0b465c7f8 Mon Sep 17 00:00:00 2001
From: LordMathis
Date: Sun, 31 Aug 2025 15:41:29 +0200
Subject: [PATCH 03/13] Improve getting started section

---
 README.md                             | 103 +-----
 docs/development/building.md          | 464 --------------------------
 docs/development/contributing.md      | 373 ---------------------
 docs/getting-started/configuration.md | 254 +++++++-------
 docs/getting-started/installation.md  |  40 ++-
 docs/getting-started/quick-start.md   |  61 +++-
 docs/index.md                         |   9 +-
 mkdocs.yml                            |   9 +-
 8 files changed, 223 insertions(+), 1090 deletions(-)
 delete mode 100644 docs/development/building.md
 delete mode 100644 docs/development/contributing.md

diff --git a/README.md b/README.md
index d9edfd5..3eed452 100644
--- a/README.md
+++ b/README.md
@@ -123,7 +123,6 @@ instances:
   on_demand_start_timeout: 120 # Default on-demand start timeout in seconds
   timeout_check_interval: 5 # Idle instance timeout check in minutes
 
-
 auth:
   require_inference_auth: true # Require auth for inference endpoints
   inference_keys: [] # Keys for inference endpoints
   require_management_auth: true # Require auth for management endpoints
   management_keys: [] # Keys for management endpoints
 ```
 
-<details><summary>
Full Configuration Guide - -llamactl can be configured via configuration files or environment variables. Configuration is loaded in the following order of precedence: - -``` -Defaults < Configuration file < Environment variables -``` - -### Configuration Files - -#### Configuration File Locations - -Configuration files are searched in the following locations (in order of precedence): - -**Linux/macOS:** -- `./llamactl.yaml` or `./config.yaml` (current directory) -- `$HOME/.config/llamactl/config.yaml` -- `/etc/llamactl/config.yaml` - -**Windows:** -- `./llamactl.yaml` or `./config.yaml` (current directory) -- `%APPDATA%\llamactl\config.yaml` -- `%USERPROFILE%\llamactl\config.yaml` -- `%PROGRAMDATA%\llamactl\config.yaml` - -You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable. - -### Configuration Options - -#### Server Configuration - -```yaml -server: - host: "0.0.0.0" # Server host to bind to (default: "0.0.0.0") - port: 8080 # Server port to bind to (default: 8080) - allowed_origins: ["*"] # CORS allowed origins (default: ["*"]) - enable_swagger: false # Enable Swagger UI (default: false) -``` - -**Environment Variables:** -- `LLAMACTL_HOST` - Server host -- `LLAMACTL_PORT` - Server port -- `LLAMACTL_ALLOWED_ORIGINS` - Comma-separated CORS origins -- `LLAMACTL_ENABLE_SWAGGER` - Enable Swagger UI (true/false) - -#### Instance Configuration - -```yaml -instances: - port_range: [8000, 9000] # Port range for instances (default: [8000, 9000]) - data_dir: "~/.local/share/llamactl" # Directory for all llamactl data (default varies by OS) - configs_dir: "~/.local/share/llamactl/instances" # Directory for instance configs (default: data_dir/instances) - logs_dir: "~/.local/share/llamactl/logs" # Directory for instance logs (default: data_dir/logs) - auto_create_dirs: true # Automatically create data/config/logs directories (default: true) - max_instances: -1 # Maximum instances (-1 = unlimited) - max_running_instances: -1 # Maximum running instances (-1 = unlimited) - enable_lru_eviction: true # Enable LRU eviction for idle instances - llama_executable: "llama-server" # Path to llama-server executable - default_auto_restart: true # Default auto-restart setting - default_max_restarts: 3 # Default maximum restart attempts - default_restart_delay: 5 # Default restart delay in seconds - default_on_demand_start: true # Default on-demand start setting - on_demand_start_timeout: 120 # Default on-demand start timeout in seconds - timeout_check_interval: 5 # Default instance timeout check interval in minutes -``` - -**Environment Variables:** -- `LLAMACTL_INSTANCE_PORT_RANGE` - Port range (format: "8000-9000" or "8000,9000") -- `LLAMACTL_DATA_DIRECTORY` - Data directory path -- `LLAMACTL_INSTANCES_DIR` - Instance configs directory path -- `LLAMACTL_LOGS_DIR` - Log directory path -- `LLAMACTL_AUTO_CREATE_DATA_DIR` - Auto-create data/config/logs directories (true/false) -- `LLAMACTL_MAX_INSTANCES` - Maximum number of instances -- `LLAMACTL_MAX_RUNNING_INSTANCES` - Maximum number of running instances -- `LLAMACTL_ENABLE_LRU_EVICTION` - Enable LRU eviction for idle instances -- `LLAMACTL_LLAMA_EXECUTABLE` - Path to llama-server executable -- `LLAMACTL_DEFAULT_AUTO_RESTART` - Default auto-restart setting (true/false) -- `LLAMACTL_DEFAULT_MAX_RESTARTS` - Default maximum restarts -- `LLAMACTL_DEFAULT_RESTART_DELAY` - Default restart delay in seconds -- `LLAMACTL_DEFAULT_ON_DEMAND_START` - Default on-demand start setting (true/false) -- `LLAMACTL_ON_DEMAND_START_TIMEOUT` - 
Default on-demand start timeout in seconds -- `LLAMACTL_TIMEOUT_CHECK_INTERVAL` - Default instance timeout check interval in minutes - - -#### Authentication Configuration - -```yaml -auth: - require_inference_auth: true # Require API key for OpenAI endpoints (default: true) - inference_keys: [] # List of valid inference API keys - require_management_auth: true # Require API key for management endpoints (default: true) - management_keys: [] # List of valid management API keys -``` - -**Environment Variables:** -- `LLAMACTL_REQUIRE_INFERENCE_AUTH` - Require auth for OpenAI endpoints (true/false) -- `LLAMACTL_INFERENCE_KEYS` - Comma-separated inference API keys -- `LLAMACTL_REQUIRE_MANAGEMENT_AUTH` - Require auth for management endpoints (true/false) -- `LLAMACTL_MANAGEMENT_KEYS` - Comma-separated management API keys - -
+For detailed configuration options including environment variables, file locations, and advanced settings, see the [Configuration Guide](docs/getting-started/configuration.md). ## License diff --git a/docs/development/building.md b/docs/development/building.md deleted file mode 100644 index a102915..0000000 --- a/docs/development/building.md +++ /dev/null @@ -1,464 +0,0 @@ -# Building from Source - -This guide covers building Llamactl from source code for development and production deployment. - -## Prerequisites - -### Required Tools - -- **Go 1.24+**: Download from [golang.org](https://golang.org/dl/) -- **Node.js 22+**: Download from [nodejs.org](https://nodejs.org/) -- **Git**: For cloning the repository -- **Make**: For build automation (optional) - -### System Requirements - -- **Memory**: 4GB+ RAM for building -- **Disk**: 2GB+ free space -- **OS**: Linux, macOS, or Windows - -## Quick Build - -### Clone and Build - -```bash -# Clone the repository -git clone https://github.com/lordmathis/llamactl.git -cd llamactl - -# Build the application -go build -o llamactl cmd/server/main.go -``` - -### Run - -```bash -./llamactl -``` - -## Development Build - -### Setup Development Environment - -```bash -# Clone repository -git clone https://github.com/lordmathis/llamactl.git -cd llamactl - -# Install Go dependencies -go mod download - -# Install frontend dependencies -cd webui -npm ci -cd .. -``` - -### Build Components - -```bash -# Build backend only -go build -o llamactl cmd/server/main.go - -# Build frontend only -cd webui -npm run build -cd .. - -# Build everything -make build -``` - -### Development Server - -```bash -# Run backend in development mode -go run cmd/server/main.go --dev - -# Run frontend dev server (separate terminal) -cd webui -npm run dev -``` - -## Production Build - -### Optimized Build - -```bash -# Build with optimizations -go build -ldflags="-s -w" -o llamactl cmd/server/main.go - -# Or use the Makefile -make build-prod -``` - -### Build Flags - -Common build flags for production: - -```bash -go build \ - -ldflags="-s -w -X main.version=1.0.0 -X main.buildTime=$(date -u +%Y-%m-%dT%H:%M:%SZ)" \ - -trimpath \ - -o llamactl \ - cmd/server/main.go -``` - -**Flag explanations:** -- `-s`: Strip symbol table -- `-w`: Strip debug information -- `-X`: Set variable values at build time -- `-trimpath`: Remove absolute paths from binary - -## Cross-Platform Building - -### Build for Multiple Platforms - -```bash -# Linux AMD64 -GOOS=linux GOARCH=amd64 go build -o llamactl-linux-amd64 cmd/server/main.go - -# Linux ARM64 -GOOS=linux GOARCH=arm64 go build -o llamactl-linux-arm64 cmd/server/main.go - -# macOS AMD64 -GOOS=darwin GOARCH=amd64 go build -o llamactl-darwin-amd64 cmd/server/main.go - -# macOS ARM64 (Apple Silicon) -GOOS=darwin GOARCH=arm64 go build -o llamactl-darwin-arm64 cmd/server/main.go - -# Windows AMD64 -GOOS=windows GOARCH=amd64 go build -o llamactl-windows-amd64.exe cmd/server/main.go -``` - -### Automated Cross-Building - -Use the provided Makefile: - -```bash -# Build all platforms -make build-all - -# Build specific platform -make build-linux -make build-darwin -make build-windows -``` - -## Build with Docker - -### Development Container - -```dockerfile -# Dockerfile.dev -FROM golang:1.24-alpine AS builder - -WORKDIR /app -COPY go.mod go.sum ./ -RUN go mod download - -COPY . . -RUN go build -o llamactl cmd/server/main.go - -FROM alpine:latest -RUN apk --no-cache add ca-certificates -WORKDIR /root/ -COPY --from=builder /app/llamactl . 
- -EXPOSE 8080 -CMD ["./llamactl"] -``` - -```bash -# Build development image -docker build -f Dockerfile.dev -t llamactl:dev . - -# Run container -docker run -p 8080:8080 llamactl:dev -``` - -### Production Container - -```dockerfile -# Dockerfile -FROM node:22-alpine AS frontend-builder - -WORKDIR /app/webui -COPY webui/package*.json ./ -RUN npm ci - -COPY webui/ ./ -RUN npm run build - -FROM golang:1.24-alpine AS backend-builder - -WORKDIR /app -COPY go.mod go.sum ./ -RUN go mod download - -COPY . . -COPY --from=frontend-builder /app/webui/dist ./webui/dist - -RUN CGO_ENABLED=0 GOOS=linux go build \ - -ldflags="-s -w" \ - -o llamactl \ - cmd/server/main.go - -FROM alpine:latest - -RUN apk --no-cache add ca-certificates tzdata -RUN adduser -D -s /bin/sh llamactl - -WORKDIR /home/llamactl -COPY --from=backend-builder /app/llamactl . -RUN chown llamactl:llamactl llamactl - -USER llamactl -EXPOSE 8080 - -CMD ["./llamactl"] -``` - -## Advanced Build Options - -### Static Linking - -For deployments without external dependencies: - -```bash -CGO_ENABLED=0 go build \ - -ldflags="-s -w -extldflags '-static'" \ - -o llamactl-static \ - cmd/server/main.go -``` - -### Debug Build - -Build with debug information: - -```bash -go build -gcflags="all=-N -l" -o llamactl-debug cmd/server/main.go -``` - -### Race Detection Build - -Build with race detection (development only): - -```bash -go build -race -o llamactl-race cmd/server/main.go -``` - -## Build Automation - -### Makefile - -```makefile -# Makefile -VERSION := $(shell git describe --tags --always --dirty) -BUILD_TIME := $(shell date -u +%Y-%m-%dT%H:%M:%SZ) -LDFLAGS := -s -w -X main.version=$(VERSION) -X main.buildTime=$(BUILD_TIME) - -.PHONY: build clean test install - -build: - @echo "Building Llamactl..." - @cd webui && npm run build - @go build -ldflags="$(LDFLAGS)" -o llamactl cmd/server/main.go - -build-prod: - @echo "Building production binary..." - @cd webui && npm run build - @CGO_ENABLED=0 go build -ldflags="$(LDFLAGS)" -trimpath -o llamactl cmd/server/main.go - -build-all: build-linux build-darwin build-windows - -build-linux: - @GOOS=linux GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-linux-amd64 cmd/server/main.go - @GOOS=linux GOARCH=arm64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-linux-arm64 cmd/server/main.go - -build-darwin: - @GOOS=darwin GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-darwin-amd64 cmd/server/main.go - @GOOS=darwin GOARCH=arm64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-darwin-arm64 cmd/server/main.go - -build-windows: - @GOOS=windows GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o dist/llamactl-windows-amd64.exe cmd/server/main.go - -test: - @go test ./... - -clean: - @rm -f llamactl llamactl-* - @rm -rf dist/ - -install: build - @cp llamactl $(GOPATH)/bin/llamactl -``` - -### GitHub Actions - -```yaml -# .github/workflows/build.yml -name: Build - -on: - push: - branches: [ main ] - pull_request: - branches: [ main ] - -jobs: - test: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - - name: Set up Go - uses: actions/setup-go@v4 - with: - go-version: '1.24' - - - name: Set up Node.js - uses: actions/setup-node@v4 - with: - node-version: '22' - - - name: Install dependencies - run: | - go mod download - cd webui && npm ci - - - name: Run tests - run: | - go test ./... 
- cd webui && npm test - - - name: Build - run: make build - - build: - needs: test - runs-on: ubuntu-latest - if: github.ref == 'refs/heads/main' - - steps: - - uses: actions/checkout@v4 - - - name: Set up Go - uses: actions/setup-go@v4 - with: - go-version: '1.24' - - - name: Set up Node.js - uses: actions/setup-node@v4 - with: - node-version: '22' - - - name: Build all platforms - run: make build-all - - - name: Upload artifacts - uses: actions/upload-artifact@v4 - with: - name: binaries - path: dist/ -``` - -## Build Troubleshooting - -### Common Issues - -**Go version mismatch:** -```bash -# Check Go version -go version - -# Update Go -# Download from https://golang.org/dl/ -``` - -**Node.js issues:** -```bash -# Clear npm cache -npm cache clean --force - -# Remove node_modules and reinstall -rm -rf webui/node_modules -cd webui && npm ci -``` - -**Build failures:** -```bash -# Clean and rebuild -make clean -go mod tidy -make build -``` - -### Performance Issues - -**Slow builds:** -```bash -# Use build cache -export GOCACHE=$(go env GOCACHE) - -# Parallel builds -export GOMAXPROCS=$(nproc) -``` - -**Large binary size:** -```bash -# Use UPX compression -upx --best llamactl - -# Analyze binary size -go tool nm -size llamactl | head -20 -``` - -## Deployment - -### System Service - -Create a systemd service: - -```ini -# /etc/systemd/system/llamactl.service -[Unit] -Description=Llamactl Server -After=network.target - -[Service] -Type=simple -User=llamactl -Group=llamactl -ExecStart=/usr/local/bin/llamactl -Restart=always -RestartSec=5 - -[Install] -WantedBy=multi-user.target -``` - -```bash -# Enable and start service -sudo systemctl enable llamactl -sudo systemctl start llamactl -``` - -### Configuration - -```bash -# Create configuration directory -sudo mkdir -p /etc/llamactl - -# Copy configuration -sudo cp config.yaml /etc/llamactl/ - -# Set permissions -sudo chown -R llamactl:llamactl /etc/llamactl -``` - -## Next Steps - -- Configure [Installation](../getting-started/installation.md) -- Set up [Configuration](../getting-started/configuration.md) -- Learn about [Contributing](contributing.md) diff --git a/docs/development/contributing.md b/docs/development/contributing.md deleted file mode 100644 index 3b27d90..0000000 --- a/docs/development/contributing.md +++ /dev/null @@ -1,373 +0,0 @@ -# Contributing - -Thank you for your interest in contributing to Llamactl! This guide will help you get started with development and contribution. - -## Development Setup - -### Prerequisites - -- Go 1.24 or later -- Node.js 22 or later -- `llama-server` executable (from [llama.cpp](https://github.com/ggml-org/llama.cpp)) -- Git - -### Getting Started - -1. **Fork and Clone** - ```bash - # Fork the repository on GitHub, then clone your fork - git clone https://github.com/yourusername/llamactl.git - cd llamactl - - # Add upstream remote - git remote add upstream https://github.com/lordmathis/llamactl.git - ``` - -2. **Install Dependencies** - ```bash - # Go dependencies - go mod download - - # Frontend dependencies - cd webui && npm ci && cd .. - ``` - -3. **Run Development Environment** - ```bash - # Start backend server - go run ./cmd/server - ``` - - In a separate terminal: - ```bash - # Start frontend dev server - cd webui && npm run dev - ``` - -## Development Workflow - -### Setting Up Your Environment - -1. **Configuration** - Create a development configuration file: - ```yaml - # dev-config.yaml - server: - host: "localhost" - port: 8080 - logging: - level: "debug" - ``` - -2. 
**Test Data** - Set up test models and instances for development. - -### Making Changes - -1. **Create a Branch** - ```bash - git checkout -b feature/your-feature-name - ``` - -2. **Development Commands** - ```bash - # Backend - go test ./... -v # Run tests - go test -race ./... -v # Run with race detector - go fmt ./... && go vet ./... # Format and vet code - go build ./cmd/server # Build binary - - # Frontend (from webui/ directory) - npm run test # Run tests - npm run lint # Lint code - npm run type-check # TypeScript check - npm run build # Build for production - ``` - -3. **Code Quality** - ```bash - # Run all checks before committing - make lint - make test - make build - ``` - -## Project Structure - -### Backend (Go) - -``` -cmd/ -├── server/ # Main application entry point -pkg/ -├── backends/ # Model backend implementations -├── config/ # Configuration management -├── instance/ # Instance lifecycle management -├── manager/ # Instance manager -├── server/ # HTTP server and routes -├── testutil/ # Test utilities -└── validation/ # Input validation -``` - -### Frontend (React/TypeScript) - -``` -webui/src/ -├── components/ # React components -├── contexts/ # React contexts -├── hooks/ # Custom hooks -├── lib/ # Utility libraries -├── schemas/ # Zod schemas -└── types/ # TypeScript types -``` - -## Coding Standards - -### Go Code - -- Follow standard Go formatting (`gofmt`) -- Use `go vet` and address all warnings -- Write comprehensive tests for new functionality -- Include documentation comments for exported functions -- Use meaningful variable and function names - -Example: -```go -// CreateInstance creates a new model instance with the given configuration. -// It validates the configuration and ensures the instance name is unique. -func (m *Manager) CreateInstance(ctx context.Context, config InstanceConfig) (*Instance, error) { - if err := config.Validate(); err != nil { - return nil, fmt.Errorf("invalid configuration: %w", err) - } - - // Implementation... -} -``` - -### TypeScript/React Code - -- Use TypeScript strict mode -- Follow React best practices -- Use functional components with hooks -- Implement proper error boundaries -- Write unit tests for components - -Example: -```typescript -interface InstanceCardProps { - instance: Instance; - onStart: (name: string) => Promise; - onStop: (name: string) => Promise; -} - -export const InstanceCard: React.FC = ({ - instance, - onStart, - onStop, -}) => { - // Implementation... -}; -``` - -## Testing - -### Backend Tests - -```bash -# Run all tests -go test ./... - -# Run tests with coverage -go test ./... -coverprofile=coverage.out -go tool cover -html=coverage.out - -# Run specific package tests -go test ./pkg/manager -v - -# Run with race detection -go test -race ./... -``` - -### Frontend Tests - -```bash -cd webui - -# Run unit tests -npm run test - -# Run tests with coverage -npm run test:coverage - -# Run E2E tests -npm run test:e2e -``` - -### Integration Tests - -```bash -# Run integration tests (requires llama-server) -go test ./... -tags=integration -``` - -## Pull Request Process - -### Before Submitting - -1. **Update your branch** - ```bash - git fetch upstream - git rebase upstream/main - ``` - -2. **Run all tests** - ```bash - make test-all - ``` - -3. **Update documentation** if needed - -4. 
**Write clear commit messages** - ``` - feat: add instance health monitoring - - - Implement health check endpoint - - Add periodic health monitoring - - Update API documentation - - Fixes #123 - ``` - -### Submitting a PR - -1. **Push your branch** - ```bash - git push origin feature/your-feature-name - ``` - -2. **Create Pull Request** - - Use the PR template - - Provide clear description - - Link related issues - - Add screenshots for UI changes - -3. **PR Review Process** - - Automated checks must pass - - Code review by maintainers - - Address feedback promptly - - Keep PR scope focused - -## Issue Guidelines - -### Reporting Bugs - -Use the bug report template and include: - -- Steps to reproduce -- Expected vs actual behavior -- Environment details (OS, Go version, etc.) -- Relevant logs or error messages -- Minimal reproduction case - -### Feature Requests - -Use the feature request template and include: - -- Clear description of the problem -- Proposed solution -- Alternative solutions considered -- Implementation complexity estimate - -### Security Issues - -For security vulnerabilities: -- Do NOT create public issues -- Email security@llamactl.dev -- Provide detailed description -- Allow time for fix before disclosure - -## Development Best Practices - -### API Design - -- Follow REST principles -- Use consistent naming conventions -- Provide comprehensive error messages -- Include proper HTTP status codes -- Document all endpoints - -### Error Handling - -```go -// Wrap errors with context -if err := instance.Start(); err != nil { - return fmt.Errorf("failed to start instance %s: %w", instance.Name, err) -} - -// Use structured logging -log.WithFields(log.Fields{ - "instance": instance.Name, - "error": err, -}).Error("Failed to start instance") -``` - -### Configuration - -- Use environment variables for deployment -- Provide sensible defaults -- Validate configuration on startup -- Support configuration file reloading - -### Performance - -- Profile code for bottlenecks -- Use efficient data structures -- Implement proper caching -- Monitor resource usage - -## Release Process - -### Version Management - -- Use semantic versioning (SemVer) -- Tag releases properly -- Maintain CHANGELOG.md -- Create release notes - -### Building Releases - -```bash -# Build all platforms -make build-all - -# Create release package -make package -``` - -## Getting Help - -### Communication Channels - -- **GitHub Issues**: Bug reports and feature requests -- **GitHub Discussions**: General questions and ideas -- **Code Review**: PR comments and feedback - -### Development Questions - -When asking for help: - -1. Check existing documentation -2. Search previous issues -3. Provide minimal reproduction case -4. Include relevant environment details - -## Recognition - -Contributors are recognized in: - -- CONTRIBUTORS.md file -- Release notes -- Documentation credits -- Annual contributor highlights - -Thank you for contributing to Llamactl! diff --git a/docs/getting-started/configuration.md b/docs/getting-started/configuration.md index e9ba2d3..3a859ee 100644 --- a/docs/getting-started/configuration.md +++ b/docs/getting-started/configuration.md @@ -1,59 +1,144 @@ # Configuration -Llamactl can be configured through various methods to suit your needs. +llamactl can be configured via configuration files or environment variables. 
Configuration is loaded in the following order of precedence: -## Configuration File +``` +Defaults < Configuration file < Environment variables +``` -Create a configuration file at `~/.llamactl/config.yaml`: +llamactl works out of the box with sensible defaults, but you can customize the behavior to suit your needs. + +## Default Configuration + +Here's the default configuration with all available options: ```yaml -# Server configuration server: - host: "0.0.0.0" - port: 8080 - cors_enabled: true + host: "0.0.0.0" # Server host to bind to + port: 8080 # Server port to bind to + allowed_origins: ["*"] # Allowed CORS origins (default: all) + enable_swagger: false # Enable Swagger UI for API docs + +instances: + port_range: [8000, 9000] # Port range for instances + data_dir: ~/.local/share/llamactl # Data directory (platform-specific, see below) + configs_dir: ~/.local/share/llamactl/instances # Instance configs directory + logs_dir: ~/.local/share/llamactl/logs # Logs directory + auto_create_dirs: true # Auto-create data/config/logs dirs if missing + max_instances: -1 # Max instances (-1 = unlimited) + max_running_instances: -1 # Max running instances (-1 = unlimited) + enable_lru_eviction: true # Enable LRU eviction for idle instances + llama_executable: llama-server # Path to llama-server executable + default_auto_restart: true # Auto-restart new instances by default + default_max_restarts: 3 # Max restarts for new instances + default_restart_delay: 5 # Restart delay (seconds) for new instances + default_on_demand_start: true # Default on-demand start setting + on_demand_start_timeout: 120 # Default on-demand start timeout in seconds + timeout_check_interval: 5 # Idle instance timeout check in minutes -# Authentication (optional) auth: - enabled: false - # When enabled, configure your authentication method - # jwt_secret: "your-secret-key" - -# Default instance settings -defaults: - backend: "llamacpp" - timeout: 300 - log_level: "info" - -# Paths -paths: - models_dir: "/path/to/your/models" - logs_dir: "/var/log/llamactl" - data_dir: "/var/lib/llamactl" - -# Instance limits -limits: - max_instances: 10 - max_memory_per_instance: "8GB" + require_inference_auth: true # Require auth for inference endpoints + inference_keys: [] # Keys for inference endpoints + require_management_auth: true # Require auth for management endpoints + management_keys: [] # Keys for management endpoints ``` -## Environment Variables +## Configuration Files -You can also configure Llamactl using environment variables: +### Configuration File Locations -```bash -# Server settings -export LLAMACTL_HOST=0.0.0.0 -export LLAMACTL_PORT=8080 +Configuration files are searched in the following locations (in order of precedence): -# Paths -export LLAMACTL_MODELS_DIR=/path/to/models -export LLAMACTL_LOGS_DIR=/var/log/llamactl +**Linux:** +- `./llamactl.yaml` or `./config.yaml` (current directory) +- `$HOME/.config/llamactl/config.yaml` +- `/etc/llamactl/config.yaml` -# Limits -export LLAMACTL_MAX_INSTANCES=5 +**macOS:** +- `./llamactl.yaml` or `./config.yaml` (current directory) +- `$HOME/Library/Application Support/llamactl/config.yaml` +- `/Library/Application Support/llamactl/config.yaml` + +**Windows:** +- `./llamactl.yaml` or `./config.yaml` (current directory) +- `%APPDATA%\llamactl\config.yaml` +- `%USERPROFILE%\llamactl\config.yaml` +- `%PROGRAMDATA%\llamactl\config.yaml` + +You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable. 
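+
+For example, a minimal custom setup might combine all three precedence levels like this (the file path and port numbers below are illustrative, not defaults):
+
+```bash
+# Write a small config file at a custom location
+cat > /opt/llamactl/config.yaml << 'EOF'
+server:
+  port: 9090
+EOF
+
+# Point llamactl at the custom config file
+export LLAMACTL_CONFIG_PATH=/opt/llamactl/config.yaml
+
+# Environment variables take precedence over the config file,
+# so this run binds to port 9999 instead of 9090
+LLAMACTL_PORT=9999 llamactl
+```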
+ +## Configuration Options + +### Server Configuration + +```yaml +server: + host: "0.0.0.0" # Server host to bind to (default: "0.0.0.0") + port: 8080 # Server port to bind to (default: 8080) + allowed_origins: ["*"] # CORS allowed origins (default: ["*"]) + enable_swagger: false # Enable Swagger UI (default: false) ``` +**Environment Variables:** +- `LLAMACTL_HOST` - Server host +- `LLAMACTL_PORT` - Server port +- `LLAMACTL_ALLOWED_ORIGINS` - Comma-separated CORS origins +- `LLAMACTL_ENABLE_SWAGGER` - Enable Swagger UI (true/false) + +### Instance Configuration + +```yaml +instances: + port_range: [8000, 9000] # Port range for instances (default: [8000, 9000]) + data_dir: "~/.local/share/llamactl" # Directory for all llamactl data (default varies by OS) + configs_dir: "~/.local/share/llamactl/instances" # Directory for instance configs (default: data_dir/instances) + logs_dir: "~/.local/share/llamactl/logs" # Directory for instance logs (default: data_dir/logs) + auto_create_dirs: true # Automatically create data/config/logs directories (default: true) + max_instances: -1 # Maximum instances (-1 = unlimited) + max_running_instances: -1 # Maximum running instances (-1 = unlimited) + enable_lru_eviction: true # Enable LRU eviction for idle instances + llama_executable: "llama-server" # Path to llama-server executable + default_auto_restart: true # Default auto-restart setting + default_max_restarts: 3 # Default maximum restart attempts + default_restart_delay: 5 # Default restart delay in seconds + default_on_demand_start: true # Default on-demand start setting + on_demand_start_timeout: 120 # Default on-demand start timeout in seconds + timeout_check_interval: 5 # Default instance timeout check interval in minutes +``` + +**Environment Variables:** +- `LLAMACTL_INSTANCE_PORT_RANGE` - Port range (format: "8000-9000" or "8000,9000") +- `LLAMACTL_DATA_DIRECTORY` - Data directory path +- `LLAMACTL_INSTANCES_DIR` - Instance configs directory path +- `LLAMACTL_LOGS_DIR` - Log directory path +- `LLAMACTL_AUTO_CREATE_DATA_DIR` - Auto-create data/config/logs directories (true/false) +- `LLAMACTL_MAX_INSTANCES` - Maximum number of instances +- `LLAMACTL_MAX_RUNNING_INSTANCES` - Maximum number of running instances +- `LLAMACTL_ENABLE_LRU_EVICTION` - Enable LRU eviction for idle instances +- `LLAMACTL_LLAMA_EXECUTABLE` - Path to llama-server executable +- `LLAMACTL_DEFAULT_AUTO_RESTART` - Default auto-restart setting (true/false) +- `LLAMACTL_DEFAULT_MAX_RESTARTS` - Default maximum restarts +- `LLAMACTL_DEFAULT_RESTART_DELAY` - Default restart delay in seconds +- `LLAMACTL_DEFAULT_ON_DEMAND_START` - Default on-demand start setting (true/false) +- `LLAMACTL_ON_DEMAND_START_TIMEOUT` - Default on-demand start timeout in seconds +- `LLAMACTL_TIMEOUT_CHECK_INTERVAL` - Default instance timeout check interval in minutes + +### Authentication Configuration + +```yaml +auth: + require_inference_auth: true # Require API key for OpenAI endpoints (default: true) + inference_keys: [] # List of valid inference API keys + require_management_auth: true # Require API key for management endpoints (default: true) + management_keys: [] # List of valid management API keys +``` + +**Environment Variables:** +- `LLAMACTL_REQUIRE_INFERENCE_AUTH` - Require auth for OpenAI endpoints (true/false) +- `LLAMACTL_INFERENCE_KEYS` - Comma-separated inference API keys +- `LLAMACTL_REQUIRE_MANAGEMENT_AUTH` - Require auth for management endpoints (true/false) +- `LLAMACTL_MANAGEMENT_KEYS` - Comma-separated management API keys + ## 
Command Line Options
 
 View all available command line options:
 
 ```bash
 llamactl --help
 ```
 
-Common options:
-
-```bash
-# Specify config file
-llamactl --config /path/to/config.yaml
-
-# Set log level
-llamactl --log-level debug
-
-# Run on different port
-llamactl --port 9090
-```
-
-## Instance Configuration
-
-When creating instances, you can specify various options:
-
-### Basic Options
-
-- `name`: Unique identifier for the instance
-- `model_path`: Path to the GGUF model file
-- `port`: Port for the instance to listen on
-
-### Advanced Options
-
-- `threads`: Number of CPU threads to use
-- `context_size`: Context window size
-- `batch_size`: Batch size for processing
-- `gpu_layers`: Number of layers to offload to GPU
-- `memory_lock`: Lock model in memory
-- `no_mmap`: Disable memory mapping
-
-### Example Instance Configuration
-
-```json
-{
-  "name": "production-model",
-  "model_path": "/models/llama-2-13b-chat.gguf",
-  "port": 8081,
-  "options": {
-    "threads": 8,
-    "context_size": 4096,
-    "batch_size": 512,
-    "gpu_layers": 35,
-    "memory_lock": true
-  }
-}
-```
-
-## Security Configuration
-
-### Enable Authentication
-
-To enable authentication, update your config file:
-
-```yaml
-auth:
-  enabled: true
-  jwt_secret: "your-very-secure-secret-key"
-  token_expiry: "24h"
-```
-
-### HTTPS Configuration
-
-For production deployments, configure HTTPS:
-
-```yaml
-server:
-  tls:
-    enabled: true
-    cert_file: "/path/to/cert.pem"
-    key_file: "/path/to/key.pem"
-```
-
-## Logging Configuration
-
-Configure logging levels and outputs:
-
-```yaml
-logging:
-  level: "info" # debug, info, warn, error
-  format: "json" # json or text
-  output: "/var/log/llamactl/app.log"
-```
+You can also override configuration using command line flags when starting llamactl.
+
+## Next Steps
+
+- Learn about [Managing Instances](../user-guide/managing-instances.md)
+- Explore [Advanced Configuration](../advanced/backends.md)
+- Set up [Monitoring](../advanced/monitoring.md)
 
diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md
index 7a71629..9ae35ed 100644
--- a/docs/getting-started/installation.md
+++ b/docs/getting-started/installation.md
@@ -4,9 +4,19 @@ This guide will walk you through installing Llamactl on your system.
## Prerequisites -Before installing Llamactl, ensure you have: +You need `llama-server` from [llama.cpp](https://github.com/ggml-org/llama.cpp) installed: -- Go 1.19 or later +```bash +# Quick install methods: +# Homebrew (macOS) +brew install llama.cpp + +# Or build from source - see llama.cpp docs +``` + +Additional requirements for building from source: +- Go 1.24 or later +- Node.js 22 or later - Git - Sufficient disk space for your models @@ -14,17 +24,18 @@ Before installing Llamactl, ensure you have: ### Option 1: Download Binary (Recommended) -Download the latest release from our [GitHub releases page](https://github.com/lordmathis/llamactl/releases): +Download the latest release from the [GitHub releases page](https://github.com/lordmathis/llamactl/releases): ```bash -# Download for Linux -curl -L https://github.com/lordmathis/llamactl/releases/latest/download/llamactl-linux-amd64 -o llamactl - -# Make executable -chmod +x llamactl - -# Move to PATH (optional) +# Linux/macOS - Get latest version and download +LATEST_VERSION=$(curl -s https://api.github.com/repos/lordmathis/llamactl/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/') +curl -L https://github.com/lordmathis/llamactl/releases/download/${LATEST_VERSION}/llamactl-${LATEST_VERSION}-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m).tar.gz | tar -xz sudo mv llamactl /usr/local/bin/ + +# Or download manually from: +# https://github.com/lordmathis/llamactl/releases/latest + +# Windows - Download from releases page ``` ### Option 2: Build from Source @@ -36,11 +47,12 @@ If you prefer to build from source: git clone https://github.com/lordmathis/llamactl.git cd llamactl -# Build the application -go build -o llamactl cmd/server/main.go -``` +# Build the web UI +cd webui && npm ci && npm run build && cd .. -For detailed build instructions, see the [Building from Source](../development/building.md) guide. +# Build the application +go build -o llamactl ./cmd/server +``` ## Verification diff --git a/docs/getting-started/quick-start.md b/docs/getting-started/quick-start.md index a882b10..11751c0 100644 --- a/docs/getting-started/quick-start.md +++ b/docs/getting-started/quick-start.md @@ -28,7 +28,6 @@ You should see the Llamactl web interface. 2. Fill in the instance configuration: - **Name**: Give your instance a descriptive name - **Model Path**: Path to your Llama.cpp model file - - **Port**: Port for the instance to run on - **Additional Options**: Any extra Llama.cpp parameters 3. Click "Create Instance" @@ -50,7 +49,6 @@ Here's a basic example configuration for a Llama 2 model: { "name": "llama2-7b", "model_path": "/path/to/llama-2-7b-chat.gguf", - "port": 8081, "options": { "threads": 4, "context_size": 2048 @@ -72,13 +70,70 @@ curl -X POST http://localhost:8080/api/instances \ -d '{ "name": "my-model", "model_path": "/path/to/model.gguf", - "port": 8081 }' # Start an instance curl -X POST http://localhost:8080/api/instances/my-model/start ``` +## OpenAI Compatible API + +Llamactl provides OpenAI-compatible endpoints, making it easy to integrate with existing OpenAI client libraries and tools. + +### Chat Completions + +Once you have an instance running, you can use it with the OpenAI-compatible chat completions endpoint: + +```bash +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -d '{ + "model": "my-model", + "messages": [ + { + "role": "user", + "content": "Hello! Can you help me write a Python function?" 
+      }
+    ],
+    "max_tokens": 150,
+    "temperature": 0.7
+  }'
+```
+
+### Using with Python OpenAI Client
+
+You can also use the official OpenAI Python client:
+
+```python
+from openai import OpenAI
+
+# Point the client to your Llamactl server
+client = OpenAI(
+    base_url="http://localhost:8080/v1",
+    api_key="not-needed"  # Llamactl doesn't require API keys by default
+)
+
+# Create a chat completion
+response = client.chat.completions.create(
+    model="my-model",  # Use the name of your instance
+    messages=[
+        {"role": "user", "content": "Explain quantum computing in simple terms"}
+    ],
+    max_tokens=200,
+    temperature=0.7
+)
+
+print(response.choices[0].message.content)
+```
+
+### List Available Models
+
+Get a list of running instances (models) in OpenAI-compatible format:
+
+```bash
+curl http://localhost:8080/v1/models
+```
+
 ## Next Steps
 
 - Learn more about the [Web UI](../user-guide/web-ui.md)
diff --git a/docs/index.md b/docs/index.md
index b45cae2..19f7508 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,12 +1,12 @@
 # Llamactl Documentation
 
-Welcome to the Llamactl documentation! Llamactl is a powerful management tool for Llama.cpp instances that provides both a web interface and REST API for managing large language models.
+Welcome to the Llamactl documentation! Llamactl is a powerful management tool for llama-server instances that provides both a web interface and REST API for managing large language models.
 
 ## What is Llamactl?
 
-Llamactl is designed to simplify the deployment and management of Llama.cpp instances. It provides:
+Llamactl is designed to simplify the deployment and management of llama-server instances. It provides:
 
-- **Instance Management**: Start, stop, and monitor multiple Llama.cpp instances
+- **Instance Management**: Start, stop, and monitor multiple llama-server instances
 - **Web UI**: User-friendly interface for managing your models
 - **REST API**: Programmatic access to all functionality
 - **Health Monitoring**: Real-time status and health checks
@@ -33,8 +33,7 @@ Llamactl is designed to simplify the deployment and management of Llama.cpp inst
 If you need help or have questions:
 
 - Check the [Troubleshooting](advanced/troubleshooting.md) guide
-- Visit our [GitHub repository](https://github.com/lordmathis/llamactl)
-- Read the [Contributing guide](development/contributing.md) to help improve Llamactl
+- Visit the [GitHub repository](https://github.com/lordmathis/llamactl)
 
 ---
 
diff --git a/mkdocs.yml b/mkdocs.yml
index f23c70e..4e7e107 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,6 +1,6 @@
-site_name: LlamaCtl Documentation
-site_description: User documentation for LlamaCtl - A management tool for Llama.cpp instances
-site_author: LlamaCtl Team
+site_name: Llamactl Documentation
+site_description: User documentation for Llamactl - A management tool for Llama.cpp instances
+site_author: Llamactl Team
 site_url: https://llamactl.org
 
 repo_name: lordmathis/llamactl
@@ -61,9 +61,6 @@ nav:
     - Backends: advanced/backends.md
     - Monitoring: advanced/monitoring.md
    - Troubleshooting: advanced/troubleshooting.md
-  - Development:
-    - Contributing: development/contributing.md
-    - Building from Source: development/building.md
 
 plugins:
   - search

From 92af14b350d8bb85aaaf828711c99d3da177a776 Mon Sep 17 00:00:00 2001
From: LordMathis
Date: Sun, 31 Aug 2025 15:50:45 +0200
Subject: [PATCH 04/13] Improve index.md

---
 docs/index.md | 34 +++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index 19f7508..a1730c7
100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,16 +1,27 @@ # Llamactl Documentation -Welcome to the Llamactl documentation! Llamactl is a powerful management tool for llama-server instances that provides both a web interface and REST API for managing large language models. +Welcome to the Llamactl documentation! **Management server and proxy for multiple llama.cpp instances with OpenAI-compatible API routing.** + +![Dashboard Screenshot](images/screenshot.png) ## What is Llamactl? -Llamactl is designed to simplify the deployment and management of llama-server instances. It provides: +Llamactl is designed to simplify the deployment and management of llama-server instances. It provides a modern solution for running multiple large language models with centralized management. -- **Instance Management**: Start, stop, and monitor multiple llama-server instances -- **Web UI**: User-friendly interface for managing your models -- **REST API**: Programmatic access to all functionality -- **Health Monitoring**: Real-time status and health checks -- **Configuration Management**: Easy setup and configuration options +## Why llamactl? + +🚀 **Multiple Model Serving**: Run different models simultaneously (7B for speed, 70B for quality) +🔗 **OpenAI API Compatible**: Drop-in replacement - route requests by model name +🌐 **Web Dashboard**: Modern React UI for visual management (unlike CLI-only tools) +🔐 **API Key Authentication**: Separate keys for management vs inference access +📊 **Instance Monitoring**: Health checks, auto-restart, log management +⚡ **Smart Resource Management**: Idle timeout, LRU eviction, and configurable instance limits +💡 **On-Demand Instance Start**: Automatically launch instances upon receiving OpenAI-compatible API requests +💾 **State Persistence**: Ensure instances remain intact across server restarts + +**Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances +**Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations +**Choose LM Studio if**: You prefer a polished desktop GUI experience with easy model management ## Key Features @@ -24,9 +35,13 @@ Llamactl is designed to simplify the deployment and management of llama-server i ## Quick Links - [Installation Guide](getting-started/installation.md) - Get Llamactl up and running +- [Configuration Guide](getting-started/configuration.md) - Detailed configuration options - [Quick Start](getting-started/quick-start.md) - Your first steps with Llamactl - [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface +- [Managing Instances](user-guide/managing-instances.md) - Instance lifecycle management - [API Reference](user-guide/api-reference.md) - Complete API documentation +- [Monitoring](advanced/monitoring.md) - Health checks and monitoring +- [Backends](advanced/backends.md) - Backend configuration options ## Getting Help @@ -34,7 +49,8 @@ If you need help or have questions: - Check the [Troubleshooting](advanced/troubleshooting.md) guide - Visit the [GitHub repository](https://github.com/lordmathis/llamactl) +- Review the [Configuration Guide](getting-started/configuration.md) for advanced settings ---- +## License -Ready to get started? Head over to the [Installation Guide](getting-started/installation.md)! +MIT License - see the [LICENSE](https://github.com/lordmathis/llamactl/blob/main/LICENSE) file. 
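The API key separation called out above can be exercised end to end. A minimal sketch, assuming the `LLAMACTL_MANAGEMENT_KEYS` and `LLAMACTL_INFERENCE_KEYS` variables from the configuration guide and the endpoint paths documented in the API reference later in this series; the key values are placeholders, not real keys:

```bash
# Hypothetical keys, for illustration only
export LLAMACTL_MANAGEMENT_KEYS="sk-management-example"
export LLAMACTL_INFERENCE_KEYS="sk-inference-example"
llamactl &

# Management key: instance management endpoints
curl -H "Authorization: Bearer sk-management-example" \
  http://localhost:8080/api/v1/instances

# Inference key: OpenAI-compatible endpoints
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-inference-example" \
  -d '{"model": "llama2-7b", "messages": [{"role": "user", "content": "Hello"}]}'
```

With both `require_management_auth` and `require_inference_auth` enabled, each key type should only be accepted by its own endpoint group.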
From b08f15c5d0e30c26017b787b603e2d6a0bba80af Mon Sep 17 00:00:00 2001 From: LordMathis Date: Sun, 31 Aug 2025 16:04:09 +0200 Subject: [PATCH 05/13] Remove misleading advanced section --- docs/advanced/backends.md | 316 ------------- docs/advanced/monitoring.md | 420 ------------------ docs/getting-started/configuration.md | 12 - docs/index.md | 5 +- docs/user-guide/api-reference.md | 6 - docs/user-guide/managing-instances.md | 6 - .../troubleshooting.md | 6 - docs/user-guide/web-ui.md | 6 - mkdocs.yml | 5 +- 9 files changed, 3 insertions(+), 779 deletions(-) delete mode 100644 docs/advanced/backends.md delete mode 100644 docs/advanced/monitoring.md rename docs/{advanced => user-guide}/troubleshooting.md (98%) diff --git a/docs/advanced/backends.md b/docs/advanced/backends.md deleted file mode 100644 index 0491bc4..0000000 --- a/docs/advanced/backends.md +++ /dev/null @@ -1,316 +0,0 @@ -# Backends - -Llamactl supports multiple backends for running large language models. This guide covers the available backends and their configuration. - -## Llama.cpp Backend - -The primary backend for Llamactl, providing robust support for GGUF models. - -### Features - -- **GGUF Support**: Native support for GGUF model format -- **GPU Acceleration**: CUDA, OpenCL, and Metal support -- **Memory Optimization**: Efficient memory usage and mapping -- **Multi-threading**: Configurable CPU thread utilization -- **Quantization**: Support for various quantization levels - -### Configuration - -```yaml -backends: - llamacpp: - binary_path: "/usr/local/bin/llama-server" - default_options: - threads: 4 - context_size: 2048 - batch_size: 512 - gpu: - enabled: true - layers: 35 -``` - -### Supported Options - -| Option | Description | Default | -|--------|-------------|---------| -| `threads` | Number of CPU threads | 4 | -| `context_size` | Context window size | 2048 | -| `batch_size` | Batch size for processing | 512 | -| `gpu_layers` | Layers to offload to GPU | 0 | -| `memory_lock` | Lock model in memory | false | -| `no_mmap` | Disable memory mapping | false | -| `rope_freq_base` | RoPE frequency base | 10000 | -| `rope_freq_scale` | RoPE frequency scale | 1.0 | - -### GPU Acceleration - -#### CUDA Setup - -```bash -# Install CUDA toolkit -sudo apt update -sudo apt install nvidia-cuda-toolkit - -# Verify CUDA installation -nvcc --version -nvidia-smi -``` - -#### Configuration for GPU - -```json -{ - "name": "gpu-accelerated", - "model_path": "/models/llama-2-13b.gguf", - "port": 8081, - "options": { - "gpu_layers": 35, - "threads": 2, - "context_size": 4096 - } -} -``` - -### Performance Tuning - -#### Memory Optimization - -```yaml -# For limited memory systems -options: - context_size: 1024 - batch_size: 256 - no_mmap: true - memory_lock: false - -# For high-memory systems -options: - context_size: 8192 - batch_size: 1024 - memory_lock: true - no_mmap: false -``` - -#### CPU Optimization - -```yaml -# Match thread count to CPU cores -# For 8-core CPU: -options: - threads: 6 # Leave 2 cores for system - -# For high-performance CPUs: -options: - threads: 16 - batch_size: 1024 -``` - -## Future Backends - -Llamactl is designed to support multiple backends. 
Planned additions: - -### vLLM Backend - -High-performance inference engine optimized for serving: - -- **Features**: Fast inference, batching, streaming -- **Models**: Supports various model formats -- **Scaling**: Horizontal scaling support - -### TensorRT-LLM Backend - -NVIDIA's optimized inference engine: - -- **Features**: Maximum GPU performance -- **Models**: Optimized for NVIDIA GPUs -- **Deployment**: Production-ready inference - -### Ollama Backend - -Integration with Ollama for easy model management: - -- **Features**: Simplified model downloading -- **Models**: Large model library -- **Integration**: Seamless model switching - -## Backend Selection - -### Automatic Detection - -Llamactl can automatically detect the best backend: - -```yaml -backends: - auto_detect: true - preference_order: - - "llamacpp" - - "vllm" - - "tensorrt" -``` - -### Manual Selection - -Force a specific backend for an instance: - -```json -{ - "name": "manual-backend", - "backend": "llamacpp", - "model_path": "/models/model.gguf", - "port": 8081 -} -``` - -## Backend-Specific Features - -### Llama.cpp Features - -#### Model Formats - -- **GGUF**: Primary format, best compatibility -- **GGML**: Legacy format (limited support) - -#### Quantization Levels - -- `Q2_K`: Smallest size, lower quality -- `Q4_K_M`: Balanced size and quality -- `Q5_K_M`: Higher quality, larger size -- `Q6_K`: Near-original quality -- `Q8_0`: Minimal loss, largest size - -#### Advanced Options - -```yaml -advanced: - rope_scaling: - type: "linear" - factor: 2.0 - attention: - flash_attention: true - grouped_query: true -``` - -## Monitoring Backend Performance - -### Metrics Collection - -Monitor backend-specific metrics: - -```bash -# Get backend statistics -curl http://localhost:8080/api/instances/my-instance/backend/stats -``` - -**Response:** -```json -{ - "backend": "llamacpp", - "version": "b1234", - "metrics": { - "tokens_per_second": 15.2, - "memory_usage": 4294967296, - "gpu_utilization": 85.5, - "context_usage": 75.0 - } -} -``` - -### Performance Optimization - -#### Benchmark Different Configurations - -```bash -# Test various thread counts -for threads in 2 4 8 16; do - echo "Testing $threads threads" - curl -X PUT http://localhost:8080/api/instances/benchmark \ - -d "{\"options\": {\"threads\": $threads}}" - # Run performance test -done -``` - -#### Memory Usage Optimization - -```bash -# Monitor memory usage -watch -n 1 'curl -s http://localhost:8080/api/instances/my-instance/stats | jq .memory_usage' -``` - -## Troubleshooting Backends - -### Common Llama.cpp Issues - -**Model won't load:** -```bash -# Check model file -file /path/to/model.gguf - -# Verify format -llama-server --model /path/to/model.gguf --dry-run -``` - -**GPU not detected:** -```bash -# Check CUDA installation -nvidia-smi - -# Verify llama.cpp GPU support -llama-server --help | grep -i gpu -``` - -**Performance issues:** -```bash -# Check system resources -htop -nvidia-smi - -# Verify configuration -curl http://localhost:8080/api/instances/my-instance/config -``` - -## Custom Backend Development - -### Backend Interface - -Implement the backend interface for custom backends: - -```go -type Backend interface { - Start(config InstanceConfig) error - Stop(instance *Instance) error - Health(instance *Instance) (*HealthStatus, error) - Stats(instance *Instance) (*Stats, error) -} -``` - -### Registration - -Register your custom backend: - -```go -func init() { - backends.Register("custom", &CustomBackend{}) -} -``` - -## Best Practices - -### 
Production Deployments - -1. **Resource allocation**: Plan for peak usage -2. **Backend selection**: Choose based on requirements -3. **Monitoring**: Set up comprehensive monitoring -4. **Fallback**: Configure backup backends - -### Development - -1. **Rapid iteration**: Use smaller models -2. **Resource monitoring**: Track usage patterns -3. **Configuration testing**: Validate settings -4. **Performance profiling**: Optimize bottlenecks - -## Next Steps - -- Learn about [Monitoring](monitoring.md) backend performance -- Explore [Troubleshooting](troubleshooting.md) guides -- Set up [Production Monitoring](monitoring.md) diff --git a/docs/advanced/monitoring.md b/docs/advanced/monitoring.md deleted file mode 100644 index 71051e6..0000000 --- a/docs/advanced/monitoring.md +++ /dev/null @@ -1,420 +0,0 @@ -# Monitoring - -Comprehensive monitoring setup for Llamactl in production environments. - -## Overview - -Effective monitoring of Llamactl involves tracking: - -- Instance health and performance -- System resource usage -- API response times -- Error rates and alerts - -## Built-in Monitoring - -### Health Checks - -Llamactl provides built-in health monitoring: - -```bash -# Check overall system health -curl http://localhost:8080/api/system/health - -# Check specific instance health -curl http://localhost:8080/api/instances/{name}/health -``` - -### Metrics Endpoint - -Access Prometheus-compatible metrics: - -```bash -curl http://localhost:8080/metrics -``` - -**Available Metrics:** -- `llamactl_instances_total`: Total number of instances -- `llamactl_instances_running`: Number of running instances -- `llamactl_instance_memory_bytes`: Instance memory usage -- `llamactl_instance_cpu_percent`: Instance CPU usage -- `llamactl_api_requests_total`: Total API requests -- `llamactl_api_request_duration_seconds`: API response times - -## Prometheus Integration - -### Configuration - -Add Llamactl as a Prometheus target: - -```yaml -# prometheus.yml -scrape_configs: - - job_name: 'llamactl' - static_configs: - - targets: ['localhost:8080'] - metrics_path: '/metrics' - scrape_interval: 15s -``` - -### Custom Metrics - -Enable additional metrics in Llamactl: - -```yaml -# config.yaml -monitoring: - enabled: true - prometheus: - enabled: true - path: "/metrics" - metrics: - - instance_stats - - api_performance - - system_resources -``` - -## Grafana Dashboards - -### Llamactl Dashboard - -Import the official Grafana dashboard: - -1. Download dashboard JSON from releases -2. Import into Grafana -3. 
Configure Prometheus data source - -### Key Panels - -**Instance Overview:** -- Instance count and status -- Resource usage per instance -- Health status indicators - -**Performance Metrics:** -- API response times -- Tokens per second -- Memory usage trends - -**System Resources:** -- CPU and memory utilization -- Disk I/O and network usage -- GPU utilization (if applicable) - -### Custom Queries - -**Instance Uptime:** -```promql -(time() - llamactl_instance_start_time_seconds) / 3600 -``` - -**Memory Usage Percentage:** -```promql -(llamactl_instance_memory_bytes / llamactl_system_memory_total_bytes) * 100 -``` - -**API Error Rate:** -```promql -rate(llamactl_api_requests_total{status=~"4.."}[5m]) / rate(llamactl_api_requests_total[5m]) * 100 -``` - -## Alerting - -### Prometheus Alerts - -Configure alerts for critical conditions: - -```yaml -# alerts.yml -groups: - - name: llamactl - rules: - - alert: InstanceDown - expr: llamactl_instance_up == 0 - for: 1m - labels: - severity: critical - annotations: - summary: "Llamactl instance {{ $labels.instance_name }} is down" - - - alert: HighMemoryUsage - expr: llamactl_instance_memory_percent > 90 - for: 5m - labels: - severity: warning - annotations: - summary: "High memory usage on {{ $labels.instance_name }}" - - - alert: APIHighLatency - expr: histogram_quantile(0.95, rate(llamactl_api_request_duration_seconds_bucket[5m])) > 2 - for: 2m - labels: - severity: warning - annotations: - summary: "High API latency detected" -``` - -### Notification Channels - -Configure alert notifications: - -**Slack Integration:** -```yaml -# alertmanager.yml -route: - group_by: ['alertname'] - receiver: 'slack' - -receivers: - - name: 'slack' - slack_configs: - - api_url: 'https://hooks.slack.com/services/...' - channel: '#alerts' - title: 'Llamactl Alert' - text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}' -``` - -## Log Management - -### Centralized Logging - -Configure log aggregation: - -```yaml -# config.yaml -logging: - level: "info" - output: "json" - destinations: - - type: "file" - path: "/var/log/llamactl/app.log" - - type: "syslog" - facility: "local0" - - type: "elasticsearch" - url: "http://elasticsearch:9200" -``` - -### Log Analysis - -Use ELK stack for log analysis: - -**Elasticsearch Index Template:** -```json -{ - "index_patterns": ["llamactl-*"], - "mappings": { - "properties": { - "timestamp": {"type": "date"}, - "level": {"type": "keyword"}, - "message": {"type": "text"}, - "instance": {"type": "keyword"}, - "component": {"type": "keyword"} - } - } -} -``` - -**Kibana Visualizations:** -- Log volume over time -- Error rate by instance -- Performance trends -- Resource usage patterns - -## Application Performance Monitoring - -### OpenTelemetry Integration - -Enable distributed tracing: - -```yaml -# config.yaml -telemetry: - enabled: true - otlp: - endpoint: "http://jaeger:14268/api/traces" - sampling_rate: 0.1 -``` - -### Custom Spans - -Add custom tracing to track operations: - -```go -ctx, span := tracer.Start(ctx, "instance.start") -defer span.End() - -// Track instance startup time -span.SetAttributes( - attribute.String("instance.name", name), - attribute.String("model.path", modelPath), -) -``` - -## Health Check Configuration - -### Readiness Probes - -Configure Kubernetes readiness probes: - -```yaml -readinessProbe: - httpGet: - path: /api/health - port: 8080 - initialDelaySeconds: 30 - periodSeconds: 10 -``` - -### Liveness Probes - -Configure liveness probes: - -```yaml -livenessProbe: - httpGet: - path: 
/api/health/live - port: 8080 - initialDelaySeconds: 60 - periodSeconds: 30 -``` - -### Custom Health Checks - -Implement custom health checks: - -```go -func (h *HealthHandler) CustomCheck(ctx context.Context) error { - // Check database connectivity - if err := h.db.Ping(); err != nil { - return fmt.Errorf("database unreachable: %w", err) - } - - // Check instance responsiveness - for _, instance := range h.instances { - if !instance.IsHealthy() { - return fmt.Errorf("instance %s unhealthy", instance.Name) - } - } - - return nil -} -``` - -## Performance Profiling - -### pprof Integration - -Enable Go profiling: - -```yaml -# config.yaml -debug: - pprof_enabled: true - pprof_port: 6060 -``` - -Access profiling endpoints: -```bash -# CPU profile -go tool pprof http://localhost:6060/debug/pprof/profile - -# Memory profile -go tool pprof http://localhost:6060/debug/pprof/heap - -# Goroutine profile -go tool pprof http://localhost:6060/debug/pprof/goroutine -``` - -### Continuous Profiling - -Set up continuous profiling with Pyroscope: - -```yaml -# config.yaml -profiling: - enabled: true - pyroscope: - server_address: "http://pyroscope:4040" - application_name: "llamactl" -``` - -## Security Monitoring - -### Audit Logging - -Enable security audit logs: - -```yaml -# config.yaml -audit: - enabled: true - log_file: "/var/log/llamactl/audit.log" - events: - - "auth.login" - - "auth.logout" - - "instance.create" - - "instance.delete" - - "config.update" -``` - -### Rate Limiting Monitoring - -Track rate limiting metrics: - -```bash -# Monitor rate limit hits -curl http://localhost:8080/metrics | grep rate_limit -``` - -## Troubleshooting Monitoring - -### Common Issues - -**Metrics not appearing:** -1. Check Prometheus configuration -2. Verify network connectivity -3. Review Llamactl logs for errors - -**High memory usage:** -1. Check for memory leaks in profiles -2. Monitor garbage collection metrics -3. Review instance configurations - -**Alert fatigue:** -1. Tune alert thresholds -2. Implement alert severity levels -3. Use alert routing and suppression - -### Debug Tools - -**Monitoring health:** -```bash -# Check monitoring endpoints -curl -v http://localhost:8080/metrics -curl -v http://localhost:8080/api/health - -# Review logs -tail -f /var/log/llamactl/app.log -``` - -## Best Practices - -### Production Monitoring - -1. **Comprehensive coverage**: Monitor all critical components -2. **Appropriate alerting**: Balance sensitivity and noise -3. **Regular review**: Analyze trends and patterns -4. **Documentation**: Maintain runbooks for alerts - -### Performance Optimization - -1. **Baseline establishment**: Know normal operating parameters -2. **Trend analysis**: Identify performance degradation early -3. **Capacity planning**: Monitor resource growth trends -4. **Optimization cycles**: Regular performance tuning - -## Next Steps - -- Set up [Troubleshooting](troubleshooting.md) procedures -- Learn about [Backend optimization](backends.md) -- Configure [Production deployment](../development/building.md) diff --git a/docs/getting-started/configuration.md b/docs/getting-started/configuration.md index 3a859ee..25256de 100644 --- a/docs/getting-started/configuration.md +++ b/docs/getting-started/configuration.md @@ -148,15 +148,3 @@ llamactl --help ``` You can also override configuration using command line flags when starting llamactl. 
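The individual flags are not enumerated here, but the environment variables documented earlier in this guide provide the same kind of per-run override. A minimal sketch using only variables listed above (the values are arbitrary examples):

```bash
# Override the server port and instance limit for a single run
LLAMACTL_PORT=9090 LLAMACTL_MAX_INSTANCES=2 llamactl
```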
- -## Next Steps - -- Learn about [Managing Instances](../user-guide/managing-instances.md) -- Explore [Advanced Configuration](../advanced/monitoring.md) -- Set up [Monitoring](../advanced/monitoring.md) - -## Next Steps - -- Learn about [Managing Instances](../user-guide/managing-instances.md) -- Explore [Advanced Configuration](../advanced/monitoring.md) -- Set up [Monitoring](../advanced/monitoring.md) diff --git a/docs/index.md b/docs/index.md index a1730c7..0637fdc 100644 --- a/docs/index.md +++ b/docs/index.md @@ -40,14 +40,13 @@ Llamactl is designed to simplify the deployment and management of llama-server i - [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface - [Managing Instances](user-guide/managing-instances.md) - Instance lifecycle management - [API Reference](user-guide/api-reference.md) - Complete API documentation -- [Monitoring](advanced/monitoring.md) - Health checks and monitoring -- [Backends](advanced/backends.md) - Backend configuration options + ## Getting Help If you need help or have questions: -- Check the [Troubleshooting](advanced/troubleshooting.md) guide +- Check the [Troubleshooting](user-guide/troubleshooting.md) guide - Visit the [GitHub repository](https://github.com/lordmathis/llamactl) - Review the [Configuration Guide](getting-started/configuration.md) for advanced settings diff --git a/docs/user-guide/api-reference.md b/docs/user-guide/api-reference.md index 56c274d..fcd88f3 100644 --- a/docs/user-guide/api-reference.md +++ b/docs/user-guide/api-reference.md @@ -462,9 +462,3 @@ curl -X POST http://localhost:8080/api/instances/example/stop # Delete instance curl -X DELETE http://localhost:8080/api/instances/example ``` - -## Next Steps - -- Learn about [Managing Instances](managing-instances.md) in detail -- Explore [Advanced Configuration](../advanced/backends.md) -- Set up [Monitoring](../advanced/monitoring.md) for production use diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md index fa1102e..14bbd71 100644 --- a/docs/user-guide/managing-instances.md +++ b/docs/user-guide/managing-instances.md @@ -163,9 +163,3 @@ curl -X POST http://localhost:8080/api/instances/stop-all # Get status of all instances curl http://localhost:8080/api/instances ``` - -## Next Steps - -- Learn about the [Web UI](web-ui.md) interface -- Explore the complete [API Reference](api-reference.md) -- Set up [Monitoring](../advanced/monitoring.md) for production use diff --git a/docs/advanced/troubleshooting.md b/docs/user-guide/troubleshooting.md similarity index 98% rename from docs/advanced/troubleshooting.md rename to docs/user-guide/troubleshooting.md index 1d070d3..2cd299f 100644 --- a/docs/advanced/troubleshooting.md +++ b/docs/user-guide/troubleshooting.md @@ -552,9 +552,3 @@ cp ~/.llamactl/config.yaml ~/.llamactl/config.yaml.backup # Backup instance configurations curl http://localhost:8080/api/instances > instances-backup.json ``` - -## Next Steps - -- Set up [Monitoring](monitoring.md) to prevent issues -- Learn about [Advanced Configuration](backends.md) -- Review [Best Practices](../development/contributing.md) diff --git a/docs/user-guide/web-ui.md b/docs/user-guide/web-ui.md index 9b1ba29..6a3c4c1 100644 --- a/docs/user-guide/web-ui.md +++ b/docs/user-guide/web-ui.md @@ -208,9 +208,3 @@ Some features may be limited on mobile: - Log viewing (use horizontal scrolling) - Complex configuration forms - File browser functionality - -## Next Steps - -- Learn about [API Reference](api-reference.md) for programmatic 
access -- Set up [Monitoring](../advanced/monitoring.md) for production use -- Explore [Advanced Configuration](../advanced/backends.md) options diff --git a/mkdocs.yml b/mkdocs.yml index 4e7e107..f9fbe3d 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -57,10 +57,7 @@ nav: - Managing Instances: user-guide/managing-instances.md - Web UI: user-guide/web-ui.md - API Reference: user-guide/api-reference.md - - Advanced: - - Backends: advanced/backends.md - - Monitoring: advanced/monitoring.md - - Troubleshooting: advanced/troubleshooting.md + - Troubleshooting: user-guide/troubleshooting.md plugins: - search From 81a6c14bf6dbeb3906695945a83f8ed664f5ee6d Mon Sep 17 00:00:00 2001 From: LordMathis Date: Sun, 31 Aug 2025 16:05:19 +0200 Subject: [PATCH 06/13] Update api docs --- apidocs/docs.go | 32 ++++++++++++++++++++++++++++---- apidocs/swagger.json | 32 ++++++++++++++++++++++++++++---- apidocs/swagger.yaml | 24 ++++++++++++++++++++---- 3 files changed, 76 insertions(+), 12 deletions(-) diff --git a/apidocs/docs.go b/apidocs/docs.go index 78bd3c0..7ea502e 100644 --- a/apidocs/docs.go +++ b/apidocs/docs.go @@ -884,6 +884,10 @@ const docTemplate = `{ "host": { "type": "string" }, + "idle_timeout": { + "description": "Idle timeout", + "type": "integer" + }, "ignore_eos": { "type": "boolean" }, @@ -1018,6 +1022,10 @@ const docTemplate = `{ "numa": { "type": "string" }, + "on_demand_start": { + "description": "On demand start", + "type": "boolean" + }, "override_kv": { "type": "array", "items": { @@ -1078,8 +1086,7 @@ const docTemplate = `{ "reranking": { "type": "boolean" }, - "restart_delay_seconds": { - "description": "RestartDelay duration in seconds", + "restart_delay": { "type": "integer" }, "rope_freq_base": { @@ -1194,6 +1201,19 @@ const docTemplate = `{ } } }, + "instance.InstanceStatus": { + "type": "integer", + "enum": [ + 0, + 1, + 2 + ], + "x-enum-varnames": [ + "Stopped", + "Running", + "Failed" + ] + }, "instance.Process": { "type": "object", "properties": { @@ -1204,9 +1224,13 @@ const docTemplate = `{ "name": { "type": "string" }, - "running": { + "status": { "description": "Status", - "type": "boolean" + "allOf": [ + { + "$ref": "#/definitions/instance.InstanceStatus" + } + ] } } }, diff --git a/apidocs/swagger.json b/apidocs/swagger.json index 95493f1..be8d193 100644 --- a/apidocs/swagger.json +++ b/apidocs/swagger.json @@ -877,6 +877,10 @@ "host": { "type": "string" }, + "idle_timeout": { + "description": "Idle timeout", + "type": "integer" + }, "ignore_eos": { "type": "boolean" }, @@ -1011,6 +1015,10 @@ "numa": { "type": "string" }, + "on_demand_start": { + "description": "On demand start", + "type": "boolean" + }, "override_kv": { "type": "array", "items": { @@ -1071,8 +1079,7 @@ "reranking": { "type": "boolean" }, - "restart_delay_seconds": { - "description": "RestartDelay duration in seconds", + "restart_delay": { "type": "integer" }, "rope_freq_base": { @@ -1187,6 +1194,19 @@ } } }, + "instance.InstanceStatus": { + "type": "integer", + "enum": [ + 0, + 1, + 2 + ], + "x-enum-varnames": [ + "Stopped", + "Running", + "Failed" + ] + }, "instance.Process": { "type": "object", "properties": { @@ -1197,9 +1217,13 @@ "name": { "type": "string" }, - "running": { + "status": { "description": "Status", - "type": "boolean" + "allOf": [ + { + "$ref": "#/definitions/instance.InstanceStatus" + } + ] } } }, diff --git a/apidocs/swagger.yaml b/apidocs/swagger.yaml index c32e7f5..bc6e4ec 100644 --- a/apidocs/swagger.yaml +++ b/apidocs/swagger.yaml @@ -136,6 +136,9 @@ definitions: type: string host: 
      type: string
+    idle_timeout:
+      description: Idle timeout
+      type: integer
     ignore_eos:
       type: boolean
     jinja:
       type: boolean
@@ -226,6 +229,9 @@ definitions:
       type: boolean
     numa:
       type: string
+    on_demand_start:
+      description: On demand start
+      type: boolean
     override_kv:
       items:
         type: string
@@ -266,8 +272,7 @@ definitions:
       type: number
     reranking:
       type: boolean
-    restart_delay_seconds:
-      description: RestartDelay duration in seconds
+    restart_delay:
       type: integer
     rope_freq_base:
       type: number
@@ -344,6 +349,16 @@ definitions:
     yarn_orig_ctx:
       type: integer
   type: object
+  instance.InstanceStatus:
+    enum:
+    - 0
+    - 1
+    - 2
+    type: integer
+    x-enum-varnames:
+    - Stopped
+    - Running
+    - Failed
   instance.Process:
     properties:
       created:
         description: Creation time
         type: integer
       name:
         type: string
-      running:
+      status:
+        allOf:
+        - $ref: '#/definitions/instance.InstanceStatus'
         description: Status
-        type: boolean
     type: object
   server.OpenAIInstance:
     properties:

From 131b1b407d413be09cc7a50fcf5e18cdf8f4fc0b Mon Sep 17 00:00:00 2001
From: LordMathis
Date: Sun, 31 Aug 2025 16:21:18 +0200
Subject: [PATCH 07/13] Update api-referrence

---
 docs/user-guide/api-reference.md | 516 ++++++++++++++-----------------
 1 file changed, 232 insertions(+), 284 deletions(-)

diff --git a/docs/user-guide/api-reference.md b/docs/user-guide/api-reference.md
index fcd88f3..1152ebe 100644
--- a/docs/user-guide/api-reference.md
+++ b/docs/user-guide/api-reference.md
@@ -7,18 +7,69 @@
 
 Complete reference for the Llamactl REST API.
 
 ## Base URL
 
 All API endpoints are relative to the base URL:
 
 ```
-http://localhost:8080/api
+http://localhost:8080/api/v1
 ```
 
 ## Authentication
 
-If authentication is enabled, include the JWT token in the Authorization header:
+Llamactl supports API key authentication. If authentication is enabled, include the API key in the Authorization header:
 
 ```bash
-curl -H "Authorization: Bearer <token>" \
-  http://localhost:8080/api/instances
+curl -H "Authorization: Bearer <api-key>" \
+  http://localhost:8080/api/v1/instances
 ```
 
+The server supports two types of API keys:
+- **Management API Keys**: Required for instance management operations (CRUD operations on instances)
+- **Inference API Keys**: Required for OpenAI-compatible inference endpoints
+
+## System Endpoints
+
+### Get Llamactl Version
+
+Get the version information of the llamactl server.
+
+```http
+GET /api/v1/version
+```
+
+**Response:**
+```
+Version: 1.0.0
+Commit: abc123
+Build Time: 2024-01-15T10:00:00Z
+```
+
+### Get Llama Server Help
+
+Get help text for the llama-server command.
+
+```http
+GET /api/v1/server/help
+```
+
+**Response:** Plain text help output from `llama-server --help`
+
+### Get Llama Server Version
+
+Get version information of the llama-server binary.
+
+```http
+GET /api/v1/server/version
+```
+
+**Response:** Plain text version output from `llama-server --version`
+
+### List Available Devices
+
+List available devices for llama-server.
+
+```http
+GET /api/v1/server/devices
+```
+
+**Response:** Plain text device list from `llama-server --list-devices`
+
 ## Instances
 
 ### List All Instances
 
 Get a list of all instances.
```http -GET /api/instances +GET /api/v1/instances ``` **Response:** ```json -{ - "instances": [ - { - "name": "llama2-7b", - "status": "running", - "model_path": "/models/llama-2-7b.gguf", - "port": 8081, - "created_at": "2024-01-15T10:30:00Z", - "updated_at": "2024-01-15T12:45:00Z" - } - ] -} +[ + { + "name": "llama2-7b", + "status": "running", + "created": 1705312200 + } +] ``` ### Get Instance Details @@ -50,7 +96,7 @@ GET /api/instances Get detailed information about a specific instance. ```http -GET /api/instances/{name} +GET /api/v1/instances/{name} ``` **Response:** @@ -58,92 +104,57 @@ GET /api/instances/{name} { "name": "llama2-7b", "status": "running", - "model_path": "/models/llama-2-7b.gguf", - "port": 8081, - "pid": 12345, - "options": { - "threads": 4, - "context_size": 2048, - "gpu_layers": 0 - }, - "stats": { - "memory_usage": 4294967296, - "cpu_usage": 25.5, - "uptime": 3600 - }, - "created_at": "2024-01-15T10:30:00Z", - "updated_at": "2024-01-15T12:45:00Z" + "created": 1705312200 } ``` ### Create Instance -Create a new instance. +Create and start a new instance. ```http -POST /api/instances +POST /api/v1/instances/{name} ``` -**Request Body:** -```json -{ - "name": "my-instance", - "model_path": "/path/to/model.gguf", - "port": 8081, - "options": { - "threads": 4, - "context_size": 2048, - "gpu_layers": 0 - } -} -``` +**Request Body:** JSON object with instance configuration. See [Managing Instances](managing-instances.md) for available configuration options. **Response:** ```json { - "message": "Instance created successfully", - "instance": { - "name": "my-instance", - "status": "stopped", - "model_path": "/path/to/model.gguf", - "port": 8081, - "created_at": "2024-01-15T14:30:00Z" - } + "name": "llama2-7b", + "status": "running", + "created": 1705312200 } ``` ### Update Instance -Update an existing instance configuration. +Update an existing instance configuration. See [Managing Instances](managing-instances.md) for available configuration options. ```http -PUT /api/instances/{name} +PUT /api/v1/instances/{name} ``` -**Request Body:** +**Request Body:** JSON object with configuration fields to update. + +**Response:** ```json { - "options": { - "threads": 8, - "context_size": 4096 - } + "name": "llama2-7b", + "status": "running", + "created": 1705312200 } ``` ### Delete Instance -Delete an instance (must be stopped first). +Stop and remove an instance. ```http -DELETE /api/instances/{name} +DELETE /api/v1/instances/{name} ``` -**Response:** -```json -{ - "message": "Instance deleted successfully" -} -``` +**Response:** `204 No Content` ## Instance Operations @@ -152,38 +163,36 @@ DELETE /api/instances/{name} Start a stopped instance. ```http -POST /api/instances/{name}/start +POST /api/v1/instances/{name}/start ``` **Response:** ```json { - "message": "Instance start initiated", - "status": "starting" + "name": "llama2-7b", + "status": "starting", + "created": 1705312200 } ``` +**Error Responses:** +- `409 Conflict`: Maximum number of running instances reached +- `500 Internal Server Error`: Failed to start instance + ### Stop Instance Stop a running instance. 
```http -POST /api/instances/{name}/stop -``` - -**Request Body (Optional):** -```json -{ - "force": false, - "timeout": 30 -} +POST /api/v1/instances/{name}/stop ``` **Response:** ```json { - "message": "Instance stop initiated", - "status": "stopping" + "name": "llama2-7b", + "status": "stopping", + "created": 1705312200 } ``` @@ -192,27 +201,15 @@ POST /api/instances/{name}/stop Restart an instance (stop then start). ```http -POST /api/instances/{name}/restart -``` - -### Get Instance Health - -Check instance health status. - -```http -GET /api/instances/{name}/health +POST /api/v1/instances/{name}/restart ``` **Response:** ```json { - "status": "healthy", - "checks": { - "process": "running", - "port": "open", - "response": "ok" - }, - "last_check": "2024-01-15T14:30:00Z" + "name": "llama2-7b", + "status": "restarting", + "created": 1705312200 } ``` @@ -221,146 +218,108 @@ GET /api/instances/{name}/health Retrieve instance logs. ```http -GET /api/instances/{name}/logs +GET /api/v1/instances/{name}/logs ``` **Query Parameters:** -- `lines`: Number of lines to return (default: 100) -- `follow`: Stream logs (boolean) -- `level`: Filter by log level (debug, info, warn, error) +- `lines`: Number of lines to return (default: all lines, use -1 for all) + +**Response:** Plain text log output + +**Example:** +```bash +curl "http://localhost:8080/api/v1/instances/my-instance/logs?lines=100" +``` + +### Proxy to Instance + +Proxy HTTP requests directly to the llama-server instance. + +```http +GET /api/v1/instances/{name}/proxy/* +POST /api/v1/instances/{name}/proxy/* +``` + +This endpoint forwards all requests to the underlying llama-server instance running on its configured port. The proxy strips the `/api/v1/instances/{name}/proxy` prefix and forwards the remaining path to the instance. + +**Example - Check Instance Health:** +```bash +curl -H "Authorization: Bearer your-api-key" \ + http://localhost:8080/api/v1/instances/my-model/proxy/health +``` + +This forwards the request to `http://instance-host:instance-port/health` on the actual llama-server instance. + +**Error Responses:** +- `503 Service Unavailable`: Instance is not running + +## OpenAI-Compatible API + +Llamactl provides OpenAI-compatible endpoints for inference operations. + +### List Models + +List all instances in OpenAI-compatible format. + +```http +GET /v1/models +``` **Response:** ```json { - "logs": [ + "object": "list", + "data": [ { - "timestamp": "2024-01-15T14:30:00Z", - "level": "info", - "message": "Model loaded successfully" + "id": "llama2-7b", + "object": "model", + "created": 1705312200, + "owned_by": "llamactl" } ] } ``` -## Batch Operations +### Chat Completions, Completions, Embeddings -### Start All Instances - -Start all stopped instances. +All OpenAI-compatible inference endpoints are available: ```http -POST /api/instances/start-all +POST /v1/chat/completions +POST /v1/completions +POST /v1/embeddings +POST /v1/rerank +POST /v1/reranking ``` -### Stop All Instances +**Request Body:** Standard OpenAI format with `model` field specifying the instance name -Stop all running instances. - -```http -POST /api/instances/stop-all -``` - -## System Information - -### Get System Status - -Get overall system status and metrics. 
- -```http -GET /api/system/status -``` - -**Response:** +**Example:** ```json { - "version": "1.0.0", - "uptime": 86400, - "instances": { - "total": 5, - "running": 3, - "stopped": 2 - }, - "resources": { - "cpu_usage": 45.2, - "memory_usage": 8589934592, - "memory_total": 17179869184, - "disk_usage": 75.5 - } + "model": "llama2-7b", + "messages": [ + { + "role": "user", + "content": "Hello, how are you?" + } + ] } ``` -### Get System Information +The server routes requests to the appropriate instance based on the `model` field in the request body. Instances with on-demand starting enabled will be automatically started if not running. For configuration details, see [Managing Instances](managing-instances.md). -Get detailed system information. +**Error Responses:** +- `400 Bad Request`: Invalid request body or missing model name +- `503 Service Unavailable`: Instance is not running and on-demand start is disabled +- `409 Conflict`: Cannot start instance due to maximum instances limit -```http -GET /api/system/info -``` +## Instance Status Values -**Response:** -```json -{ - "hostname": "server-01", - "os": "linux", - "arch": "amd64", - "cpu_count": 8, - "memory_total": 17179869184, - "version": "1.0.0", - "build_time": "2024-01-15T10:00:00Z" -} -``` - -## Configuration - -### Get Configuration - -Get current Llamactl configuration. - -```http -GET /api/config -``` - -### Update Configuration - -Update Llamactl configuration (requires restart). - -```http -PUT /api/config -``` - -## Authentication - -### Login - -Authenticate and receive a JWT token. - -```http -POST /api/auth/login -``` - -**Request Body:** -```json -{ - "username": "admin", - "password": "password" -} -``` - -**Response:** -```json -{ - "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...", - "expires_at": "2024-01-16T14:30:00Z" -} -``` - -### Refresh Token - -Refresh an existing JWT token. 
- -```http -POST /api/auth/refresh -``` +Instances can have the following status values: +- `stopped`: Instance is not running +- `running`: Instance is running and ready to accept requests +- `failed`: Instance failed to start or crashed ## Error Responses @@ -368,9 +327,7 @@ All endpoints may return error responses in the following format: ```json { - "error": "Error message", - "code": "ERROR_CODE", - "details": "Additional error details" + "error": "Error message description" } ``` @@ -378,87 +335,78 @@ All endpoints may return error responses in the following format: - `200`: Success - `201`: Created -- `400`: Bad Request -- `401`: Unauthorized -- `403`: Forbidden -- `404`: Not Found -- `409`: Conflict (e.g., instance already exists) +- `204`: No Content (successful deletion) +- `400`: Bad Request (invalid parameters or request body) +- `401`: Unauthorized (missing or invalid API key) +- `403`: Forbidden (insufficient permissions) +- `404`: Not Found (instance not found) +- `409`: Conflict (instance already exists, max instances reached) - `500`: Internal Server Error - -## WebSocket API - -### Real-time Updates - -Connect to WebSocket for real-time updates: - -```javascript -const ws = new WebSocket('ws://localhost:8080/api/ws'); - -ws.onmessage = function(event) { - const data = JSON.parse(event.data); - console.log('Update:', data); -}; -``` - -**Message Types:** -- `instance_status_changed`: Instance status updates -- `instance_stats_updated`: Resource usage updates -- `system_alert`: System-level alerts - -## Rate Limiting - -API requests are rate limited to: -- **100 requests per minute** for regular endpoints -- **10 requests per minute** for resource-intensive operations - -Rate limit headers are included in responses: -- `X-RateLimit-Limit`: Request limit -- `X-RateLimit-Remaining`: Remaining requests -- `X-RateLimit-Reset`: Reset time (Unix timestamp) - -## SDKs and Libraries - -### Go Client - -```go -import "github.com/lordmathis/llamactl-go-client" - -client := llamactl.NewClient("http://localhost:8080") -instances, err := client.ListInstances() -``` - -### Python Client - -```python -from llamactl import Client - -client = Client("http://localhost:8080") -instances = client.list_instances() -``` +- `503`: Service Unavailable (instance not running) ## Examples ### Complete Instance Lifecycle ```bash -# Create instance -curl -X POST http://localhost:8080/api/instances \ +# Create and start instance +curl -X POST http://localhost:8080/api/v1/instances/my-model \ -H "Content-Type: application/json" \ + -H "Authorization: Bearer your-api-key" \ -d '{ - "name": "example", - "model_path": "/models/example.gguf", - "port": 8081 + "model": "/models/llama-2-7b.gguf" }' -# Start instance -curl -X POST http://localhost:8080/api/instances/example/start +# Check instance status +curl -H "Authorization: Bearer your-api-key" \ + http://localhost:8080/api/v1/instances/my-model -# Check status -curl http://localhost:8080/api/instances/example +# Get instance logs +curl -H "Authorization: Bearer your-api-key" \ + "http://localhost:8080/api/v1/instances/my-model/logs?lines=50" + +# Use OpenAI-compatible chat completions +curl -X POST http://localhost:8080/v1/chat/completions \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer your-inference-api-key" \ + -d '{ + "model": "my-model", + "messages": [ + {"role": "user", "content": "Hello!"} + ], + "max_tokens": 100 + }' # Stop instance -curl -X POST http://localhost:8080/api/instances/example/stop +curl -X POST -H 
"Authorization: Bearer your-api-key" \ + http://localhost:8080/api/v1/instances/my-model/stop # Delete instance -curl -X DELETE http://localhost:8080/api/instances/example +curl -X DELETE -H "Authorization: Bearer your-api-key" \ + http://localhost:8080/api/v1/instances/my-model ``` + +### Using the Proxy Endpoint + +You can also directly proxy requests to the llama-server instance: + +```bash +# Direct proxy to instance (bypasses OpenAI compatibility layer) +curl -X POST http://localhost:8080/api/v1/instances/my-model/proxy/completion \ + -H "Content-Type: application/json" \ + -H "Authorization: Bearer your-api-key" \ + -d '{ + "prompt": "Hello, world!", + "n_predict": 50 + }' +``` + +## Swagger Documentation + +If swagger documentation is enabled in the server configuration, you can access the interactive API documentation at: + +``` +http://localhost:8080/swagger/ +``` + +This provides a complete interactive interface for testing all API endpoints. From 56756192e3c6be3c2cc16a2d2d7ee33316323ccf Mon Sep 17 00:00:00 2001 From: LordMathis Date: Sun, 31 Aug 2025 16:27:58 +0200 Subject: [PATCH 08/13] Fix formatting in configuration.md --- docs/getting-started/configuration.md | 78 +++++++++++++-------------- 1 file changed, 39 insertions(+), 39 deletions(-) diff --git a/docs/getting-started/configuration.md b/docs/getting-started/configuration.md index 25256de..64b097a 100644 --- a/docs/getting-started/configuration.md +++ b/docs/getting-started/configuration.md @@ -49,21 +49,21 @@ auth: Configuration files are searched in the following locations (in order of precedence): -**Linux:** -- `./llamactl.yaml` or `./config.yaml` (current directory) -- `$HOME/.config/llamactl/config.yaml` -- `/etc/llamactl/config.yaml` +**Linux:** +- `./llamactl.yaml` or `./config.yaml` (current directory) +- `$HOME/.config/llamactl/config.yaml` +- `/etc/llamactl/config.yaml` -**macOS:** -- `./llamactl.yaml` or `./config.yaml` (current directory) -- `$HOME/Library/Application Support/llamactl/config.yaml` -- `/Library/Application Support/llamactl/config.yaml` +**macOS:** +- `./llamactl.yaml` or `./config.yaml` (current directory) +- `$HOME/Library/Application Support/llamactl/config.yaml` +- `/Library/Application Support/llamactl/config.yaml` -**Windows:** -- `./llamactl.yaml` or `./config.yaml` (current directory) -- `%APPDATA%\llamactl\config.yaml` -- `%USERPROFILE%\llamactl\config.yaml` -- `%PROGRAMDATA%\llamactl\config.yaml` +**Windows:** +- `./llamactl.yaml` or `./config.yaml` (current directory) +- `%APPDATA%\llamactl\config.yaml` +- `%USERPROFILE%\llamactl\config.yaml` +- `%PROGRAMDATA%\llamactl\config.yaml` You can specify the path to config file with `LLAMACTL_CONFIG_PATH` environment variable. 
@@ -79,11 +79,11 @@ server: enable_swagger: false # Enable Swagger UI (default: false) ``` -**Environment Variables:** -- `LLAMACTL_HOST` - Server host -- `LLAMACTL_PORT` - Server port -- `LLAMACTL_ALLOWED_ORIGINS` - Comma-separated CORS origins -- `LLAMACTL_ENABLE_SWAGGER` - Enable Swagger UI (true/false) +**Environment Variables:** +- `LLAMACTL_HOST` - Server host +- `LLAMACTL_PORT` - Server port +- `LLAMACTL_ALLOWED_ORIGINS` - Comma-separated CORS origins +- `LLAMACTL_ENABLE_SWAGGER` - Enable Swagger UI (true/false) ### Instance Configuration @@ -106,22 +106,22 @@ instances: timeout_check_interval: 5 # Default instance timeout check interval in minutes ``` -**Environment Variables:** -- `LLAMACTL_INSTANCE_PORT_RANGE` - Port range (format: "8000-9000" or "8000,9000") -- `LLAMACTL_DATA_DIRECTORY` - Data directory path -- `LLAMACTL_INSTANCES_DIR` - Instance configs directory path -- `LLAMACTL_LOGS_DIR` - Log directory path -- `LLAMACTL_AUTO_CREATE_DATA_DIR` - Auto-create data/config/logs directories (true/false) -- `LLAMACTL_MAX_INSTANCES` - Maximum number of instances -- `LLAMACTL_MAX_RUNNING_INSTANCES` - Maximum number of running instances -- `LLAMACTL_ENABLE_LRU_EVICTION` - Enable LRU eviction for idle instances -- `LLAMACTL_LLAMA_EXECUTABLE` - Path to llama-server executable -- `LLAMACTL_DEFAULT_AUTO_RESTART` - Default auto-restart setting (true/false) -- `LLAMACTL_DEFAULT_MAX_RESTARTS` - Default maximum restarts -- `LLAMACTL_DEFAULT_RESTART_DELAY` - Default restart delay in seconds -- `LLAMACTL_DEFAULT_ON_DEMAND_START` - Default on-demand start setting (true/false) -- `LLAMACTL_ON_DEMAND_START_TIMEOUT` - Default on-demand start timeout in seconds -- `LLAMACTL_TIMEOUT_CHECK_INTERVAL` - Default instance timeout check interval in minutes +**Environment Variables:** +- `LLAMACTL_INSTANCE_PORT_RANGE` - Port range (format: "8000-9000" or "8000,9000") +- `LLAMACTL_DATA_DIRECTORY` - Data directory path +- `LLAMACTL_INSTANCES_DIR` - Instance configs directory path +- `LLAMACTL_LOGS_DIR` - Log directory path +- `LLAMACTL_AUTO_CREATE_DATA_DIR` - Auto-create data/config/logs directories (true/false) +- `LLAMACTL_MAX_INSTANCES` - Maximum number of instances +- `LLAMACTL_MAX_RUNNING_INSTANCES` - Maximum number of running instances +- `LLAMACTL_ENABLE_LRU_EVICTION` - Enable LRU eviction for idle instances +- `LLAMACTL_LLAMA_EXECUTABLE` - Path to llama-server executable +- `LLAMACTL_DEFAULT_AUTO_RESTART` - Default auto-restart setting (true/false) +- `LLAMACTL_DEFAULT_MAX_RESTARTS` - Default maximum restarts +- `LLAMACTL_DEFAULT_RESTART_DELAY` - Default restart delay in seconds +- `LLAMACTL_DEFAULT_ON_DEMAND_START` - Default on-demand start setting (true/false) +- `LLAMACTL_ON_DEMAND_START_TIMEOUT` - Default on-demand start timeout in seconds +- `LLAMACTL_TIMEOUT_CHECK_INTERVAL` - Default instance timeout check interval in minutes ### Authentication Configuration @@ -133,11 +133,11 @@ auth: management_keys: [] # List of valid management API keys ``` -**Environment Variables:** -- `LLAMACTL_REQUIRE_INFERENCE_AUTH` - Require auth for OpenAI endpoints (true/false) -- `LLAMACTL_INFERENCE_KEYS` - Comma-separated inference API keys -- `LLAMACTL_REQUIRE_MANAGEMENT_AUTH` - Require auth for management endpoints (true/false) -- `LLAMACTL_MANAGEMENT_KEYS` - Comma-separated management API keys +**Environment Variables:** +- `LLAMACTL_REQUIRE_INFERENCE_AUTH` - Require auth for OpenAI endpoints (true/false) +- `LLAMACTL_INFERENCE_KEYS` - Comma-separated inference API keys +- `LLAMACTL_REQUIRE_MANAGEMENT_AUTH` - 
Require auth for management endpoints (true/false) +- `LLAMACTL_MANAGEMENT_KEYS` - Comma-separated management API keys ## Command Line Options From 969b4b14e11cf0a9f213834ab2c342fc7fe985a3 Mon Sep 17 00:00:00 2001 From: LordMathis Date: Wed, 3 Sep 2025 21:11:26 +0200 Subject: [PATCH 09/13] Refactor installation and troubleshooting documentation for clarity and completeness --- docs/getting-started/installation.md | 23 +- docs/getting-started/quick-start.md | 2 + docs/user-guide/api-reference.md | 8 +- docs/user-guide/troubleshooting.md | 554 ++++----------------------- 4 files changed, 99 insertions(+), 488 deletions(-) diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md index 9ae35ed..90f78a8 100644 --- a/docs/getting-started/installation.md +++ b/docs/getting-started/installation.md @@ -6,19 +6,17 @@ This guide will walk you through installing Llamactl on your system. You need `llama-server` from [llama.cpp](https://github.com/ggml-org/llama.cpp) installed: -```bash -# Quick install methods: -# Homebrew (macOS) -brew install llama.cpp -# Or build from source - see llama.cpp docs +**Quick install methods:** + +```bash +# Homebrew (macOS/Linux) +brew install llama.cpp +# Winget (Windows) +winget install llama.cpp ``` -Additional requirements for building from source: -- Go 1.24 or later -- Node.js 22 or later -- Git -- Sufficient disk space for your models +Or build from source - see llama.cpp docs ## Installation Methods @@ -40,6 +38,11 @@ sudo mv llamactl /usr/local/bin/ ### Option 2: Build from Source +Requirements: +- Go 1.24 or later +- Node.js 22 or later +- Git + If you prefer to build from source: ```bash diff --git a/docs/getting-started/quick-start.md b/docs/getting-started/quick-start.md index 11751c0..6ea5720 100644 --- a/docs/getting-started/quick-start.md +++ b/docs/getting-started/quick-start.md @@ -20,6 +20,8 @@ Open your web browser and navigate to: http://localhost:8080 ``` +Login with the management API key. By default it is generated during server startup. Copy it from the terminal output. + You should see the Llamactl web interface. ## Step 3: Create Your First Instance diff --git a/docs/user-guide/api-reference.md b/docs/user-guide/api-reference.md index 1152ebe..3f99e53 100644 --- a/docs/user-guide/api-reference.md +++ b/docs/user-guide/api-reference.md @@ -316,10 +316,10 @@ The server routes requests to the appropriate instance based on the `model` fiel ## Instance Status Values -Instances can have the following status values: -- `stopped`: Instance is not running -- `running`: Instance is running and ready to accept requests -- `failed`: Instance failed to start or crashed +Instances can have the following status values: +- `stopped`: Instance is not running +- `running`: Instance is running and ready to accept requests +- `failed`: Instance failed to start or crashed ## Error Responses diff --git a/docs/user-guide/troubleshooting.md b/docs/user-guide/troubleshooting.md index 2cd299f..5608139 100644 --- a/docs/user-guide/troubleshooting.md +++ b/docs/user-guide/troubleshooting.md @@ -1,103 +1,27 @@ # Troubleshooting -Common issues and solutions for Llamactl deployment and operation. +Issues specific to Llamactl deployment and operation. -## Installation Issues +## Configuration Issues -### Binary Not Found - -**Problem:** `llamactl: command not found` - -**Solutions:** -1. Verify the binary is in your PATH: - ```bash - echo $PATH - which llamactl - ``` - -2. 
Add to PATH or use full path: - ```bash - export PATH=$PATH:/path/to/llamactl - # or - /full/path/to/llamactl - ``` - -3. Check binary permissions: - ```bash - chmod +x llamactl - ``` - -### Permission Denied - -**Problem:** Permission errors when starting Llamactl - -**Solutions:** -1. Check file permissions: - ```bash - ls -la llamactl - chmod +x llamactl - ``` - -2. Verify directory permissions: - ```bash - # Check models directory - ls -la /path/to/models/ - - # Check logs directory - sudo mkdir -p /var/log/llamactl - sudo chown $USER:$USER /var/log/llamactl - ``` - -3. Run with appropriate user: - ```bash - # Don't run as root unless necessary - sudo -u llamactl ./llamactl - ``` - -## Startup Issues - -### Port Already in Use - -**Problem:** `bind: address already in use` - -**Solutions:** -1. Find process using the port: - ```bash - sudo netstat -tulpn | grep :8080 - # or - sudo lsof -i :8080 - ``` - -2. Kill the conflicting process: - ```bash - sudo kill -9 - ``` - -3. Use a different port: - ```bash - llamactl --port 8081 - ``` - -### Configuration Errors +### Invalid Configuration **Problem:** Invalid configuration preventing startup **Solutions:** -1. Validate configuration file: - ```bash - llamactl --config /path/to/config.yaml --validate - ``` - -2. Check YAML syntax: - ```bash - yamllint config.yaml - ``` - -3. Use minimal configuration: +1. Use minimal configuration: ```yaml server: - host: "localhost" + host: "0.0.0.0" port: 8080 + instances: + port_range: [8000, 9000] + ``` + +2. Check data directory permissions: + ```bash + # Ensure data directory is writable (default: ~/.local/share/llamactl) + mkdir -p ~/.local/share/llamactl/{instances,logs} ``` ## Instance Management Issues @@ -106,449 +30,131 @@ Common issues and solutions for Llamactl deployment and operation. **Problem:** Instance fails to start with model loading errors -**Diagnostic Steps:** -1. Check model file exists: - ```bash - ls -la /path/to/model.gguf - file /path/to/model.gguf - ``` - -2. Verify model format: - ```bash - # Check if it's a valid GGUF file - hexdump -C /path/to/model.gguf | head -5 - ``` - -3. Test with llama.cpp directly: - ```bash - llama-server --model /path/to/model.gguf --port 8081 - ``` - -**Common Solutions:** -- **Corrupted model:** Re-download the model file -- **Wrong format:** Ensure model is in GGUF format -- **Insufficient memory:** Reduce context size or use smaller model -- **Path issues:** Use absolute paths, check file permissions +**Common Solutions:** +- **llama-server not found:** Ensure `llama-server` binary is in PATH +- **Wrong model format:** Ensure model is in GGUF format +- **Insufficient memory:** Use smaller model or reduce context size +- **Path issues:** Use absolute paths to model files ### Memory Issues **Problem:** Out of memory errors or system becomes unresponsive -**Diagnostic Steps:** -1. Check system memory: - ```bash - free -h - cat /proc/meminfo - ``` - -2. Monitor memory usage: - ```bash - top -p $(pgrep llamactl) - ``` - -3. Check instance memory requirements: - ```bash - curl http://localhost:8080/api/instances/{name}/stats - ``` - **Solutions:** 1. **Reduce context size:** ```json { - "options": { - "context_size": 1024 - } + "n_ctx": 1024 } ``` -2. **Enable memory mapping:** - ```json - { - "options": { - "no_mmap": false - } - } - ``` +2. **Use quantized models:** + - Try Q4_K_M instead of higher precision models + - Use smaller model variants (7B instead of 13B) -3. 
**Use quantized models:**
-   - Try Q4_K_M instead of higher precision models
-   - Use smaller model variants (7B instead of 13B)
+### GPU Configuration

-### GPU Issues

-**Problem:** GPU not detected or not being used

-**Diagnostic Steps:**
-1. Check GPU availability:
-   ```bash
-   nvidia-smi
-   ```
-
-2. Verify CUDA installation:
-   ```bash
-   nvcc --version
-   ```
-
-3. Check llama.cpp GPU support:
-   ```bash
-   llama-server --help | grep -i gpu
-   ```
+**Problem:** GPU not being used effectively

 **Solutions:**
-1. **Install CUDA drivers:**
-   ```bash
-   sudo apt update
-   sudo apt install nvidia-driver-470 nvidia-cuda-toolkit
-   ```
-
-2. **Rebuild llama.cpp with GPU support:**
-   ```bash
-   cmake -DLLAMA_CUBLAS=ON ..
-   make
-   ```
-
-3. **Configure GPU layers:**
+1. **Configure GPU layers:**
   ```json
   {
-     "options": {
-       "gpu_layers": 35
-     }
+     "n_gpu_layers": 35
   }
   ```

-## Performance Issues
+### Advanced Instance Issues

-### Slow Response Times
+**Problem:** Complex model loading, performance, or compatibility issues

-**Problem:** API responses are slow or timeouts occur
+Since llamactl uses `llama-server` under the hood, many instance-related issues are actually llama.cpp issues. For advanced troubleshooting:

-**Diagnostic Steps:**
-1. Check API response times:
-   ```bash
-   time curl http://localhost:8080/api/instances
-   ```
+**Resources:**
+- **llama.cpp Documentation:** [https://github.com/ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp)
+- **llama.cpp Issues:** [https://github.com/ggml-org/llama.cpp/issues](https://github.com/ggml-org/llama.cpp/issues)
+- **llama.cpp Discussions:** [https://github.com/ggml-org/llama.cpp/discussions](https://github.com/ggml-org/llama.cpp/discussions)

-2. Monitor system resources:
-   ```bash
-   htop
-   iotop
-   ```
+**Testing directly with llama-server:**
+```bash
+# Test your model and parameters directly with llama-server
+llama-server --model /path/to/model.gguf --port 8081 --n-gpu-layers 35
+```

-3. Check instance logs:
-   ```bash
-   curl http://localhost:8080/api/instances/{name}/logs
-   ```
+This helps determine if the issue is with llamactl or with the underlying llama.cpp/llama-server.

-**Solutions:**
-1. **Optimize thread count:**
-   ```json
-   {
-     "options": {
-       "threads": 6
-     }
-   }
-   ```
-
-2. **Adjust batch size:**
-   ```json
-   {
-     "options": {
-       "batch_size": 512
-     }
-   }
-   ```
-
-3. **Enable GPU acceleration:**
-   ```json
-   {
-     "options": {
-       "gpu_layers": 35
-     }
-   }
-   ```
-
-### High CPU Usage
-
-**Problem:** Llamactl consuming excessive CPU
-
-**Diagnostic Steps:**
-1. Identify CPU-intensive processes:
-   ```bash
-   top -p $(pgrep -f llamactl)
-   ```
-
-2. Check thread allocation:
-   ```bash
-   curl http://localhost:8080/api/instances/{name}/config
-   ```
-
-**Solutions:**
-1. **Reduce thread count:**
-   ```json
-   {
-     "options": {
-       "threads": 4
-     }
-   }
-   ```
-
-2. **Limit concurrent instances:**
-   ```yaml
-   limits:
-     max_instances: 3
-   ```
-
-## Network Issues
-
-### Connection Refused
-
-**Problem:** Cannot connect to Llamactl web interface
-
-**Diagnostic Steps:**
-1. Check if service is running:
-   ```bash
-   ps aux | grep llamactl
-   ```
-
-2. Verify port binding:
-   ```bash
-   netstat -tulpn | grep :8080
-   ```
-
-3. Test local connectivity:
-   ```bash
-   curl http://localhost:8080/api/health
-   ```
-
-**Solutions:**
-1. **Check firewall settings:**
-   ```bash
-   sudo ufw status
-   sudo ufw allow 8080
-   ```
-
-2. 
**Bind to correct interface:** - ```yaml - server: - host: "0.0.0.0" # Instead of "localhost" - port: 8080 - ``` +## API and Network Issues ### CORS Errors **Problem:** Web UI shows CORS errors in browser console **Solutions:** -1. **Enable CORS in configuration:** +1. **Configure allowed origins:** ```yaml server: - cors_enabled: true - cors_origins: + allowed_origins: - "http://localhost:3000" - "https://yourdomain.com" ``` -2. **Use reverse proxy:** - ```nginx - server { - listen 80; - location / { - proxy_pass http://localhost:8080; - proxy_set_header Host $host; - proxy_set_header X-Real-IP $remote_addr; - } - } - ``` +## Authentication Issues -## Database Issues - -### Startup Database Errors - -**Problem:** Database connection failures on startup - -**Diagnostic Steps:** -1. Check database service: - ```bash - systemctl status postgresql - # or - systemctl status mysql - ``` - -2. Test database connectivity: - ```bash - psql -h localhost -U llamactl -d llamactl - ``` +**Problem:** API requests failing with authentication errors **Solutions:** -1. **Start database service:** - ```bash - sudo systemctl start postgresql - sudo systemctl enable postgresql - ``` - -2. **Create database and user:** - ```sql - CREATE DATABASE llamactl; - CREATE USER llamactl WITH PASSWORD 'password'; - GRANT ALL PRIVILEGES ON DATABASE llamactl TO llamactl; - ``` - -## Web UI Issues - -### Blank Page or Loading Issues - -**Problem:** Web UI doesn't load or shows blank page - -**Diagnostic Steps:** -1. Check browser console for errors (F12) -2. Verify API connectivity: - ```bash - curl http://localhost:8080/api/system/status - ``` - -3. Check static file serving: - ```bash - curl http://localhost:8080/ - ``` - -**Solutions:** -1. **Clear browser cache** -2. **Try different browser** -3. **Check for JavaScript errors in console** -4. **Verify API endpoint accessibility** - -### Authentication Issues - -**Problem:** Unable to login or authentication failures - -**Diagnostic Steps:** -1. Check authentication configuration: - ```bash - curl http://localhost:8080/api/config | jq .auth - ``` - -2. Verify user credentials: - ```bash - # Test login endpoint - curl -X POST http://localhost:8080/api/auth/login \ - -H "Content-Type: application/json" \ - -d '{"username":"admin","password":"password"}' - ``` - -**Solutions:** -1. **Reset admin password:** - ```bash - llamactl --reset-admin-password - ``` - -2. **Disable authentication temporarily:** +1. **Disable authentication temporarily:** ```yaml auth: - enabled: false + require_management_auth: false + require_inference_auth: false ``` -## Log Analysis +2. **Configure API keys:** + ```yaml + auth: + management_keys: + - "your-management-key" + inference_keys: + - "your-inference-key" + ``` + +3. 
**Use correct Authorization header:**
+   ```bash
+   curl -H "Authorization: Bearer your-api-key" \
+        http://localhost:8080/api/v1/instances
+   ```
+
+## Debugging and Logs
+
+### Viewing Instance Logs
+
+```bash
+# Get instance logs via API
+curl http://localhost:8080/api/v1/instances/{name}/logs
+
+# Or check log files directly
+tail -f ~/.local/share/llamactl/logs/{instance-name}.log
+```
+
+### Enable Debug Logging

-For detailed troubleshooting, enable debug logging:
-
-```yaml
-logging:
-  level: "debug"
-  output: "/var/log/llamactl/debug.log"
-```
-
-### Key Log Patterns
-
-Look for these patterns in logs:
-
-**Startup issues:**
-```
-ERRO Failed to start server
-ERRO Database connection failed
-ERRO Port binding failed
-```
-
-**Instance issues:**
-```
-ERRO Failed to start instance
-ERRO Model loading failed
-ERRO Process crashed
-```
-
-**Performance issues:**
-```
-WARN High memory usage detected
-WARN Request timeout
-WARN Resource limit exceeded
+```bash
+export LLAMACTL_LOG_LEVEL=debug
+llamactl
 ```

 ## Getting Help

-### Collecting Information
-
-When seeking help, provide:
+When reporting issues, include:

 1. **System information:**
    ```bash
-   uname -a
    llamactl --version
    ```

-2. **Configuration:**
-   ```bash
-   llamactl --config-dump
-   ```
+2. **Configuration file** (remove sensitive keys)

-3. **Logs:**
-   ```bash
-   tail -100 /var/log/llamactl/app.log
-   ```
+3. **Relevant log output**

-4. **Error details:**
-   - Exact error messages
-   - Steps to reproduce
-   - Environment details
-
-### Support Channels
-
-- **GitHub Issues:** Report bugs and feature requests
-- **Documentation:** Check this documentation first
-- **Community:** Join discussions in GitHub Discussions
-
-## Preventive Measures
-
-### Health Monitoring
-
-Set up monitoring to catch issues early:
-
-```bash
-# Regular health checks
-*/5 * * * * curl -f http://localhost:8080/api/health || alert
-```
-
-### Resource Monitoring
-
-Monitor system resources:
-
-```bash
-# Disk space monitoring
-df -h /var/log/llamactl/
-df -h /path/to/models/
-
-# Memory monitoring
-free -h
-```
-
-### Backup Configuration
-
-Regular configuration backups:
-
-```bash
-# Backup configuration
-cp ~/.llamactl/config.yaml ~/.llamactl/config.yaml.backup
-
-# Backup instance configurations
-curl http://localhost:8080/api/instances > instances-backup.json
-```
+4. **Steps to reproduce the issue**
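+
+A short snippet along these lines can collect most of that information in one pass (a sketch; paths assume the default data directory):
+
+```bash
+# Collect basic details for a bug report
+llamactl --version
+tail -n 100 ~/.local/share/llamactl/logs/*.log
+```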

From 3013a343f132864e3341eec9e2cf7ae2e7293709 Mon Sep 17 00:00:00 2001
From: LordMathis
Date: Wed, 3 Sep 2025 22:47:15 +0200
Subject: [PATCH 10/13] Update documentation: remove Web UI guide and adjust navigation links

---
 docs/getting-started/quick-start.md   |   2 +-
 docs/index.md                         |   1 -
 docs/user-guide/managing-instances.md | 235 ++++++++++++++------------
 docs/user-guide/web-ui.md             | 210 -----------------------
 mkdocs.yml                            |   1 -
 5 files changed, 129 insertions(+), 320 deletions(-)
 delete mode 100644 docs/user-guide/web-ui.md

diff --git a/docs/getting-started/quick-start.md b/docs/getting-started/quick-start.md
index 6ea5720..4de1065 100644
--- a/docs/getting-started/quick-start.md
+++ b/docs/getting-started/quick-start.md
@@ -138,6 +138,6 @@ curl http://localhost:8080/v1/models
 
 ## Next Steps
 
-- Learn more about the [Web UI](../user-guide/web-ui.md)
+- Learn to manage instances in the [Managing Instances](../user-guide/managing-instances.md) guide
 - Explore the [API Reference](../user-guide/api-reference.md)
 - Configure advanced settings in the [Configuration](configuration.md) guide

diff --git a/docs/index.md b/docs/index.md
index 0637fdc..8dc6b1c 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -37,7 +37,6 @@ Llamactl is designed to simplify the deployment and management of llama-server i
 - [Installation Guide](getting-started/installation.md) - Get Llamactl up and running
 - [Configuration Guide](getting-started/configuration.md) - Detailed configuration options
 - [Quick Start](getting-started/quick-start.md) - Your first steps with Llamactl
-- [Web UI Guide](user-guide/web-ui.md) - Learn to use the web interface
 - [Managing Instances](user-guide/managing-instances.md) - Instance lifecycle management
 - [API Reference](user-guide/api-reference.md) - Complete API documentation

diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md
index 14bbd71..9d9e4dc 100644
--- a/docs/user-guide/managing-instances.md
+++ b/docs/user-guide/managing-instances.md
@@ -1,73 +1,121 @@
 # Managing Instances
 
-Learn how to effectively manage your Llama.cpp instances with Llamactl.
+Learn how to effectively manage your Llama.cpp instances with Llamactl through both the Web UI and API.
 
-## Instance Lifecycle
+## Overview
 
-### Creating Instances
+Llamactl provides two ways to manage instances:
 
-Instances can be created through the Web UI or API:
+- **Web UI**: Accessible at `http://localhost:8080` with an intuitive dashboard
+- **REST API**: Programmatic access for automation and integration
 
-#### Via Web UI
-1. Click "Add Instance" button
-2. Fill in the configuration form
-3. Click "Create"
+### Authentication
+
+If authentication is enabled:
+1. Navigate to the web UI
+2. Enter your credentials
+3. The bearer token is stored for the session
+
+### Theme Support
+
+- Switch between light and dark themes
+- Setting is remembered across sessions
+
+## Instance Cards
+
+Each instance is displayed as a card showing:
+
+- **Instance name**
+- **Health status badge** (unknown, ready, error, failed)
+- **Action buttons** (start, stop, edit, logs, delete)
+
+## Create Instance
+
+### Via Web UI
+
+1. Click the **"Add Instance"** button on the dashboard
+2. Enter a unique **Name** for your instance (only required field)
+3. Configure model source (choose one):
+   - **Model Path**: Full path to your downloaded GGUF model file
+   - **HuggingFace Repo**: Repository name (e.g., `microsoft/Phi-3-mini-4k-instruct-gguf`)
+   - **HuggingFace File**: Specific file within the repo (optional, uses default if not specified)
+4. Configure optional instance management settings:
+   - **Auto Restart**: Automatically restart instance on failure
+   - **Max Restarts**: Maximum number of restart attempts
+   - **Restart Delay**: Delay in seconds between restart attempts
+   - **On Demand Start**: Start instance when receiving a request to the OpenAI compatible endpoint
+   - **Idle Timeout**: Minutes before stopping idle instance (set to 0 to disable)
+5. Configure optional llama-server backend options:
+   - **Threads**: Number of CPU threads to use
+   - **Context Size**: Context window size (ctx_size)
+   - **GPU Layers**: Number of layers to offload to GPU
+   - **Port**: Network port (auto-assigned by llamactl if not specified)
+   - **Additional Parameters**: Any other llama-server command line options (see [llama-server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md))
+6. Click **"Create"** to save the instance
+
+### Via API
 
-#### Via API
 ```bash
-curl -X POST http://localhost:8080/api/instances \
+# Create instance with local model file
+curl -X POST http://localhost:8080/api/instances/my-instance \
   -H "Content-Type: application/json" \
   -d '{
-    "name": "my-instance",
-    "model_path": "/path/to/model.gguf",
-    "port": 8081
+    "backend_type": "llama_cpp",
+    "backend_options": {
+      "model": "/path/to/model.gguf",
+      "threads": 8,
+      "ctx_size": 4096
+    }
+  }'
+
+# Create instance with HuggingFace model
+curl -X POST http://localhost:8080/api/instances/phi3-mini \
+  -H "Content-Type: application/json" \
+  -d '{
+    "backend_type": "llama_cpp",
+    "backend_options": {
+      "hf_repo": "microsoft/Phi-3-mini-4k-instruct-gguf",
+      "hf_file": "Phi-3-mini-4k-instruct-q4.gguf",
+      "gpu_layers": 32
+    },
+    "auto_restart": true,
+    "max_restarts": 3
   }'
 ```
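+
+The instance name doubles as the `model` value on the OpenAI-compatible endpoint, so a quick way to verify a freshly created instance is to send it a chat request. A sketch (the `Authorization` header is only needed when inference auth is enabled):
+
+```bash
+# Send a test chat request to the new instance
+curl -X POST http://localhost:8080/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer your-inference-key" \
+  -d '{
+    "model": "my-instance",
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'
+```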
 
-### Starting and Stopping
+## Start Instance
 
-#### Start an Instance
+### Via Web UI
+1. Click the **"Start"** button on an instance card
+2. Watch the status change to "Unknown"
+3. Monitor progress in the logs
+4. Status changes to "Ready" once the model has loaded
+
+### Via API
 ```bash
-# Via API
 curl -X POST http://localhost:8080/api/instances/{name}/start
-
-# The instance will begin loading the model
 ```
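+
+Starting is asynchronous: the status stays "Unknown" while the model loads, so scripts may want to poll the health route (see Instance Proxy below) until the instance reports ready. A minimal sketch, assuming the default host and port:
+
+```bash
+# Poll until the instance responds on its health route
+until curl -sf http://localhost:8080/api/instances/my-instance/proxy/health; do
+  sleep 2
+done
+```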
 
-#### Stop an Instance
+## Stop Instance
+
+### Via Web UI
+1. Click the **"Stop"** button on an instance card
+2. Instance gracefully shuts down
+
+### Via API
 ```bash
-# Via API
 curl -X POST http://localhost:8080/api/instances/{name}/stop
-
-# Graceful shutdown with configurable timeout
 ```
 
-### Monitoring Status
+## Edit Instance
 
-Check instance status in real-time:
-
-```bash
-# Get instance details
-curl http://localhost:8080/api/instances/{name}
-
-# Get health status
-curl http://localhost:8080/api/instances/{name}/health
-```
-
-## Instance States
-
-Instances can be in one of several states:
-
-- **Stopped**: Instance is not running
-- **Starting**: Instance is initializing and loading the model
-- **Running**: Instance is active and ready to serve requests
-- **Stopping**: Instance is shutting down gracefully
-- **Error**: Instance encountered an error
-
-## Configuration Management
-
-### Updating Instance Configuration
+### Via Web UI
+1. Click the **"Edit"** button on an instance card
+2. Modify settings in the configuration dialog
+3. Changes require instance restart to take effect
+4. Click **"Update & Restart"** to apply changes
 
+### Via API
 Modify instance settings:
 
 ```bash
@@ -84,82 +132,55 @@ curl -X PUT http://localhost:8080/api/instances/{name} \
 !!! note
     Configuration changes require restarting the instance to take effect.
 
-### Viewing Configuration
+
+## View Logs
+
+### Via Web UI
+
+1. Click the **"Logs"** button on any instance card
+2. Real-time log viewer opens
+
+### Via API
+Fetch the instance logs:
 
 ```bash
-# Get current configuration
-curl http://localhost:8080/api/instances/{name}/config
+# Get instance logs
+curl http://localhost:8080/api/instances/{name}/logs
 ```
 
-## Resource Management
+## Delete Instance
 
-### Memory Usage
+### Via Web UI
+1. Stop the instance if it is running (only stopped instances can be deleted)
+2. Click the **"Delete"** button on the instance card
+3. Confirm deletion in the dialog
 
-Monitor memory consumption:
+### Via API
+```bash
+curl -X DELETE http://localhost:8080/api/instances/{name}
+```
+
+## Instance Proxy
+
+Llamactl proxies all requests to the underlying llama-server instances.
 
 ```bash
-# Get resource usage
-curl http://localhost:8080/api/instances/{name}/stats
+# Send a request through the instance proxy
+curl http://localhost:8080/api/instances/{name}/proxy/
 ```
 
-### CPU and GPU Usage
+Check llama-server [docs](https://github.com/ggml-org/llama.cpp/blob/master/tools/server/README.md) for more information.
 
-Track performance metrics:
+### Instance Health
 
-- CPU thread utilization
-- GPU memory usage (if applicable)
-- Request processing times
+#### Via Web UI
 
-## Troubleshooting Common Issues
+1. The health status badge is displayed on each instance card
 
-### Instance Won't Start
+#### Via API
 
-1. **Check model path**: Ensure the model file exists and is readable
-2. **Port conflicts**: Verify the port isn't already in use
-3. **Resource limits**: Check available memory and CPU
-4. **Permissions**: Ensure proper file system permissions
-
-### Performance Issues
-
-1. **Adjust thread count**: Match to your CPU cores
-2. **Optimize context size**: Balance memory usage and capability
-3. **GPU offloading**: Use `gpu_layers` for GPU acceleration
-4. **Batch size tuning**: Optimize for your workload
-
-### Memory Problems
-
-1. **Reduce context size**: Lower memory requirements
-2. **Disable memory mapping**: Use `no_mmap` option
-3. **Enable memory locking**: Use `memory_lock` for performance
-4. **Monitor system resources**: Check available RAM
-
-## Best Practices
-
-### Production Deployments
-
-1. **Resource allocation**: Plan memory and CPU requirements
-2. **Health monitoring**: Set up regular health checks
-3. **Graceful shutdowns**: Use proper stop procedures
-4. **Backup configurations**: Save instance configurations
-5. **Log management**: Configure appropriate logging levels
-
-### Development Environments
-
-1. **Resource sharing**: Use smaller models for development
-2. **Quick iterations**: Optimize for fast startup times
-3. 
**Debug logging**: Enable detailed logging for troubleshooting - -## Batch Operations - -### Managing Multiple Instances +Check the health status of your instances: ```bash -# Start all instances -curl -X POST http://localhost:8080/api/instances/start-all - -# Stop all instances -curl -X POST http://localhost:8080/api/instances/stop-all - -# Get status of all instances -curl http://localhost:8080/api/instances +curl http://localhost:8080/api/instances/{name}/proxy/health ``` diff --git a/docs/user-guide/web-ui.md b/docs/user-guide/web-ui.md deleted file mode 100644 index 6a3c4c1..0000000 --- a/docs/user-guide/web-ui.md +++ /dev/null @@ -1,210 +0,0 @@ -# Web UI Guide - -The Llamactl Web UI provides an intuitive interface for managing your Llama.cpp instances. - -## Overview - -The web interface is accessible at `http://localhost:8080` (or your configured host/port) and provides: - -- Instance management dashboard -- Real-time status monitoring -- Configuration management -- Log viewing -- System information - -## Dashboard - -### Instance Cards - -Each instance is displayed as a card showing: - -- **Instance name** and status indicator -- **Model information** (name, size) -- **Current state** (stopped, starting, running, error) -- **Resource usage** (memory, CPU) -- **Action buttons** (start, stop, configure, logs) - -### Status Indicators - -- 🟢 **Green**: Instance is running and healthy -- 🟡 **Yellow**: Instance is starting or stopping -- 🔴 **Red**: Instance has encountered an error -- ⚪ **Gray**: Instance is stopped - -## Creating Instances - -### Add Instance Dialog - -1. Click the **"Add Instance"** button -2. Fill in the required fields: - - **Name**: Unique identifier for your instance - - **Model Path**: Full path to your GGUF model file - - **Port**: Port number for the instance - -3. Configure optional settings: - - **Threads**: Number of CPU threads - - **Context Size**: Context window size - - **GPU Layers**: Layers to offload to GPU - - **Additional Options**: Advanced Llama.cpp parameters - -4. Click **"Create"** to save the instance - -### Model Path Helper - -Use the file browser to select model files: - -- Navigate to your models directory -- Select the `.gguf` file -- Path is automatically filled in the form - -## Managing Instances - -### Starting Instances - -1. Click the **"Start"** button on an instance card -2. Watch the status change to "Starting" -3. Monitor progress in the logs -4. Instance becomes "Running" when ready - -### Stopping Instances - -1. Click the **"Stop"** button -2. Instance gracefully shuts down -3. Status changes to "Stopped" - -### Viewing Logs - -1. Click the **"Logs"** button on any instance -2. Real-time log viewer opens -3. Filter by log level (Debug, Info, Warning, Error) -4. Search through log entries -5. Download logs for offline analysis - -## Configuration Management - -### Editing Instance Settings - -1. Click the **"Configure"** button -2. Modify settings in the configuration dialog -3. Changes require instance restart to take effect -4. 
Click **"Save"** to apply changes - -### Advanced Options - -Access advanced Llama.cpp options: - -```yaml -# Example advanced configuration -options: - rope_freq_base: 10000 - rope_freq_scale: 1.0 - yarn_ext_factor: -1.0 - yarn_attn_factor: 1.0 - yarn_beta_fast: 32.0 - yarn_beta_slow: 1.0 -``` - -## System Information - -### Health Dashboard - -Monitor overall system health: - -- **System Resources**: CPU, memory, disk usage -- **Instance Summary**: Running/stopped instance counts -- **Performance Metrics**: Request rates, response times - -### Resource Usage - -Track resource consumption: - -- Per-instance memory usage -- CPU utilization -- GPU memory (if applicable) -- Network I/O - -## User Interface Features - -### Theme Support - -Switch between light and dark themes: - -1. Click the theme toggle button -2. Setting is remembered across sessions - -### Responsive Design - -The UI adapts to different screen sizes: - -- **Desktop**: Full-featured dashboard -- **Tablet**: Condensed layout -- **Mobile**: Stack-based navigation - -### Keyboard Shortcuts - -- `Ctrl+N`: Create new instance -- `Ctrl+R`: Refresh dashboard -- `Ctrl+L`: Open logs for selected instance -- `Esc`: Close dialogs - -## Authentication - -### Login - -If authentication is enabled: - -1. Navigate to the web UI -2. Enter your credentials -3. JWT token is stored for the session -4. Automatic logout on token expiry - -### Session Management - -- Sessions persist across browser restarts -- Logout clears authentication tokens -- Configurable session timeout - -## Troubleshooting - -### Common UI Issues - -**Page won't load:** -- Check if Llamactl server is running -- Verify the correct URL and port -- Check browser console for errors - -**Instance won't start from UI:** -- Verify model path is correct -- Check for port conflicts -- Review instance logs for errors - -**Real-time updates not working:** -- Check WebSocket connection -- Verify firewall settings -- Try refreshing the page - -### Browser Compatibility - -Supported browsers: -- Chrome/Chromium 90+ -- Firefox 88+ -- Safari 14+ -- Edge 90+ - -## Mobile Access - -### Responsive Features - -On mobile devices: - -- Touch-friendly interface -- Swipe gestures for navigation -- Optimized button sizes -- Condensed information display - -### Limitations - -Some features may be limited on mobile: -- Log viewing (use horizontal scrolling) -- Complex configuration forms -- File browser functionality diff --git a/mkdocs.yml b/mkdocs.yml index f9fbe3d..ed4be3a 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -55,7 +55,6 @@ nav: - Configuration: getting-started/configuration.md - User Guide: - Managing Instances: user-guide/managing-instances.md - - Web UI: user-guide/web-ui.md - API Reference: user-guide/api-reference.md - Troubleshooting: user-guide/troubleshooting.md From ef1a2601fbecfe30c0a4d8d1912f74da0fca1600 Mon Sep 17 00:00:00 2001 From: LordMathis Date: Wed, 3 Sep 2025 23:04:11 +0200 Subject: [PATCH 11/13] Update managing-instances.md with new HuggingFace repository and file examples --- docs/user-guide/managing-instances.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/user-guide/managing-instances.md b/docs/user-guide/managing-instances.md index 9d9e4dc..0ee2171 100644 --- a/docs/user-guide/managing-instances.md +++ b/docs/user-guide/managing-instances.md @@ -37,7 +37,7 @@ Each instance is displayed as a card showing: 2. Enter a unique **Name** for your instance (only required field) 3. 
Configure model source (choose one):
    - **Model Path**: Full path to your downloaded GGUF model file
-   - **HuggingFace Repo**: Repository name (e.g., `microsoft/Phi-3-mini-4k-instruct-gguf`)
+   - **HuggingFace Repo**: Repository name (e.g., `unsloth/gemma-3-27b-it-GGUF`)
    - **HuggingFace File**: Specific file within the repo (optional, uses default if not specified)
 4. Configure optional instance management settings:
    - **Auto Restart**: Automatically restart instance on failure
@@ -69,13 +69,13 @@ curl -X POST http://localhost:8080/api/instances/my-instance \
   }'
 
 # Create instance with HuggingFace model
-curl -X POST http://localhost:8080/api/instances/phi3-mini \
+curl -X POST http://localhost:8080/api/instances/gemma-3-27b \
   -H "Content-Type: application/json" \
   -d '{
     "backend_type": "llama_cpp",
     "backend_options": {
-      "hf_repo": "microsoft/Phi-3-mini-4k-instruct-gguf",
-      "hf_file": "Phi-3-mini-4k-instruct-q4.gguf",
+      "hf_repo": "unsloth/gemma-3-27b-it-GGUF",
+      "hf_file": "gemma-3-27b-it-GGUF.gguf",
       "gpu_layers": 32
     },
     "auto_restart": true,
@@ -122,7 +122,7 @@ Modify instance settings:
 curl -X PUT http://localhost:8080/api/instances/{name} \
   -H "Content-Type: application/json" \
   -d '{
-    "options": {
+    "backend_options": {
       "threads": 8,
       "context_size": 4096
     }

From 5eada9b6ce7bbeafa66b440674b30c877f8ba6be Mon Sep 17 00:00:00 2001
From: LordMathis
Date: Wed, 3 Sep 2025 23:09:50 +0200
Subject: [PATCH 12/13] Replace main screenshot

---
 README.md                  |   2 +-
 docs/images/dashboard.png  | Bin 0 -> 44792 bytes
 docs/images/screenshot.png | Bin 48558 -> 0 bytes
 3 files changed, 1 insertion(+), 1 deletion(-)
 create mode 100644 docs/images/dashboard.png
 delete mode 100644 docs/images/screenshot.png

diff --git a/README.md b/README.md
index 3eed452..a2a1e48 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@
 💡 **On-Demand Instance Start**: Automatically launch instances upon receiving OpenAI-compatible API requests
 💾 **State Persistence**: Ensure instances remain intact across server restarts
 
-![Dashboard Screenshot](docs/images/screenshot.png)
+![Dashboard Screenshot](docs/images/dashboard.png)
 
 **Choose llamactl if**: You need authentication, health monitoring, auto-restart, and centralized management of multiple llama-server instances
 **Choose Ollama if**: You want the simplest setup with strong community ecosystem and third-party integrations

diff --git a/docs/images/dashboard.png b/docs/images/dashboard.png
new file mode 100644
index 0000000000000000000000000000000000000000..01cea2a477e272568e8a993b5770e3e758f5f757
GIT binary patch
literal 44792
[... 44792 bytes of base85-encoded PNG data omitted ...]
zTw`^*U{w^*SI5a;I>p0`gpWnqUI1&!cQQq=$b*0$-j95ZO%@SERw{Dgdh0|Iwpeu5VZoV@vYiQhKc;Xf| z%VyIK;<)khAR|Ec+S4e?JOCi%{U4pQ9$J-w|tFXY`Bk13d- zZN@E5zIMhlITC@-RFa|eo%JoJ+6DFAB8KHRr+IX4H)rwJBN|^DLw%Y==3_(BkG1M1 z?$1tW&-YvDElt%zDb7=aD~gViqQ4YxAo^KvvwS=X_=9BwehsmbP-BKNkEaXyNN53# z-GE?)QFH?(m-{PpJW^jc@|<`|FLOrSXCiM))I)J-KNjIOY<;u6Qa$v&!bUU2WTVUs z@_kQvd#{5}OWot29^_-HMrV(2y*tQgV6Cvm_2jw04`k3h##8LKe-eU_6Mb9il#wD; zgjX6TL;~_|u@%Ft?=(n@g`SreeMa8&UNv2-ZY9ereuV{_sr$wH`{eoiFEJIAtph&_ z!@ycR+%z+@@Z7Q}h|k2vRHUBfDvCO8@}->R*faz>>v)`T@Mlza;SAS)Za1`M=DTi^ z*ZV|_vw?~zb4o4AQ0jHYyHPmj> zS&P0*-9OFq`A(%7+{}gGKV0)4(`Y@I zrl?s_^S@0a@L-xe^f2>(F%1D=nptzU*ncsN{DW!C+A}r(+cY%~rm2}h%lQ|ts{mei zo+-;G_;0UUJ$SwNOw8W2;7L1lxz`wVrw>gw90`xc3?HwE@Vv(%>tact!t>0%VI6pGw>_d_)6wlkU z)FV7*$$b&eY+)gOz7y?v&P$(hO=Z~4PheuYjD=X>mD94R2Vlx98N95e4l;`uwNKwe|n zy7~!wp1ez9_lSDuwSB}MN+wh|T@SPa(&SV5*wV+UjhO81?AWhgSZ%`d9VcV}@PT5_ za*$c)@)PgSJJ_L0QltJ8Qamz(1`nmH zyA>oZYsHwZ@Tc!&h!j=-N_|MwK;DD*6ZSr0PI~TEvtzEXrwe^o&Y;tvZoY49tTlUMRnqocakxYpZuZ3K3o0%pn!p<<_ zWY%;cxWT{Tk3zp1$iD!Eve04hOKt3<$V7@hfL*04ok)>(Gu_=CT(Qn7TDsVjb#b_? zSh}~%@X)vx|9Bdn>@s{K!L;V2!(Wr#A|$Z6Rps2Sx;1X)dviPfYd2QV4X3D(4He<< ze2)sGZ6EMevj=7Y#7?how~GsHv*_a^VY?Z@2A6wyiLj?Zg{rKQ%?;J}-ABzPiw2ri zj$AtR&M_@$=UV`PrF@u)`0{flw>>#mqc)vF^6cV*%M}bmI-axqX6G=*gwyLM6~FsU zEME3af0dcCa^L&Yf|uoD4cFE2$J56NoDPFj{H_<^X1D7I!|qTiz6#^qh}~qB2F`*z zhQ-ZFLbIQ9osO5i=vT{o@+deRd8o!RirqB>^vc;Ey-Ql1Hdy77%M}|Ad@e4eJjwj* zH~L?t3RK1V&nOno*0GS{tQi^%f^?)vc&+c+@Q)@9B~TUP`!{dj#qpW2_Y|t0N^^Q0 zy`DG0Bp1qP3S#=P*!_tJB1b&I{RZrb@wcf6D1j(|Kx<7kBMDeFSR%KT^Q6~g#%`ut z;&tU6HrTR6+q?VHQ&UrOY;R9CD}ko?DwU3|8M-PR#s}xEAI{QA&#-my zms{LZ*E{@UaeZ=I($1BRz0KtT{KArt?|ue-7S@}~ibO6^V|XabsQtWjZ>UeTBv1GD z!@Y`63x!6-vns20oX{uK4NZ4*K}>HdbuVN~6 z7?_+2p2&ae9}gr4K!1n!16`QCvGnjtw?e+ohPUJS0>ZSeNyKh!W>OFpnhnOQco7?U zO%9(zJlJH(Vs_H0)))_E4bu5GZ9t+OswJj-`+HK;MZc`kaF4Fok+!RpINcU^@$e+4 zXEpMbux7i%kJ?~GG)JS}&CN|IM42}^l5K?~42IyO=gwt6L>dnsyS^5>!oWTCU@X3~ ztbZ>$a7oCb5^&KHx+XKSbT_}Y(L~tfmEe7{7A7!}uj%(*o|I)fN4-^}tkTv5{(kQk z$$Y8`L8rw%pf`a@Dr{XAwXD>^yMSXoA$_~CaC>pQPJ+%b5|IY~M?%<4jrmS@JLK}y zdDZd4nKoa=@R?j63jmHf{9SlZC<4{WBd|MgVm96R;m7~6@th=^wz%QPB4K~hYka0y zoNK6$^5}T|>f+P!Vo}_ngSVq$38gtfUy8@1Iq;tCO>EZ!#M7KvpMi6^^y<&AJ{=16 zB}bAO!Iu)ix?*F3MA+bLlQ^xBCrVXx)M#tGBcF(bbD&Kg*02z(0Cd4@NjQ&3W%MF0 zhL}MZySa_J(@cY>4Gz2DAWXuD-Zx*Ff?T&^8DL9kPRgqeR|mghE~eMWU0iR|E)RYg z^=f5j-W-3V^vag;fk^2=E0eu%8z%BSHn{I!c9v?C(vS++z1kDm@PhpxN7{$fr5_HY zD|BSXY8gJBB_Ef46-tAkpM>>0@!rehe3$3_Uq+SuJnI=e;y=aesXJ%A4B5%sMMrUL zxd6DMqg&Uo!x!~*UW5Lm`E2Y7 zF=}TfH4P@-E{BF)0}0Y};r=}D+9gG^nRXWjFb8Al4LtdM>?6P1FM3T!^d|v2hKRlq^H*WOM1iAQ zAS%8z*gh-+w*%-P81+tP4Jm52m;t(O^a{#`3J|(&T6Rk)H&rltn@e~Wt}@7Q>h&+W)Qup z8}~Ud)hu(3do1g6cp|LReBoa6wblDNjl)=061y!p#%XQIfyZV(PO>fMb<~@L^KThO zPOPx4%7NQ$*v|4`$sUQN&De}YgN*aujKaW-V@&Jy#$CyRq3wT)n5K9_S>6MZgi@o% zbxvrc%?;eMzf~>c{;jIH!8|~Uddg9j_c+OUW!(z9vl-v(=0amfYKUg&%|;cZ&KGDh zNZ&nxzIM4TkKBIp?F*<4>e);*!K33}mQ{a2Na3=Xf4jk5@)lZY+z2V}wO8Pn#33TtX^UK=qcsS7`kUu<%cBpgU%x!sW8 zA=um1jTN?_)abG_T)YHf7pEdjm-E0Sqy`KhwsjUWb<%K9K8bj#~T$aIxP`3yC*^~nk_dCIYL{a$HQ=T5Wg+J{dK>9v>(ox)zoP7c*C zc;3}Te?qx&zJM_lj+Iq68TT|XJQTe1w{!~W36eDo+NZ6GBtqU}4MBL+?ws?IIvyCg zZ!LI~UfMmnb+QxG(GNrTd4VYLlzeDRm;D7s`LYF>Czr>BPgx=aW}DBSd)c|^wngPKlaslJwKKQZ=&2f@~O24ap}>QVZF&sINT z5*<~J9^Q4YNNA@&Y$Q{f@Fc=8(i5n*ZnS+)F3gfD^`Gdp*o!wLNaADfRGRd?EY{+T zJ|~jyXV*%L?vNa~bg||q*W$6*Gvuf5y*pZCvd5LIr;&+Id?+til75~$;LVllk;O7d zDt$jJa@;Vf_5Sa`Rs~_O4Hs7%kko7UKH2t+xLT{c&Y5kF?;u0pnf^wGE_r5?4b*=+ zp2N{>d+>jtdH=vc4!|%T57OJ!C(Izr%R#L=!NYP3Q6X!&+a(VC?NNZhpZ~r(l+!45 zKh5XgUJVT9%y7{E3z8%bU#vOe-rC{7UmuX9>YB&B|BWdCgKiYVq=ofL8h@`} 
zICzT(xTKgR!2a)(4IDD>1CkVMZvEz8;1W#$F43~+gKGbMG=KwPJisLxNyKsg0+*-) z>EeG?`(qUI|8lRnQ?MLw43S${@x-EmA>Tk?u^?1qzq~uIXI;<2mb=?UzcHfdYghEX z7tMftGeA*ym#siLz=F9K=ZnB|pEL?Ng2sXA&m}$36&sOwYjRjd20L=*y!L7o8?rh% zy!eXZYq?MCtQvZYkPyf>9Pn3T*@~C*1DCSMC--Bg`^GCPE7T%uL$O2bPqFO}5GlwvO=Lin_;?5(kOdQcbGn~-SRn!rLbJ{B z(}-pAPr2V`X_{+p4iRkjdYH|;{CMAfIIr7^0YJVNIQo*#bS8B6&ZrV4H$Em*CWn#| zUytuKz9!3L7j^P=Jv)&G*4`(Xe@1G5zWO%mKx46D(0jz&1er`nK7Kc)MSmo+`V#1^ z^q{y;4VU0`4M@n2y0r}waH?>_9|08;u+@@yw`oNn3ZHD(5+sAgY4_uj(dP5cAIsr0H76A8DlwII@p`$gr!TqZHag!?sbGN=>XKFbqcQ2}dM>|6U-@`BHmJa$`XF&iz`CQ(dP73d-LDgsXmynCqg<-oE+ zv>Gb&{W=7}1h8Ij2e$H|lWR8eP<$4_da;z`#vw`{tTHsvQRnfcfW%{#v&=q-earPI zA0k&pBNaf}oR*k`&_KQ!f}gW$JQgB*45C=Vgjb1>iU3dj0yoSkcYa@LYHETdsL}r# zCDR1^+B;>}Na_H|*k8(5r_S5WH7dC>aw>Vy?^)_C37?_t8BxrqjKGIZ`X?G+yB>Ar zIt(y`;u&(Hp)6{Mz2?{p5>+VE^4u&Pp4-QP4ZBRqs$6GL4d8Z5U%0UCN%9sYULN83 zUy<_J4kO{!nM@#3qiA$a-cV!^Hw114xWoZ2F5!QVe!ZQf|@h%zrr3x28A{sQt%80Dbu=xAQ8T!I#_ z{>F!#zp{^$i-BSr!l}h!`#s5a~c1vJHqtM5+vRSal|VwKk>D*C1nT?BLZA7g9Sv9)AL4 zELTvP?!w8hK1_Ruqj*p%bwi~`BRlIJGhntSLr>NG;gE?Wfk)a=HVO6r4;SE{&M)K# z)-XS9knHByZ3YacVPGm#LZmr`LGGF?k9pJR)OECi-f3^oF z)){;zMdp1D^qiB59_8kd(UO(2XS~NlzOiWBO(e8IsJ6?}5;oyTQ zpiPcq!5AzoFnM`T?uK9VdqgoLnI6z+3Vm$A>dgM6n7|$mq=tC*9&7{tuK;giRBwS5 z3l=f84S4N20=)NTpxxPMeI3$UsF5s(cou~WmaQ%DuOv+f&?^dkV{p|cDOJ@VECviP zzxJGXcGfRS3jKX>Rg!GNfaHJAKXBF z<^iZGD-Y?DK?2RM4){I>*4i6jb!G?v_EX1wP?0N6B)b;`%(ap!dp(3eBQ10-2f@#J zQU?@=03QZC`Q0Hc1RjVW|9Vr{N4STE-v)%3c1O#+q0rZR(DF>; zTipx*qUw4ctAgO(UnMkL-x-i0erG4jx4DkAe~24;fY-uy(%` zp&QWL1qPAh*%|VO%7V8abllA#&%&V4w|fv93r1kWjRyzbNdi)MqqXiE46tRygIm`b z&HO>)fY`}UVoEe%A+oTMB0HWC16K|>fE4l#KZt>hmH(iksMiM}ZOn%OV$542Cpf^g z!$+fjkXqNu3^)q?$p?ct9cS88=sP^z&8;Sdi$??kI1Au;WDUSurf!?Sxbz}$l^6p^ z;V>$(02Gg+G*Dw%)U6jgzuX@b^ni8eBOt4%_{Jw-fEzlK(GlSQhXQM%B$*KKW->K! zcSXH)67<`HXJ}!set^J`f1-o*_7SjNQvfNx#IPPc0zAqHM2E6sx&h|ogMzc-BCkjx z-v>Uy{Fh~Sn$vvIOH66)Jy0-fv0!9X+Rt&1HdX9|jNK6P@q3zF37x7V^U|geD z>2g$^{9Z+1+TOLx39zFY>`xz+f0P1PV%3%-pOd%>WHQ8pj$(F?H`vTpVGLCOg6)31 zPTCns9$cu#qR{N_c<=(cKkf0cSFMn3|Mbi?r`og*6MRGSPaKpqzwiPEv|Vpz8@w=u z+>TP(g3x5@?ce9@>C7sJEC*GYd=e1-7V%KrWOd#6ymj8=hrD>tl6_2AGJL)-LJy2a zzsb@>+!`ywl?cUSe~sVbxz>;(LMq_>p~d_5RR}H}RFjLb=TapQ`IoUS^JkfaB2v>9 zU_2vhO~yod@s4`6PCH*;m)}!g<#X@yVrNK<30>L>Fv^;~KA1+qRzB%wQ)V#z`jKYW zGvb0IsW22j>lKjD`#Hyx!xcKs23LNre6{xM)OO`Bhkb^_pUaO9OG-*scX#E4g@un? 
zPALXjMLaJ8Q3*@Mfa#sPygKHsNRm0(ku1rWbWx~Qp#v8_wuyZx>IPj+Pk9H@Hoj;po9wp3hKS9kQ_`!NemDS#4=9a7KIdtTTQ;KPb%9P{2dfX=Rfn z#DlL((sTuD(<|N$Quwf{m6Pq_#~-^ME)RdGMq}`(k##&6wkv#FOXGEYma2(TbiMOs zWwja@Dzy;~_Wr#(YFNkWEqn|_HJ^PILNQyb(1@jTaW)Kdk6BMOJ5LM1V^V$We!A08 z({D$JJcSKJ`q^0G4rL;}1~;F`*3TSt^ryo*sHQdflV8XD1%-sn)=${SbKe?eF{ZN1 z7&JItv*NQvr~fQYQ;4FL%-j_?Qwr$X%`sW;I213C$x|H-i$bTl+T5Kzk7d*mc4HZ; zbZs#pC)9C2vT^LSgo2c!BgC6;ziIr8bM%_hShrdsS)x^i>Fw-sB3Q+t3pH8=jg%ib7G(FRY$!+Sihoo~5NWEXR*RhgrQZL{)lpD7M>wiPU6UT28bM%q-w%6L-Y!7_ zRREGvcQh!q^|<*OBTq+c_{gPJIY^n)qDwtA`FTuL=FVJAbSNPceKG#d6|YqBI z!PhgOx5Xg$;pS*VXI4#VQ(0ndsP{afn7uSzFOP^lvrlRTm|8OfrjkdON%`zwGHNyO zqLWRDH>fZ&*!bBy?%fz`KDXiE6A&25XII(W93zb7kWhBB;wrb2kSdzAjyF=e^W-bg z_Y-VrJvJm|o3Xn(yUZXe`c(NWgD1eZ8K*e9{QzI(w+&l9nN{65Ur4wvi zZLk2C0F-=@8xe0rLJaqQKts%u;ztStsa&dHwEadeZTj){BpWI@U%}1g+@_GdHoyB$ zRT+2VYo3FWRINUV@9+U#ugm-P)^9x1N(;uiE5*8t8kdj^Hw0cDehdfe26GYz)<;cV&-_x zY4%epViKUf=inQ{b3!{DnA0o{#WsC-6r|Cb09tV(cT!UAjuzW<2yWPz!J!F-(lWOK~K5WB_V zT=bI1%Wn{2#Nkjmlz=4^Z&i?TJ$vVv&+#HKMO&}IQ*N^HVvWn>C$E9Tpk-Nixr{5| z#T*OXRuaSR2&tP2qwk-eNr~}$*w?0LJ1}ZxuDuULsH> zmOiRy{+0=!NfRP3d@1k))GWJcILc9Jb}V10)J#>Z!=<2G>Un;#A>7PA`s3H_>Y{+t zrsN9~ws?Zwm~Q9SMQDwRN3NZVn|NZD$IlYehr(?0f&|_ltxByn_rCm^sZOsSh81?c zF{Ia1aS(Vi^{d$4;5oD->@hkpnaiT(qT+W`zX{7=5w#N2#soP_ev9wQ$?2O^sw7A? zCO)`MS?m;KJ>=~W(_$IcsM^3j27>j#iMftk$epnpjbO6r{>#QAfYZkw!O|f}Tkb;I zdVYBhv~l}{rUWq#K~zIr7yI|~f;5CXzycRK2YBI(8|Z@FOC4y)JNAP^8*ZeLRw-vY z+Tt`ac<~|%z@)SL=@VL?^&k-IK504J_X3CRcuq6_eG^UG$Wr_w&RFC5R+DCuZgd*& zbNX5Twm_7X4g1a)QH1Z@41TtD9Cxl3IP{qerVB`QAuVSkJEpa_c%ADWT(EZCDcWNv z#&864y%An>sD^BGPA&C&gnQAJykL8%QQTzSrPhl>gMESVuXi`mAIU{ncNd!@l=I{% zgPgt`Ipt{4(gBg@L(KlQR6S((nTUytyz};I2d-WkoWtnWmWrhMqFWBXMn#9C{?H@7 zC+iZv%Q+T5Cd|n!-GE)eTe2c&P zF5cZkxE^pnC{L4ZAIyrBqVp{X${>q$mF9DRa&))CsqVDk$0#_B%}yT>648Fdejn( zN$u}7IJ_E8Bny8(26VSQ3;SUprFJk@{52_r?#_^`rAvx5unJ5xvrxjcwFP zA1u)7-=9$4*7yYH(wecgcM6HoP z>ZM|D}U?YZt&>p$Z{MXD$;h_Oti zSVoFfIgj;C<5GUEQEUXe`(Y;Ay$Um~fLnqlkETX~fZiZo-ge__wG5|#7vq(uS|Rw( zy~H#x7GE}ijj*KXF|#jryz~c6moGfO^PoPS#c8f7fiFRrH}F-tN1MrTunt;p@e_Rk z{JyN`kHxBj03Ci|+pSFOodqh{a1fDZ)~ucRif%#=*Hyq_q?&C%VxZe%GF@ewktCR! 
zvoWxqZq6wQw1SzhZE*H;;60BJk5%Kj3NA8wKhIWM_rLs=wY-!_ZJa6Ls=NN;FURR1 z&_IOBKFr?W9@hT0Zm^Y~z+LoA#emDYEikTEmGgijiMtjNdd(wTTj1hCd^om3nX#44 zAt~xmM9?Ni11v#9MHKb$HU9=xYlLC(V&{lNo_;L!!wHK-X>E3GXAC8GMS2&!V<0>> z{e?nUbnvx42nMi+PF@jSDlNe5w(}}h9Y)?&v%diSbtqAogy?pSl)2s-LgY5 zFKgx*pE2axj9} z>#iTHDNA-8)>mQ~L5jbDCi$!#GH|ACO<9~2FSX9S`Lf>#5pMh|s_RlzA;db&>StSB z!mOHzAcqU-s)zPNI%WYU@MWK!nBS0wHzVe3_#R>Qm>7#DZHxCD7B>-71~!$V`cH8d zP`6{H_k4u12dkCK*9T?})^7^C-LJ{!=U)RFJq#CCWHZm_R%D!d2odxC4?n7<+8VhI zsspjP0MNhZHn5-aj9?=oCGHfmeQ(&yvI!8=J|^*}g)AdJC+M$C-@~3K4R31={Kpd{ z0jpMVpL(ES+u36uD>U%T`nOuu5fy2%;WHm#Mp8IdoN@y~gQCabiVzn~i;g-7@eedk z%Qb33%!^HK`Z1RBOzw=enB3_T)y`KvoXgjjtKBW>#n+KV!~x4AUwUIjZWhBH5!5q~ zO+y+-i0(7KI_TWkb7{MysnT!MyPmGtVv=2EbHTGgDlBw)foL`OtT(c(u$59Q^Crs1 z_XEdahIQ35RE0fRN6pqZlnzm?E*DNpFkMzL^ZLG}HT&rB60xm1(`9NjPg}T5aRg>7 zna@=1ftYtpv)kG3HB1k`J5;|UZooB3a8p&6sHH|OS4jETWN7mqhuvL=!C3(7OeWP` zBI^O(*OGrCnSyX>kkDMvn-&>A9@XY+WYlEi3UrX{s(QTbSeY`+yr_|WN-S?&ZynR7 z6F`?< z9p~6+UhYZ)f-4Tl=3Vgcez>1=Qyd``-0_?B0FCKex2=9YsG~QTy4jl2@GIfWKR`!c zmezoLtkEBLU0^bX-CemIOKU^#<@Z#K!Ihh3Mh89}?5I0ia;h&MuKpo%v@;`h{AL#T zO%oJu)uoWYgZDkIhCGW^Bgj~$0P!uFEFTd0H z1$GVPg2(oWzxh@j5#sK4n|0Zub3JuL)UX$=1M6BKne`oBCS#O~x!zZz_nHuDGE`XS zvA`_rZlqxMNlyI5pA(rGfD5t^46(E(0gIA>J@VBXe2-R(IP89>1OnagyGoacIkBa7 z4(3hCFOscsNBQyGS?P5^-3uh8J+hVRB7*oV)&(CA(U+{QkH&m+TbG)m6M|j*8f&y* zfMn!Umc6ioA-%;?3`39OrQ*P$z9*B?TJ_HD-rtk_ov9V`cYw@WpxmeRHss<* zD|=R1t4|kIGkJPEm9dd*Lr6_$(HqBdIn2Sqw{P=4k8{+Drrq#uao*#_FER|x^h4%j znwHy$+Q)=_5yzP_XMP1)fdTR4xW}mLg6XZ)p}CK8p6h{&ND@XlZz?|QAZQ*me+Jy* zliU-3*(8ON_#?E+*G6c`q)flek$r|KV$V~Ld+(_MO?@rK<+ft)?Ajs7y5V{=r?$th zvV~2nW0Iv|*()b;zYt?vej9eK= z!k&>CYMZR7_-MQcoQajI0-&65yPxlrC7l-#L1^OvGU@+3J7>1LQf>$120IS|-r~7- z?)86DCc9U`kr(E6DEyK!Qg>N)==#G}_oGV=&E8nY2}RLu5NKDZz~z?cq*%f=eyseu zYC;KbUVM!~`ckQPmI0_*HZ64jckMYiSaEhQ-SS@B$kE5s*6JC0KDxlla-yS=HhT-p zbW!rH0__$vr)KTWvN7b0z$uIoTrZ}D%Y8*05iexOwt8FZH1VNGPXhH!P|$aPW{4>O zCIo!8Y1Fa#*zL;HoX$8AY@^EqjLNG2M%}7-_*m)!XVo}_sd5>(@mTYVK~(572+_eu zW-`YHiy_k=&lwwEsW%cY8wC%a@j>{T1;|3_$XOas5L2d-^5%_JX?{vq$3;j*q zk>`Hh?7}q%zt(=8#c!QbhLts+!M->s{2YX z2TVCj%wuU5(qgfQTYxtSIc}~h5^MI50@Yy>GqJHgHa2%5LJbtb@RXb!C<=jkkX&Ih z^kzIdORL9NX#HF9Z%K$sX&~(EJrI-Hp3Quw2B9&(-Ip(KL~NWrh&^lj+^GTz7eg;$ zP|$3TWe2$N=`-7Z++d;Y^f3(ynt}q}_R&O_#4@w<@Uk?YO``WYB~wO4{qFWKP}O#X z;6UB5Hq>~mgor8DZf!jBzr)Z-2T6OX^0OZzOBb2#(!*GtBa{0=@=bJz{K01GNZqE* zv^Q%o_oC!sLT;-U%S*#Ub^3{M2AxW7EkC+@r?Z?g0RZ7~+avIJ;7PW_)R@j$QzIK^ zsp<$JuA7YQnTladq~77SoG#<-n2@64SmTY~Uj|_0{YP-1oebc8|GOyJ(!}gt>A`W- zts#di5)DKwb6qM<^01XeMaj$55TV0$R&E)El-TyQ{En=6Za2<>Nh;+BnMtPBr$mvl zEV6yo(eYB9+S-6JHEq1?VicR}m~8N%UKjQLRvr>W>@%La;#I&Cy~7yMO;zSWBk)OW zP4MEzvTrL{76D5?EJVbv5JN+OGBd$@`~u+ectJY9ruK;u3Fkc#al(v& zPJ&tQjxzvRY)KdZQ-FGp%IoO;NBGGB62PnHivh#iwOj*{V&ReVt9eyh14$H=i_-L< za2=P9+r`wSH>8NyHVu_hO4r$edH!J=8kQM8G6Yb5 zDYloo#w*khM-^TI6I0N;N!`JclaR24pBhXAlh~F!sz49Z|5AgX?0ew5OojpZ!lPsB z53@r%VX`YbH82HLIHTTfkCHWE8|pH3M!KI?s(v<}x$dYcM0J$8hw~oHrblCjDdV30 z%_e3b)8TvNRLYCctcFzoITdcVN(sw!_S&OT-d0~Xxq>hasZoGC%NH||>}4PXZA!^z z5)l@1{L2|Q_urGVn0~oh17kNXf2nB|Ewo%=PY&;yfNXBBtI|^yiE(fO%GkcR*PQd$GeK+ZziwtnfGI&T$X(evtKIu@-*D2FnBzqm2+rqKt^_+&riv#sT*V1_Z$T5UsDjS}uSMztc^r&p-Xb@EP`Lq<0`B&Eb= zBZc<92#^MT#D7Jq7?ar+r2a>$G{8F5fYG*vwJ;${V%X{`cPQy*PP@OXLqY`+B^%s1U?%>t zKvA!opLb^~$R_>5P?Y)UY@v}4KLt2H&@U5IyzS7knkDL0QHYz-97~d!5EIX8{NNtQ zx45YGX*ge!+{U@Cz%`;*LFSjk*t&sk3dF|;d18n`Eg0OaRJ?(v5P!diO>l1^l9%*rKXLA1=@%^J>E5 ze)1EmE)5_(_J$4M3qlJ^{>87+_*va)V6xEUY@i=0JI9Sf{TX7Siv~_Cc8C8_j)Z2 z-8wdAhTPX?-~Mkc!2GB~&7yndXZK4;R6Qaw(sUO?q|KVg=(Vz4!I+0KHMI15dTD4G{Sq2BuQ-y^y3YKFYfGmeX3}} 
zn|fv*^G&4Iz6MLXf8pahE#r}~F2$(Bz6ABAwOYBXTKhW{&)OqnV$HY)o|s;=?{6|o zJ@bqBSqQ}EpGoMh$`9xX1l>8M6<+ImTy^tABY2uffL!l~|4DipoM?954{k@y-fcNr ze|)EGK|Y+_PTF$dsp)WHH=C5sV3WR>SA^V+7HrpBN_Ohjw%*S)iEbz58fC39MZ3J? z18P!(4DwbA;W-D<5V-T%#Zm@ydw{0kn>&Sfw-7pZ)%U*^c`<#T@O(r|S9zH3D?qP> z2`xy!fvv?YB# z0OYGlz1w2(5Rkvv$jHY;^vpCsD7*r|X zVI*trfIEa-wP%?{KL7u%5O^)GTvX#l*h2z-GwL?|@V#+OVTb(s>CphW%R~+P!dti6 zWdgQ|m5||k7|!s<@oGx7zJ*>>lhL!L2EP*g%{T5It~oqw>^ja?oGvDHr1iXd%G;Jk zBzs|@{JQmRoIGAtEl*m99`b$Tt-1j6d^&8`miRN|EWsZ{ov6ufiybBvCiXU{J$6z5 zC?_691w_siKe!1^wpCB`18tC3RTa*TmOnj!4G*UHm+e#V5{L4Iq zM~*#<<;2%1QC^kZQ>oxfrOm#PzNBY@v@he$P4@0^vbYY;Q&%;vr_>LI3Ei27-B11) zN-hqRMWm!$6*_b){)pJUQgf%=yxgx3U+?{A!xc~io%U5K3G6x7Z?`Y%&N*gL{a+Np z34ba3rhX?PrCYsrUXZy5q?>3-`bUz;wy(UXLh_BMbLwHw{MW{ z0adv*_ww7PzU$qm)Q&+hH8|ge(#dbW-xO4&V_wWAog-5VzC=t&BcP5{=00H47mgz@ z>8Xk+zO4zLN)khtH#5PP%Xg$6fmu0Y=to|ggD$kyPommB znfh*Xl!PjR3dS%dh+kI7H)I5X;ZKh<-|JJ#VEytQZK~3nQ!{u#rz0N;z2gi&zp7l3 zzXCTgZbp#ID-SrmtgItL@#EHiF5>A}9gpoJ62G&f5TVNexd=1smyK=^HOs(&qJdvE z%Qk&Plu?zGQ-VgruW5mq4kiCU4Xq-L$Wr>P|NrY@K3xkY%Ow9qyQJ3AvS2#zvD5S+ zFzY3!C@k;soz}Rt4@2S!%ie*7KXMj%${a)#l5d{71C=~VpKk-pSATi6{9nBb>aYG) z@=O@(XjA-nbG7^!w$bprS#n)U>iK4pwc|U~t1t%huiGx{C@kk_``%A9|7%v zI<IZxTkN3sb7)sf)z)R83}363#0yn5wXNy2 zs)QxaIZg1}j=$?=TZ>21A!p|<_*UfLsbaDjdr^jKJKg3n7P8--=ZBq37O_GpY=w}^ zCK*i8qZ?Per_UjTEyfE|9#8tk`?3^eIW)pw`O7v8L|zi)J~tnh$#3cqNWRa8D2hmb zmF&t}w77elfao6kYY{G5Ha>!kAdAnqOeuDX7GRCsSp?AfdIO(WIJ4EXGDAXUWn z57lzPVo{c_dZSvi*d&8XJG-7xo2rHPfmlhE%h$wLW)^p61GOZve*zT>!;6R11PeU{>WH2el83Le07bL46s#3~ z3~W@UzpynK$-{?S#2J1!!2S^!mWBnjOrH~F@PYM+Y|1T$SeA%IJeGS?)mAE=ZM6 zPfz#VON))Sw{IKWV+J#Q5@&3T_QT@@T%dNzOlvsp>_CapiW?MR-Ar4HGmJ`VZC=MZ(&fXSjK?^o(D?O13~ zuByrLJNP~0)~}-6e+J8(7R+{OpHn)_f>|E)iv@_s7VxAuDT>o*kb) zp}jaC5MS}*PIIh|A51^1F!MR{aofzo&kANEDe$xGffymN5%5_rl)V22f-k6Qs^K{H zCWG^j7_TI=WEEj8*&~)Y-!u?35WjOy-Zsi?<~i#l>!Y8+I%B_%YQueZCNK{_J4Znc z!tOG$k%KNi?KZ$zKnzTq9@+Nt4=o9wMys8PSa*;-{3=Zl|CraSAaW#=1WtrLMJuU( z4Ejy@PjAq!Y0kpJ9fy{Z+UA7;0~&^&C4rY`hKJ+)RRdULOybJLw&CIA1U_lSMf&IY zxv-3nd6H$oT%`wQ9`-2pDUS85qvGk&^V++G(kRbt3?|F(w5GgnoYWlYT#H*q)S&6j zua=q9Wq2^0)N$EQU2`(~*c^#~bd6jNaORGv%Hq1wfF#5ls|5(G^2ZsAK!`by?bRt@i9G` z6d%n0QCJAqdOAqe1Ef>)33g$#*hcec_|ph(XT!M+cl2txRk87laH(3y)E447wbQ z$Mn5PyR1aIGq^N_w4o1?k&epA%E@I#TfHpy19!xic(GG1zRj4-G`r-C#=R4p7nb&; zn9LuM7ke=dt%NNB8m4MpjpoT`NGI%5H9;}`fYWWzOsmXsWOb0nhHex+?*l@TPaJ@V zp>+;rPZi|2vrVV0cm)VuX&*_#XBkJUTCiM_`=3PVZo#osr{5*BJRAyZhntOt)7%te z3H$P9Yq_0#ZvBO<9#pagvxE|cf$L2IFG5k@FM#aJuZB)vc9oimRQYJ=GBTc--y=bu;rw{o6t(z7d^P~|f0D=Q!2ZV8IGc7J1NA2acm#>!b)0k}}1RYXk4B6Z2waciV*)<77`QtgT znO^?%z&AEz8k(#RF(fazUQaa9&=WE!T??KiNr_Ua=JmTv_8d48*pJp~exk*zBiZE7 zK2NSpPE73UBEJ|SOP6}0rHx5Mu9jG`7g}-`q9`rb43S>~n7#+RTr~GGgPH%j{FA5E zC)CQa2_o5RFs-EWzh~AFpdrhli5JZX@7_7@W%2&|7^Qx!fN#@FGP3?^coVlGt0zkX zH_TvipUg47f$p<6heMe)SF{3-4fNX1hdeTz+mD@5y!-Q(Sp~93Q5(>hl9{QjF_h%Y z@w`nV)21kn8N79PNzCF?l?6HP81dFBCI>3}6xP8RXz1?z_M*F{@n$%IR|mnQK+M%K zyQ1^d0bEphg^QeT?an>`OwCzYs~~}-+w4D6tL(}PI86(%ZnX#_w^RlFhE9`pVGXQC zWIQZqLKe_nOP=$`oy?ZC4SP>Fpu~k|_HDS;s>Yp;HyYv2xa&ayt7da(P9LU&O!pQ7aW8jB?V}>rr?;PhM z@#jt$U6f{^-M3?pRP>y?zg@7!MNnl|$8tC$&_ ztui)Dx8Yb(*^9qf}r@ytaM&Q78@#j8xfq>7C3^Y^1{*M<{Jm9 z()lJ(kXdQbD$BV``G`yG7KpNV+}hGVSwAN={b|Kah>im%-~cw?oXzK98# zA&;Sx%|>F<1CQqdw?4CL9awa|GB##iC_bMT20#g>i6b+Mn66$;*0oHx``5K`y=_4V#TYq{fSM~!jaPVC6n@~Wry3r#M;N8n(D-mmxl<5$wNEe@{r<|$7R~4xkr5DKX#k_9 z7~Rna)W6``hHgTOBLe}E2<;>Kc#)-WGUS!$LTDRAklgasJ`thY#2IeLb-p9$;Veo^ z?q_|SaRu?rITd&w0Rk~U7q_McaG%>KYxaZ-pn0kUJq0`-b599LZ217m?OMp`rn$4J zN6q2AlLWzS|Lu{F%Tv;zpb21|n&QHV3Y}6{>ugaC(|&ywo_uod=Aw}Mw?#!O4};_q 
zfcJC|HBP`;=>yx|4`vZ6*8&7;9gQuyPUu7TPoAS?N1AH{MK|szr#oBlg$H?DIdG4} z+c}+A4xD7^v&O@qe?f(2hy1l*9^k#)NxKDL&r1Mt5_V+FjIx3><-BrO^oW5Y`6mwy zI>&R3P?PMDkmORE{#^njk$MjYl8^p*@GStOP_$5{$z3Ce4*_oJfRJm1I=~4jmJBoi zC!}|ELKFT?KV*O%+J7!vc$Z}J>H=4g8-dj7Tq8=q_u9;@#ixXkaQYVDb*BwPJ`rAR zJ1dy|Y4A1RmSG~4!rdew%B6X2_C+_@2?&x3uY#n)^AA@3SBD-Npw2|`r}dvanV2_u zK=ZC(XVWJ5NLP}moEkXE@-kbJ8(hG)E5MZ;2?_qwVeB13k{7%{=S_jo0$>{ISK6+Q zS^I@0LNne3gXl+Ky88MFBTM1!E~d|T?N>BVsvS?RbVY&X-t0p{5+Wi;#$-pyH}m6T zjtBYagNcr3np?T)z+V%gC*`?kLbTssQx&J!YE6gw6I{KK%H%k{m)v~+zQ&2w#*5kf zOg})VT}lAz)NG}s&SM7?eJ^G5r7?8{OGgI7U?wN}L|}(a5%=Cm-hxvSqgUxn@Q@m) z!h`Q71Db)cf}ePT1Vw>-pdz zv3$q0iUPk8^8(vVw|AHUZ8EB#J%90;jwQ#*^*cS?4<#E_2ZT`3(4Cz<`0b(7`2*GI-9Xb>1qH zbQY^+?q92)z-_DDK}s8T@8;^{!`Sm4k$yj1LyIaDu$;H>0>?Gq6#Mf|Lijml{ zT>Vu2gv3%8?OrzdrF&ZZ1IDKO^~B6vL*z2wC&tRGXP64`8Z$4)I)lykoRF#DE42g_ zEk$RrYJVVn3ity?(GYH8bcd*+2^yKuw>XjHiF&2Et!f;b11Q$Cbp40xMCi|izKBjy zNC4C;$@^~*pD+i^ZS4B?;`f`vMC43HJ2zNmiO%r50xsVs#h4ohY)2V$9Ak8d+~ij~~iQV%$;A6)S?R0%+<2({YHv{RK;3S4J7FG>(- zjhbRz`ezer$u>nKG*NPky`0;ssa}epBAd#DW>c3exI+wM`ov>H(%wZ|byE8gPp8lz zW>+~E7nkHSGARWV6^{RCUV6LutA(za0Ip2cz~>9ytxoLBYutvsosYf5@XYw=RaQ|h_F*=0Amwy&%1DJn9HqiJ z3|a&iZIQh&<5wUyh-gPid-GOHHCGO2HvYNhapvDnXFl8###K@&fb@I^^^C5yA00|^ zYw*XW4x0OebjTn4aj8X?sVze^s5+RP_KeX@TAz*pcw|pf{w^UX2J%a=qnJv8f^FcWWIqi!{yE=>5GXKEL6H&3^@2E>|=V zmoENE9cCaSp}T5&?qzh7Xn*WaOOuUV`isz|B)91-&!i`2es+xC%}O<;luC=zZT5_> zG|7I{X6%@Uh;W#++sBMF6BK*&c%ozdYjfP$42J~9w$gULSl0Nnd%2C@(IJ;>l52z9*NaRR1EAS2mX3}TNBApi|${tSr`+)5V(+rq1KMfVi2w2le^ znJRLzW8PNtNq%>MTb=c_njEkVqW4Er`BRUP|F{3C9)`0Fjql4J{J!vTK=DVcV@sOM zYjQ&ZXuj>*)`qJz9FE;l%f%WBwuwD_KZ3>HJljq~U4bzBZX^Ut?CSrtvu??L2&*E4 zJ*{;GZ(WA@npFJWZdANFOmSQwdrXkJq#%W=X}{4Mjx)Z17iwB1qODvc1M{eMk!a>Ifs!9C?34CHPg3QE2N9EcF~dd)<``- z<8;l<6YqH#5#V+^%-CR=foE*zXw*bmX=$8&W_l&#jAofx34iE=(J3Ydg%y7TsuhMG<`b#7xT6D@+?sCOLB?zmiO+#Cx82@VKJ=Psn4 zX0|h9XAWFCa25?qZ`#Tm_h;)me8tQ(EX8Zu{SGM|8+5P;wA&@A#cN7_P0 zOJ==3>T5{pQw<36?;SaCR3rGt4^gnz#2m+o=z_g2yiYQ;(I2Gupc1O{eZPf!&y8wS zG9=T}-b#^(!l?Y;*~>D9x!%lZrgG=Rt96>atL~B)cm+TqYn{K0bRVyzZ!aQ+`<{i9 zI%3taI=$&!T}dXo`v8AtthR(n8=_K@%ko5pSWuqK`uEm-iQBk zyk+12@y7At(xBJ4kd4dM;3mLOj-F@u`q@7UG=SEc-8gR0l104=>xNsZEQqhXoV$sH z$ov1ikxV&y{{}@t z!(XLQKyLQ^dnkTiBirPS-Q?vQGZ;P0=ec5hQ{y~skPk5fYY+Dl_<9D}<<`Jlisp&9 z-OLb$(|g>11)xru8|qwLPj3LOdhE(yBM6!hKRZ&>2LZdqWIF;B`}!j-xkirc^9=VM zcj5k@)UgMbunL6C&XD*5&qjbL+K(KXT?L| z4b`XkuR1z;BydjtsD%8M1anZI_LpRHrFr`RWMV{6CfwKD%rZKmEaHO_Q)nWkTqiw2 zeBX!4o`Ofi*%;65Z;&%bf2lkp#SEGl+{^sUwzUOkW@8o7M20Mzx z4U+}c)jGoxN6(uNtHevgzc8^G*3746|8FgTnWytjjrPBByU!&;G7-g9auTcA>|5(( zBMBR^ATyyRcvb)5$4XHc7IlmqKgs-|HeYB(j0VP+>j58R32L7#7tXgr4C(6YuqG1k z^UK~r`Q2f>3L#JEKGOn0h6u3g9~%Gc-5yw@2l-d5LHAEX*c-@|5fFeTlwW7>X>WkL4ur%;4zR8Gj=vYdF)Wa>b8y z$GX0gae!KPO2)E6ghBYveg|9XsFHnGiI9C_SM$-;0@PJlr4&NN4`3w@!cb;l2?aA1 zx-c044&Q+fCX&E+{Hf_G0f_WY4pvj~A%OO-R6xncy{cEsa$H#f>4TXd<-%oe<(X74 zo}V&2sQvg=T3?zAbY{cLXRq4f549FZS#AWJKkyUlnn?o7#_j@Pj5W^{2vS?5+k52d zZ9(o{@Zi?@ZK1BCGlfO~LK=G&^#4 z>-1tQHg!)Vh|uO<%df6jym#;3?Y-)%snI>#3q$s!&skWri^fixEjWj%*UU_R$O3mu zxLkT~0S}>+JLk82-y8eo&D)r4QA4(DWl7)EC>5O~?#czOY*UI+e2&FCPjq$B(f4`* zaw)9Z>G(j^z?ZzFw5 z5HEwHFP^=)XQ(Z1_>Sh8k}}_vmRzi>ToypZ z&v`F~xyAct&%wtsw0iH_++zRl>Mbe4v#SY%1S;5Yf4c&)$klgdNaHf=D#Ih9Nqbb>Yi;E7m67)2sYDWCC#J5PBoe@rl zxIwRpe?Ax8mtVl(#G_#0ZSp3TnZ0kM9qx+f?6^!6I6GJnv0wCl=YNnG)&B=*Od$7F zVhRIXLbg`$a}q#_Yh+V@n(;0IQk7t=`fSztx~(#=_BjAA%aSPR4(nqjM&(&NHm7;N za+^IH_(a7(SrwGi5(s*Hm2H)?w)lmF4u$wF1EJNr78aSF=D$eb9v)1M5*n}^aQhGU-QHGRyK=Kc zXh(7lB6P}4v?D`9)krwyXr4e;#~uLMg!+MhH)EF}hU7kjZ;W6CP7q}{B_LrF=Q{QL z7QACuta=-v+ty75O3LwHe)bLm!$l1g$;tuz3xfxv%%^LFAkvidikclN3{M&WKAKtv 
z6I&o6hNToZ>}y>6*$=H3VGHg@wPJ+6-2}K^pF1^!an}uC16tRM_Ujcg#x9&aqi>t$ zmM-9SJ?;pKWVf)r+A6QGMAuYEKoIgu!h48iKhu6mufDB$*!Uknctw{-oe${z!d&ekG20BE+>| zfEk~k7IJ_8!1hnjp)&zAr&)tE|BXd(GXJ8ZH}Jqx3Yh9##N63CLSLeI zE4-nVKbMERUK&kVZSeKdi4KfZf2F?Zia?9s6m3Rw;;C9B>Ar7?HY@0^T0AKXn;Wu9 z&H#k3Bd70IBpM+<((e>y3ErWzqz~sxE{%`*R(dqD-?A7X3a@^fWJL2ku;{Qke}RtI z-nSy5(%#q9rlOt5kp%84+aPM#e<^)09XOu;;oARqN!ig?Vk1k(=By^ynhOd^;1Rm|R<#qmls2RlL3z(>f+f2N&G8YyA_KG?yjKnBa|La z{aoG`GT>JM9*93nI>OvCCG<^?P5pW0M~>RlrGVfvq1kGNn-pA`5SmiY!kMl7i2j7z zK}s7jP($~`Lk8scH>khOGa(BmUOW&UncvdZ-5S+&E?;DhHU@*fzA{MIu1`OX7F;ek zDJU&3v5Ts;|C?>2C&>`|%;JGdn_!ibg%4a_*XP)730vS4za^pd3F2Uede3s4Tv4HN z%%IVV;qK?m>6m%{@tbzz1w3mR`+nm!dUbVKRrBD%!xVEn~<1o4YB(^0QPM-R*Nm6Z1)vtwK@0V?gub5$ zmIA3t+W`nC6=_k(3|FMNM5UjT8S=Q;5_bIw zPKG|c=^cOn93*yZFSD|!gSHB1YLdTmNnEg(r^C0Sb=vPU>l>{73R8V!?HTwbh*@3^!|jO8!e&pL6)7Md9M5Itd&#zB zzDC@7ZyT=;y7h|M1@w4aOJOFsHOp(o2WI>ZKM4MiyTc-orcZyI54}iMfw(5bdm0Sq z->pStVFy#%VQps3r&?+6D)Vj`C%7ly0WfubF!gpX}*3Bq<$RrPGt^fJLvW|p9+6<#wKaCO&Z=;aj+XFdUBrygQZu)EK|8U)wkj(*{GM}65c8OP5iHn@e4A;R{&@2IUe~U4 z2}TWZ%kFO!7-9GQXK+oCTexwJA!4{Fas3$Glz8q4B(XokMVJmNXxA*pe0Cxo@U zfBu6t8?q6LJFc1r%K{EFQZ`zBP>jq%mmTeY-csRnqU!7BgR4i)yzm`F3wbCPA)hS!Ms5Yf*)m%eAg2m*#6*Wksgi_ zTA8ayC|}3@XbC0Aa1+w3gmsYn-n{PHrI;Lk$tPAzRa(|O$v<1FDH9Wv86pbzzu_@) zB8mfaT(3D(Ho2199bnGaPM)h(ku8V`Ybv zJf@l1^OUPnbml(tpKnc8U80k*?m~~2GcR8s{4LP`v425OnpdJSZ*W#?Tsk6^7?imY zl&JwZFb0ArAT?>myDavl5PTIeJ^@8mumNd4Ajg#k@NRjTrAw2O0>~=yr)Y><*Z&th zUWIbiRB`6)JFWQDx?Ytf@c^ZVYOO_Klzuoj8R-ollpy`jtdG z%j26blXoQPFNgl9B2+R0QCipkKuPjAW*8{uaoayj7Ov64s~Z{^BtARZ1kV9Aj1zo@ z2-TdMU=JOIsWs{X+gaTu&-jZ2?srX>T#3`sUwDYnZ?o2&r}_Ly35XnNAEolonanu$ zYXAV)ej@NxfRN;Z4M-PO^JcF|1I0$GU9onXZUcW}q%SPOZqqZGEC67;s$B9C| zKDoOs1D-WKCTabG;^w@>`yKWsc6aj9eU;gjV!q{ZY5WU(76MRElvU~vOpXXpBehvS zvl165dma>JY%^8Yf6t6}Fvvp0&A0@+L7*G>TWnmWqh~?m;45c7wwOJCElHX~@@V1z zg!O}oq~GFzphCDiKr4fiByuZ| z6v{r@Eo&np)&_y6zgVVU2h>fB9qcgj?OKJKC3v)4gh zE53cYYN`u{mR|e{)}dRRXA_RRrR9G=LXZwFFBe)-Mdq4UIp)3LUqICQ!-ychdE8nY zu}TUkXG_S^cRtpy5Fn=;qe0;RT`Ht98RWSjzl>jH_t`zLZ&R!ET7Ji1?0fp7yc5t> z$gbEJe^*^c*+A-qk%+l$dC>Aq0PaJ;&TspHYGsyb4=&s<-uSKVjo6oYuQJL$(E|hl zA=a6f0B7T&g)Oytn#54qG3ju=p(&wQZXDVh>^UO_jx z02@ZW3bQ_DK029ypd!#({{OUh=HXDbZ39O{c`OYQm8C+6QMOVT>(f{&yJFB2DQlQP z#xf<1rA1jr_85djWUZknLWUS2TPh4w6vHrsd9SF=h6_;bSfrG#i83iT$9ytpqk_ z`=s$;R9E_p%}CKG%gqxp9`XBp&h6W`7gj{nzXI^YmXK1e?BvsUFF92;9EO$Gy@4CM z+P&Idl&$Q53TU*@`TXe#JK{Z0(&UXVy+YGg-1O?UJNH@?vc780P`u&yKr>#);kGQ< zkTFa-g+d8qkJ36-9+0F|hJlA~itD#e?a&QQX{aL4S9LDwPopnq2Io90mL?oIdo-FX zLAPM?)Ar#I<6QDV1%?`Oplr?mblk?1+Yl=410*csCA=w9QRoCcjiDd%aYD!?v=gmJ z$Bj81WweA9W}nYC7$DS@i||ngT!WrIuKT<&8vk`XDcfg5Hb2jFM^6H;K=N`r8QYUU zFPLvy!yypmiN^aTM(Nb^et%Gj1$uWj$2_lW4e|%AD{Dd;iJ-9x|$n5jrhLQT8eKfQ`A01dQ7`^WkZ)LdDxZFL5pAs#qF+c4y=4HKU_L6 zPz@fB6=AwuFfDx#PO%eZ;_*i_4H(gNs+WHBhCoBeH;4l%5g8az73%{$ajT{f{g4)7 z6_3a5{bk2N)mWDrS$ZW78-0lID&WAby6VyYC(SpmEyx)`E7MBS9uF&TgMa>bVIWZ< zI}F5Pedm?tzF1of9oz{31&Eh1u}yaLPsL@wdad+91vvq*lJZ}7+7iaVzb?y{f@$Bu zzb<&cCHfjGM%*>UvUaeYl^kg@bh&5`>21Nc06yyTxff+2EJJg1?^?;!l#~vCZD)tV zaW)^P?(LfHHYHHoT{z#Y2c{JTffXBO8BMy7WLIes-y}HkkYsDNqFt)&zb%a{F9$!+ z99a9UVXPIn7**vv=O7}iq|M)G4;RyH_TiPLy|un~33W|t&G`$f;;wD-xej{+Vm0*X z@b1cRyf1O0A%he(4GU2X2%%mvvJ7P zM0}CwtjAtc9s=bL_YqSXR!FTgc-t2#I*GBK^sEF+oF^5jC#v`jMKsG=m&`qEWIZWP zaH^)ww}h^wt4vZ-mgfy>XJi9Nn(}|9jI;hq8P{98$He`-2`%BSkkj!uak`LThq;)% z(vgQc?>gHD4@P=R9C5#L>9)w#kAvpeBE`-tJnD0Bym4W zL&v{EEO3-E{1s4@g}xH>LyC$sOy~7u;_lCG!}_I5H$rg~S4`Hk?yP|?o(XoYWqPw$ zF4s*8BFSr7`QI0%&|{#Yhd=MZEljlmQnHODLQ#Y4GIsVBg8JufW8&E?V|v824hMWr zx8uR*n>%`zhSovi&E(<<#`Ol#<3pZ1U~w=^_gX7d)+mT#M_v}!zJ>EMKc?9+6jg6b 
z)@3g|3;&fGz1sToav7~$v=!S^aFRIt!-(iUSfR(rTKoC?>is>d9WhW^?Qt7; zAU7Xf4wwifV=JH*Bw>~=##9FVr?b#SSG?Q`T!K|9T=NW*u#__vOD`qty=n#2R?+>RIOsA(`bHb zkXMHL+m~KTGPR+}Rn=vvDT>D1M9#@kE}cmY8jnG@8^XU~Vkc5E`@5Dhay~~%;;^%l zyp%EGw7mPVDYNq-@ESbAV0#(}I(Ee4$Oc9KE8tUlTww{@)BlzcbB|PurK$Jvr+*|m zX4UUGgObAkQBt+mHT&2fZ|iWNfm}GtwmjCAu0WLt7eUa})rpfa4)36;ceMH*EO7EW z1;RJ{NY&sTt{CNCYpU|kHHB`8#o{$IYD+#1FRnK{Y@t6jhiDm*c=cmF!ks>mRYASNT0|(Qa-tMH8NVs6TAV2@F*%+ zh?=avScqH#aKwDugD73uGIoUWq5ys+t{2m0HlyW)UGM$C>ej>ye6f!0YApn&b_%kH z?@C3HKOq+Xq|VOWCb>P3MO@_r$90`d?Go&JyX8bP%(Bh^r*Y03e_PL=OV(B&BtoM- zexT^6N`OIB_~Zli9c!%%k=GjUR318&hEhADu~5S^%Vs{vq@Mmz49KimK`bhC)P$v>|WS z0Oe#GUK+X3*WKSY8ui<{m=Dv=ORfcv{8?_F{JG!$-sfw~{z(3E>f0r|-RDCtIkgs4 z(7KkvFtef6KH&}XErwS65_Xr<@41M-Ry*?O>Gh(hDV^D=S;aNtO_`=ckt+f>Ke-)i zQ4Olt!yI2{-L>gnVC z;!3h!h@32lu7U=J2SvC9A)g2A8kXO%-b+fqZV3~7L^GBb-X(NAt?0vX$3{xk6lpz8 z__zDToa#yr;++!DI#TkRy3tGzTVIL(VBbB2yh^v{V1qnKYHJ;QVrsz)vVPyRWhtzfPAsV6wMngP@_ zzT{BAm@4p~QqvpAA>j&BZ~_t`;G{ynI`b~R;aM7c0bQp3jZCIXS5VW=FV=HiL;Hc7 zP=)a8JO%KNf5WdquU(jWg#ouPAD^1)`@KN+ry$v9cI1>+LU3~ZtoJuwK(GQ$<3<-f z=T0$_nFr+SRe!GSH~;`|Tyl)X{oNT_+%Qt`_@y0y;!VYc@8RS&geYD^6`b7e{Xobn zGbN%T*Fqs7AB~op*D65s4FcHM9m{VW$vpxCUmby+EZ2gr8KxTpu<_i}f8wmCV*X<| z>-u#8YFAfRZ}Opg9;tf{nwY3+$YDS+Cwv|G>OxGk2Yz8fG~1;e))TrDg9&DSA{(p*PRI_|c;uTn&QLGu-G_BG>#0Tj^`D+`9<-u6tAgnC5_+ zj=aS8cMC#dXvFEAgwB_S9mj5oN&^~Cv4_$tU(KIRtjm`x#gWsJyY1b_`Re2V zg=>$s-p%bJt~L~x;=#eO_ZZUHz;@vENj4m3~f)lgB+S0Xzdnv2R<_?PoV{g z8NcLR>6j|@alExA{~f5pid>t&I8egs9_|;4G;u+|6&2g5u#i?_n#O@Qh?lZ};l@iQ zmB@U;;bzjlBKp;z7{1~`s}L1h%F4>}q&q0`9-uImk{t8^+nR+J0>t#m3qcw096Wo> zb<#<^uTxS}!#Gj!nUQF6nlapv<5~sasGE0sV1WwWMrd8I)?!sJRq^cZ86F-c@xBo& z?i?BZnBYE`7+F&zI|=BI6Wge~FS^pJ5cGNM-gYu*l_{#tfW&`7bKi55mxsXs+a!}` zW!&l!H5cwr_G>Fe8bAX)9=Gu&$7Dd3RQJrRM7i^Y{vkd9hBt75l40*4Md^FllFLy> zcXrVtcSw}l@u2raX%pI@4#r_n@_l#cO?bdeW?3ZB(tTV6}XKizpm%vBoryk z(#%6~siV5rc)FnY*HvA)y2_9b)SFyjv%SlIV35z*SL)E+*}PkHF2EvyPi_`=bKY)r z&lgOhgb#+G%Z)Q$Ew!0Jd z9b8juek^ExE{HvGrQuA%bM^_%Ty^=$0>vRkfcXH{yf4|7n^S?ic~4R%M|&=eN4%%{ zb~Yc)WPDK~+|4;4&-om9zx@`L-AR4bknY^wPnEc0_MdSWKZFtJvOf2?^=?h7pU}-d zMvf^j%dRYu)zI9Yuva|0`gF0` zl06bhd0x6T|Io&32ip!fSz2_U5;;*utDf&|6kdZHv3ix2SaD(4`wi~Zj!K;()>cnk zvWQ~=j}@kYU=4=ai<#kc5f*UDw=>|MvV%^;s{*tnc6km9f_k*9?< zKOxtZM$NoHWCM28S?r*-n6kZos~qKS;4Hg8VoF6RO*=RykR7>L(R|faHtTSEzl$kJ z_^ky{Ky~;Buy-%u#4ZlRaYowo>VOt3QGJ_6|fjXS6#OWrwdIK^a)V|$rB8TYc4MGEku|S9z@|&{6@~1vKgv+PC)o?E zy|0VbO!--u{;##Q>FV-t%1@u!$GreL7Lw<8y0L-T5`D`^`)cSd+=IGgG7Fp? 
zQ0{;Ka3gVWjxyzm|M(;LSe0+{>zk6<+ytBIv1ti5ZRDmq0Qmo=JJ@swo10+M9c)^H zO?R;A4mOX7P)p!uOt2XfY;J Date: Wed, 3 Sep 2025 23:23:55 +0200 Subject: [PATCH 13/13] Add create instance screenshot and update managing instances documentation --- docs/images/create_instance.png | Bin 0 -> 70209 bytes docs/index.md | 17 ++--------------- docs/user-guide/managing-instances.md | 6 +++++- 3 files changed, 7 insertions(+), 16 deletions(-) create mode 100644 docs/images/create_instance.png diff --git a/docs/images/create_instance.png b/docs/images/create_instance.png new file mode 100644 index 0000000000000000000000000000000000000000..c1ce856c51185fed5e31b2ffb8c7016ddef38302 GIT binary patch literal 70209 zcmeGEWl&wq7yk_=gamg91b2tv1b25mxI=Jv4+M87I0Qeyf#B{CAQ0T$g1g(Z^Sd{> z|EZ~YHS=cbd8&3o)u}DrYp?G9uC+eB!xZHukl^3Jzj*NiNlH>w`Naz;5b*C9_6_h$ zV@WgGix~z_ZVZpH^B4kF1AbV%;EekU6pBw*v!s-`)l4?C3-l!U#pO>N>n0@VJomg7kB9|Lt%!YLZuvK!VZ{OR1u3|dad-F|Wg2IZeV zoLE6&f_&vt4-@{pz6Td*YRFR)#ZHjaABTAO^5S)}K8D%--v;>+yo3k0+kXFG__r19 zl!5_UGeL9wf4bG7FX0chU#aaVpZfo~NvJu{UBPhq_sc{VC9uWbXL!}X_=4B4JN29I36kP zPfwu5q3Y`DeoYrJtoKEIZhMqS6-^E;>wM+4{ykv_hu@JmBg6M@{)7hNjYNS*`o|)$ zVCcnguztJmU0MVB8%$7u3zfaj-;TFQcc5yXZuGwdgHL{~Vd&TE;JzyCIGZXkHvW4~ zK$;*J#QffM*nqD`KJ$Kc?%P{m79DtcfzCnkzcz)!dr7vvSAwoKGBOg1*Y6^mY!{sZ zHNIqeg3U>4un_^rZR|HM&_IC02}NupDdh^&kk=^WEas>;_-;^6gXbbG zCU({q>h0Rt1?=iE_&inFtmXw*I331iy>Cv&7HmLGYjX)@p*YM?%WXa(k!c94TQW2K zq7hhhp-B7K``Q>3--7_R^DbBsu=wgyMv3zwPe8BLGwSAQ_xURv)6#|pxOMUa-9NT{ zV257hVfeMYE4=bz;~0w0IM^9f>sY}}i@UIXQ>oV+z+!6kCbdH{llHZI_I!SSGQAjw zi}QJ!oqaF~b5EZ|8bk>bF?Hh*b zrn=rhm>9^%Q)&J5Z|(44$vD#Wo~q1HA^}FdcHh8Gkv@axfP7dB^N{g^h&-zl4p>|k za@WK882F^UCLSp(E31cxU9U#kVV<+HkhaE-_@_Y)aNtriN@^4E-uRgJt&Wj7ME@Nj zz}VM+tzMZ$K=S2MmWQhOqw1hYT{P(s!Z{62S)sm%HL<}t|BYSq-hkS&bSsnGg{mM^Z*m%>z`vaA6Hy(@ou zguxvzn(|p}@IW}ka32jXV9X~6fB@q>hMm;sL>{gl}*dVQo2a{jGk^3x#$C4 zeqs-Y7*H!}8djd4c+uL75=-HQ=KXl5wz8AeQC(Ff6mrF&*ldQyckFh|Z?IG0gTvxX zPss0$*!Xm_Mj;xCN=U$c)djqIXD_LE%C3U*kc*XVqYC8$7EtxaH@=T|ukS^1Yw>>C zRBe*7O&gw$tEI9#kp!b0mwgy~Jl|&O4UNNKN6FtDOheM;t`~~NPebFd#^G&vmef8H zkNy%wB;e~lsI?fv=jE)ruLbS6E}~Oyt4hUwf15x>;Qrc1^vXJItVK-X;qF!@ z;^GEwef=ua>&sX$(t?hh!j;Um?V(F^pYL3Q2TWgddN?pSHzC}o0TT$2b?ZzP$g=Mo=L^xY{go#|J%@!`erP#WOg5=u9yCM+y9$G9G)MHi<;~M$^HGuR!Zr>^P0Z7MXJG$L zo^Om~AL*%n0_CW1{n4w$Sc|YWA8sqUFPLG0o@W;{XWMl7&o;24OTp-ft|W)fq|8-? zQ{^k`^&yS7&`1Fk!Z}L$^8;yQW!PqqcPKMc^lYruvAB9 z@6S-5hpcibKS>2me5p-WQiu|f-Vk?r?=cShSReZ3>6OkCA*&~ao5iBtZUzfiRookyDILLltrKKa#qHo-gl!UqSKk?~ zw(o+s)Lg5Q2S&&e@C(B%as0R+SzrVUj?VPlSZtJ;Z8mSslBigusHGT<@3343zwC=5 z$XhBoJ85@)q!j)PU+_99H0*M}KF@W@tXQL-+f0_I?6pW&53bAg`Hq6nWhvIhN`G{= za9m!}#R`?ef$%(A$$NuUNgJ1HRbF(Ea=YBZQ7Wrj72!}RmXp#~NnZwc<-(DY^mKNs zmb&r7IVaRD+R5q4uvFdBJ|D3kwI1iuT3n6&O@aZ7TEl}ecIqYbiU_)Bj#SZy{s4qx<;UVHc~Q)tE4R4XOXs$-E;ey_xk>YFqpI4! 
zH#JgRBf3{P9o6A@aJODCBuB^ti8L-B>T6Wo+-&-$+2iP?4}PVB=R>8RpKu_=##N~c z)kDcx;i!=a3ay>|xH7Rcz1(<9*vC&#Bz^S_DTt;&=b71eCRmzM92VfzSrF_4x-ckq zXLA^}{uDcKJ9HddHuXYmD?UtQmfp|H~4nnW^Y>gwKdr`2DckD zy@Iv8tx&fWL(`s*ZnZs>sp>W;a(sm6*6Ab_bkZuz^zDsQ`aWIgldX;yKc-MEP$4t5 zJY4hN0FQmO?BNdWuDBz5!l!{LW4;jBB);i)n0l_ zUaE}RACuFZdrT=z7^&aodDVx!X+zuy!Dti*M&E4N&Ke%nC~;owhV5FFh7`h>%N@qeIl>P!Dc!n1|dDS1QzK!8oXhOr|tXjV>Tkt>`{0yQ}lXmtj zj?o?+{LC<*QT91|n9A=a{fN=v&@rX@9!jp_D_m@OeFlh`ejaE!N=N}mf>`rILLh^e ztL^5sA^GAxMELtpZ>2SjxEF9?I_dzDR$r5;9aX2Y)0}-txa${ub#w=yg5A6f?xsif zcty{4&MsgoeU9QKxoM0wu7b0(Ia1?>{qaV0P78HA^BEkPWI`!aWqyS!t!dhLqt-fd z$^g%+FdsXq7yh@xV#Ik~v+OZ7=%~EPAbsDq#_k&9lwHRTkgS-%>z*U_C>edpg@DP|5SF=9n`=Z%t;+qG+3x+ZsPJT0qGJ{Y_CRqc!mRUpy9zk~XAFD_=( zH8%2LGpHkX(_!azzg}1jYvcnzUR=ZtmbgrRej`&!(b#|VOtYVBbRzHY?NJ|JoO^4k z{no;XK}*U61CAIq<;;Sg(ICGkCgdzvz9-ULqv?X$s+a4tuIG&R{>b8m7RlFEjXbX7 zR|&aWhrB;Dp+Bbcyx|j4k+ApEGeS}v`vMA*vers0a3EWzFJB_4GjLd!qYXa66_d$p zVL2r^DQY`Cbl;nIZjWWHs3FCMP+WiL?~m~r26h4#*pN4>#LA6UK$2-mfbqsSxdwWn z!Q5dBEMlio#bSiNgQgTuJb&sZ+k_W{nHs&*Jq;XtT;rm{(Dv9TZofFRyA7Gtri-5? z@xILtV>`V==hP-v@{I?sw>amAzRuRmJI_!+=VvSqeL+&mGT1c*NSuw#9<4eEM;cz4*TV>SUGp3c-vL_WdRhzQne#I$W zT{1(-NF;QaZOmstFgo*E=<_%D1ZuRcvnr}G-rKP!G||N4>Iz~qX?KV?&ga4i`|d78 z4cVONgUcaz1J!QZ33rS|VR@K%CGCFH>P?UL%KS`|gD9|a-!z0RKF>BS`~5f$^u9g_ zVxv*pA1gN|%L@PEp`D-|(DI1URBs_FPVzA)JDplaMbvGD8uaU_z{O!BG%1ck-k)jz z4KO#ZHCV0b6auad0j$8m0T;JQ=W6MDKBH1B;viE*irpZUaLoALi`QL!Fg|l^KuIob zS~iaPfqH?D)}!N**L6sC+bsE{tU7A?a4ox&V=hRE)@%yfM zV!k@XZjEly;p<3S?X-+a*-FevfT+Z;XVhgrCJ54$HNa8pJ%iSzBwC$} ze3wY>>EAXG|CyYAx;T<9E2t>WXS8bPFel(zx!OtU^4pm2#q{-gbQ>n#VcKi1YBhF? zx-#A*ZSAUuZ>$8FdDu)z{WnwEFjpsbqwi)5{SK%{m_<6eXg_zw!w=5Wm%-;&e=D$F zM(~Fw@GPp?d*&5SU=r=lyFoA^TZiAN-`FC#Z`trtt+0dH$H-U>QN*N{DHOt(c^8sb znWq394C2t@CJ5tyPrai~dJ_|0y?#{kS_@qoy%yV+<%QLhy57_FMM}bf0u_^^J^0nY zO56msdjmU1bF=`d{+3Bc$*WZP!pgCvT$5fL~i6-*H~a!Xh&!Pe8PKPUdY7e zeaV!&J1kHzI$Um6K{zU9f4CDqC@b!(F!2e$-&9&gnSxR_YJTO}U(rQ=G9q^8loK(o|(}XEDt(NHU zu7C0b+VEE;!=WiJ3m>xgMD$gd>UI^DfwDSpAoljYur+TW@EPL6jTN5|glOZvjm$9l zxY?(264Sy8uCj%QJdU(^eDX{X$^*h`lgKLi=B~M%@N(;I&drYDQWb?dIypTuoL-vl zTEW|=WmNKgXpUu>h5FC(kVeyZ3KTqLlY^|w-M7{ooPK1c19NB!by184+sS?rj5*9E z=C3Qalx4-9iY1*O2e7H(SL>cb^wv70W^|Os@32=%p@(onWS4 ziI> zFU#)~I+n3k0%5i*D^Rd1pgpuO_xYY<`-cTMJ5jyow7ukMD^>5aQI%NN36Unx$FvD* zi>=>Q`>+uX)Cpz>Dt?O8J8n8;)@6DQIOD3i+G>OME0ywr8@!lKeg5o^cZ?0IFZ$Cm z;-S#tf4usBJ9YiuA0)fOU{~lH_-)C`4SjkTR#7^qRIx?oqVkv{!FRpu(-St{hyZt zA$Tlkdv}lr4Cof*wdU|GFsqz-#;as31)1`MYHq z1nV`&T%4bGH8(y1#O1@sr0frY+KDaFk8ByN^|c@V4Tt=$a_a~6C{Fu%CS!2lt0h!6 zhCblfuJ`l+*|ReH8NmpG=Lp*@+7+*bS~%YG%F7n#mNkDa49B^u_mZ##vv3V#f-u4Gb$55^v$E?%pX3{q`H5~}ra&d9J{KyZ7vCx?X zJQAHM{Qy3Q1!P&L*KusWpZ7nfl1;t0+7Z|Zcs8Y4FMErHy5XZUHfx)qoz`QyGf7B; z2WsxikAaO>N9#5E+7*Tv=Q~Rn2EFp#$s8%ff>AzBU$SilUZ%1>&gy+BIjZ|0a`<+- z90!myzD^{K>jRtS_YZ?+{+;SRVvpd%+F}v&_=Qm=79j3UB~oi$k6@NX2TPu98F4il z>oDRqJGAoz*hK>GFO^2pRzoIs{ICU?b*x!5fkn=F@253B{8ZU3-(I zhb;MOMeO6*3JA6|NAu43ol+6KV+J(MCq;Q{5n8pTL63#9Rh~Go5DVEN``;1n$`=c` zIuNXOHa?ZDGVC#zxaVN9jEPD#DS5gfb1bg;sm5|SvK2A92S}uOYCo+%nu|B!?J3ZT zOb0M2UrQ06gR~xyi6V}B_RD{y_1Bvle*4_VJ73qiuxZ~(y*>ifOTPk@ZRlmHL=P_v zn_3ds=Lk35s}%8f13Qo;zx$6S8mx)WpRcKb5CIOC?$W7pz6i{2^Uea(71&1SET z^Rw+yZl?q6AXKtw0ry{6yEPS$k@cjbe18v6azk_^OmO5&aI$*s5C3SLLh1;!&h|DT zw-Jb)s}`x>2XstNslZPRCRx?@Os}<3N`Nla`das3gYH?ncgHN7_xg`BJV>L}^M#JO z47aH;8|Np%<3?s{UseqS!vb@{e~qyT0Y~$0-v?5vsfw6Pj{$}kf|*maep$kYtI=Sy zdNbQ(TY&sb$REy(F(Ir=Y_l&0VKm)!W%k6{uMFzj1gUiUSYsEOBO@4xUrMV?Y0%L{ zurppAU22SufEoUtR}J~#x>fD7T@L%)?RVq!*7z@}K7>H~PbtGgGKRY2^mr8uXeE+d z9`TtdLql`SPdW&+&ZU2{2C&lK>4tFq8S$ zYB|2c5eH}InaAQFu 
zR}_9@K$!J3ffR!Z0Mb1%Su_p;7{Z71HEm!8E4Nx~y|&((PP%q9*S|hzzS!qs2f ziyPk_E!~-}7RY^>7P-uOQBkKboXpww-BzrnQbF_evvR)GcdH#3W9WZ&Q8 zt0|dgeibT$C1QdOIjvu{K3%4a>LPJD>#;QK({7thYwu7gcEKSTd`r89TMW#_ccH(i zWH2+-_Ln2oE6;Z)^kPMX>?;2pXPRaVsm-K72=fj;oX7)aQ@My5)M%)0G6ZgnN7KG;_r8qcoMgSmQK;3_StV8rx-sWa$OEM-~<5d!TFO~eKA#g$=Q z<63j{AX1!0et>v>tuQ2$r=Ny3^Tt12%j_y!Pg9G8o8e!pKOY=8UyzIOU+4^Y(DmB5mc zb1EbODp21XJ*I(kT$#-Y@@8&>sW_Cm?D2HAs;S*tyT6pz9HS7Id5En6;?I6RwOR$o zp;R8U=f6Hg*Bmyu?knP3EZ4~9JM+YKRCzuD+9v-oS;Q<8CjWwv@aCmN*hX(Is$8}EOKm~h_xm>c zTR75joH#&#yu^S0vvz%HQ8>`LgZ3}$_eKRKbI@nh*>PpU}ues^>kQGbf2L^%*--QX4h|b#DrYRcASV@<^%C zX_nXYo2GUVAZG?vw1X_>;A_UJqPr1BYMRn$6|S5F#4RQykhN6*Or&orWX@9`d6ON(lx{Y{*0ND5Q_< zM&<+v;#XEA4rmNwoDS&;%ji4I?I$F>=U-$5skh_0Y-C6oA0^;1Xq7hQM3fEyKLS&Y z9hO#yuA9I?hlvU#zS_@YzT4DtYQ4VxQa6mq7LFJ}s6@D2r6e<%{i^r__?O=nvvs0# z;mk*m9Q!^zO;NZu?bA^w_a?PitPqV}1hUTiTC(mB@p9|qNBR47@MYgMZV+JI_gwu& zBio(U^k$JV?VQ7LPJ7Pir7w=1H8hm~7L35F!RB2ly~c9duk9q{ZBq3d3h4r}gV-NU z+CO-;pxN??guzd4P(sJFTD?$61-v3e>AQWmw-x(hd9t`r654efzAZgS>!g2`ElsOe zN<}Fd;qz>-u4Q%6Kf#W-1i~qm_w3!wJTbouc7H5QP*bUdb`Z6!#zfwQ2Se;L;bNm# z8yR=vIqskq)`F`^S(?mY{khkSHV|dAcTdg!V*BSMSx03ai-S)~7ww0STIWTyWopoV zZbHs{{{j`swZTM|*#<{E-DX$+aq95BiTCV+D!xW#wEd2h;u2+s1K%H9eiO>9|4LbQ zQ;}fTak^2S#+w&!*tT5u24#y0V#X%#-yTM-E;;c6)$Y|R?O;lmobT~QO1Bkuuy_yr zX%80ugv6-AuYckS43TlU^h;v-6@n~dU*e0V4G!mq=k$&CD6t2ptx3EC zA=y;9d!512*G9eI7}=34|EvCp@eyPBM}kuMT+GM;h(o4w{QHSKV#wAY!{;p1A97e} zXR>RN;>Cwb-zbSW%wpLz5_dT$SE!a%eBtUH1&bLpJ{LG(oUs~g@f&cohQYNWG46_= z8awVy%FLcOljqg% zMauV+Y+C^wL%!%rT?@v$m)Eh%-ZufeJ98y0<^L|=z#Vuszr{u8}2aRAn}y zXU-VP$~P5O^F2l0fI6+h7PjC`EiU*`ypCYCB+Q!9Hr zp?3s3OBGGG{v^uGVqw(g7UHzl4g;dP2BRMz2s&|@uGy&4>Jb)c$l;PXXi zC>xT21!!`KCDYU8E9#(7QEQdjG3S`#_ujF03624TTig!EdN0f-kq6a_QLW_qPhx+Y z7S|#93gbXNu;2M8XFa4ejw-IRyWn-ZBn+u8!u!5QX)#_$%IjzmC?1NnGQL^#egB&> zr1nU!XmB&`_YriVGgz!(oWp_Wv)i))fP#nH zDfO}~saz$(Q;SID&ZC|UD*VK7G(odl(iJ+g(CxPHur}|U$Kt!oIT`EAS^^eb;)OaV zxI@14ws|gVHiI6Bf!@QJ+#dmfo2z24kIzOA%fYeDd$@annqi%g+XVhquB`8#8iPr= zGLWZd^Im0NY@xM<4U5*{CYPO%#RdND;`K#4+;9mYt=*dI0gHXl~&(~t?Bt}v> z$g(6Adq4}H$65Vf zFJ|91a&!|o2D3?8Zb$uETmni?*uIv~`eM4HCl8;|_KkcRJLWpg%ifR8h)N||5|bq= zBx*%mUDG9v9-G^EzSJjDu=c^YUd8x(#M!?*w{9ec(p$0R38z$U)rG;~y6SrcXRgEU z#7f&zBnVL8p2??%R_(qL(@%RwH+Bs2BaxzgG``fh1qn>aCT!1L`tXbOS`m7q(r>Iv zDerosiTudeH+xG`oFfjejhKPP8_+LhHQKX)1@>uP<|PvPxy6CDKc`9b5Y!vUPBWqH z{OI=^f3>|~(sBkG&M!<=R##f&XB)53b4OFdK~_KvoUp1*4(Amkg*>&01u9TMPNAz< z6GgcRLa`WK;I1m}vfxr4!=?A3?fXcX;-5h(Oa293eO=Nqeg(g@dn|#8??M>W`%DI! z#@m8|$E+)|cs(3B9kDJ|vwj-CFZghT2h zHzHN0^EkhTwCKdO@bGKgr#nt-d4VwzDitXQ4v zZ=@m&MMrw!dvT@b{mC+AD+GrIm0v*MVxhai{&ZF$9GB7g%EsmTdswNlTc%RfQM$@> z&VsA#kAYDZxIxu}^KUa^D8<|-<9n9|-C?18wF+4(+e>p(#+EPLoj$8ru_HZ6S&Al1#e5;8`N4kzt zAviggjiH#sMvnKFAfnctt+Y?WRnyT$i;pC@T`w({>&j_Vj4WMFOK zI*9vsYxST;&xb40nb$*q?qrtLhI#d+JZqi_nUQCghEhg+c(z=nS$=PwM$!5SRPMP` z0O;1S4xM5zVdL$8T=OZnlF$ zNT28E0LS+P4mmivh4(lN?|l_<8CNl$yFH#PoH%bZicR9h7+viQmg+vuQOY#Lc+af% z7~~*97ieU;=TB14$ej99cZGj9Zjn&oPmhWvg>FM&$wv@L%r?0;z8%lK7@Y$L?(pM< zT~GR>UE)11i#LW4mHGsQd`SNOjMeqP6b77JvEX#i3)>D>OU;3#`g@?EI=7d$fFhZr z#M%Kvu*v&A5vQd@<=eyaZQxAS1)S=BVL<{_lY``2@=SNmUo$)={(>SCYt&w8JnG1I zSqA;_tW2Ez3Bdy5he{kyWNi$kP|RoNAOqsAD6SIp5+S0aK|1E<_~!c~#3v+y7Jb_F z6>q?y(G88V#%h^A$A1o$Mi_4>rO=?AYAGV{&tZqsR~QQdNa`LL92M%k4k{Nu(ST!< zrQ(cQ@+-Y$uJIFDoyLo)69u6{AE)wn(mvlublfjYt6;X{5`zj!T;4L^=qhGYuV{c- zl-k9V#_y<(uroS+7Kder$<1sfNk^G0o5)-Uw@s$)DpNu# zCf}j;?(oF9VYW|tVtF5dab@Bb6df`i9Kv3HEZ}}ktXYBg!>Z*? 
zXHfU>K8-`^=cv|;@pL{ds0^8Y_ufK5q{1{k<#%w}*)ue<1hd9`p>Ne6UC4{ra9H_e zx*%h=IHc8U@x>F@&0)X5!Kp_b87{x>=J&0Wf8AisRhi|^d|&OPDO~)4&ufu-H}oz_ z6pdo&^)ZtvBg&ssG6$Rj&_}X7rLg2*b1MdIStEALKXOI&32v^^P3E#&M$~^vZXy{_ z{8N$g{Ui-4ZnULgLcLThZi4o9T%gU8kg|jiB*j(Qe@Vr6+qH~*=obAmh2rlas4M9Z zM1}u5=A_AIF+a5-*U0ty{w-*M5D^<9;%LOp{O=|90&MWN9~11b-snKg{io_BtVF${>f4QX_HfFw&Gz3im(M=>JFY1125x!ihj0dP zVwBFk%-@UW!JrfuPGYr+`p7U(^S9iF0eAWq!&R)WXvX^7eCM&XBHPR5G0MSRIJsx< zU&!c#tPln%Da*;&go+W#6|8+<$FQGA{?B#am%>6K;eD~N*h$gU$d@S*9tPB>*Dli< zd#uD#+hkKXIRrgVyW(g@g%dNfYVrQ_x#(;-;Ov4xiCwDGnBf~h>l;sHO9@Vr$cpI# z+h%|7)H@%QM_rLbM4$}Q<&PLG8r6mjOh{hk$VLIDb{?E(d{6&;5%T*Fd=2-Sn}Pg- zZ5`gQDH}vT8Owg5NaJ(;z`vmj2tUzj;q?n2IFYFyTutPS63?|xz^f;}D0nY(MB)FH zq#@PfPi9cMjSCJuVX=svF`M-*%V0mdI2)Aea*+cPzx__S|X`BT$C&V6zbYeF(>)ZPjv8hj27% z4ccC2!El3$b&wB##}79MuPQpt#w1J-LoEFM{jn%IsDJV?K0`<#A+LD!XJ(7QdjPF@ zP%BAbYCN})OJoiU!RY+SZQp=&OR(|rg#^87zB17X7F9>3v0}Mrr&cPT;{wqRWzGPF z^=jr`v)ECc+X)lP^CELL1V9gA)boUs3vOE2(T0kin#)!=5 zjZ_S=UCVGx{K#EljR%az0)aqkhWL-LBsME-E1lNDTCEJs9OZmNf?}Np^v`;3pB!?` zkgzxzb0df`J8CHN{sfH_!9LU=&!FvjyO+bLA#rY4Yw@*2uX;ORuT#8 z8DqlGhc^q#9Q$1Dd86rNZ~Hw0oKXCa9t1->!>GAkH{O+E(Y-RfS-P@jOk#Va!OVM! z|F-YzR_ddFCk%HY_!P&jAqnkfH+6TguLU0T)TyC~VNH@RjQq0|XwP%W;Za)fF{<=xMV{o1B9 z{1>T=FF&vo_%96O^?_s8-1lmGXX?@s_e$uLNgvd3W_ zI&7}KS~K7tPc3D&!Sw1euyX7eHrIHe7urv+)Pz0F@;YMXvu@^lI|7g|M2nNWv0TTj zB>c1F`cyvpzCrmSgpsTEi<>l%q_{|^Y4=FQhTMS~BAeXE0`02`M*W`9aAQE$?_W>V zIUGfBwR8`t3x9-@Ah+r^*I?eu&a&2Rw!74Q0{=dN@d4kxe&|3Q0)LZ9C0t ztFTRjME+d|@G+q88cA>mTftr2NtXYcC4v_LyNMG78>|PyhFJf;A-W%u67=gEHoCHd zdolm@+9csURXUsN2!my{RB{pi0NMA9MKfS!h2w`V6ZR!h2v{vNZGI8_9}AZxISA{d z6J9(T(W~1~Qc}QzWa6gtdu|;D4xyzH{AU>>zyBN#?CTN-5H{PxYx&>`+vK(|A}-F3 zy88BYY84u82Z^tL7et?Ec>SJ`sOXO$f!osuF*+W3Xg!^6{Imb3y|0Xls(a%_7?eXIDM3Ia9a35mknWU0x+DgqK{`dcqy*_4xO?9B>i@s)=ezE`AMUJK zd^pU^*=L`<_j&g7i$_%nC=VyIh%A@?b`8S<5eyVCSpc3tY`mBHq0(S2-rwJs&X&d2 z2{|fL;uCjY&;9QfClaAx{W-)nTA0X3D2F_00S-r!Lh=~v-s%>asNW?q-52QJDWTBv z9wHrzcZWdV)T}A8T>pWZc~4L*lj7~y205W!%D-+Qe=pK^B+$Pi{qcvSLPk%U-8(Br z`L$SoT`q?D4p91brZt>I5=*DHJo=_$Sz*~331fLr|3Eqj3yY|;zat@il=07l>IE@u zFayEzYQekF=g5%44MCc4w-}yrzUa{^;qEPj7 z+@kLbzEKBkjES+9|&XCfO(i5=+W(eXo|6k{x%lZ=eh1P@qNca}9b>AG_pabZum2OI#L z#wiBObj$CJ-$^HFKG1uOGwH%jLq2nTD5ri^@#PESjI`_)AQS&iyF1&y^GC zNQ4hBiPDVtgC{3TJ!lYO5WU2*#fP?s5Qq6HYM^lHOf#DNC9bA?CtoRrx@a`7YpuE% zeI_GE=$7Z3PdB^=CaIVAlqoR7NxLp;cV8^6XPMKNz0uIy)qP-MekV=v&S+=US77^& zteAlTT#8+ztih)M%3HiJLJkL*jsndnNYK404VB;xKTY8su*d@2!hCJid9n z9tm1-{^}FVD-E1*b0t9Pr_^wOH7-96hY^cV0Ul<;OTkLAT}S|!5qvp)7~(d~0fH{4 zXi5pCk?I-Z7K;+2cOO_nbl|49cq}`{Q<52IqJRErIRNi29pM)*oQZAfcW$NM7+3pjEA9bc?&T|>x+>e%m zyGx7>fc-82N7T#Qb~tA0ybLBMe|d}QyOZ9#*d1C{UsVzd0QDBKr4AB05q7wfeQ5^z0sF;1F=pK0JFW0-Q z66$fK$1vm5Y%SPr3+4Q7q^EQ26E3+K-5%oQE#G5$(AS`r1Dp`aRPeAX%A^OFiCS|hZ zEAYIII_NRUESo+dArO0_f|Pk0INkMeFr(70IcdD^sZY6OSL=r^{Bn!cN7hoJ)_aCV zotF3PzfF}_iGyIe(h=4unsuagOzP?Ox*U!67+sj#dc`fq<## zO!I3Z?;2!8CtG97nA|!Ah28F|GQQXe_=v@~1z4T;)=l_cRi2sln>8*8`^l8alRm!$ zKi;Q3;g8}!ZCrGxOqQbPm!JSZ>lz=(8F7L5C6c&kwD6*fog-0*#PN&U$*|n|=qu@~ zIh%D``tUCN9jo6NVu#m1D%7K?W=fBr-3h`zKq6NV-Y+04S|@(>gWK9CHJn>|6c?2q zG5E#sHK3Z3tVwnetFFV+kZ)dh#ch;7F^VX|^Ipo6LWj-}ui(jf;YOL*q+cm+eqZdYCqFkvRJwz>d^@G*xWEa3Azil5H8HA!pG8)rNH z1)~#>`i+QyGn5!qmTJ;}nH0vn*~)<*uVXjDN-(16zPei6LTK?`T>IH$`_UW7FCJq8 zbOi6mM^lvWu zqU{kfNFwt&bsFbfk81r9Mid~16POTpF+f-r$qlF)R?BSHxs)q~eX4by%k&vfeoM%C zzw>f-o~6uZOMPkN{^blIdt6eM5}ej;HY2&t{XNY=p{Ts2iR0{d~rj=ukgb)ePN3R%8${sfDZgAC52Hw$Zt* zkIq=J(!hRF9r1GjYJx$)J2Fr2PD9j6z@sG_S7J=nqn+2;z8J{aX@2HY3N7^Oej2U(*$>~kRWjKT4hppWSkyS6Q99jSH$97{cEh((ZC>a>0;MV9;rsnZ_ zh^ccorviBzHXWARDWXK+NjQm**M=eP8po0-;-2lNFz!4djUBCadfB7xpRQsM(!M!b 
za8N60IBKbA9;}<~a6aX{B&9Sc)fR{OoD=WYERss9p4{i?{L@7O9gjqb|1Qjw%8qhQZCPY%Tb?7b8-&hd>-f`y=?oN_sb7G zx+OW+b5XlMm~fp(NR3bR_LQ8#V`__FLIJI>A-Dxl-++&kC$NS61-M@AgDaiwm78pg z?+I^sjZqw(Lh$I~#M797 zf3oBEd(3*}4DSBg1I1cQRN+bq@VfmZPAeW7`&p*`aT(!9pt=w8*P>|-o>$d{ZuPMv z&A8oVWrapiYXG_Sq09_eYA^K0$Q|{637g<`sWz#@y+{-@Bskv7%^TiNTSADQidh3#F3VPn5R`& zRDpj_n78x{9U+G^i|dDvw-(nC=XT04yK;syQ?Eu1#N|(l;xIuGmz7L8_ON=i3S#zs zqtfPS6FNOoWpw+VRR2gzP3mbGX4S6We*Ozrvp@Vup&?ft$<}O%?W^@pXO>4T^O?>X zOHQv}?0zN`{COA#h91EUT7h5Ld40tn{1GYUxWTPN`Q~#~J=3oA1r0u1gU06Mol{MY z%I-PWgO2PwYUBr?+uadsTUFL&V0C2g@WyvB+zb8CJyNP zt>SVMu>FkZJK79>Dd>J#7=zw74dw9V`aBPwg`1)2hiRn>JWbAbg9ZdtjoyT6$~$^%Ozs5 zPk>?*F)HR`o`MP<@Xu0q)q3QW&)U|dkddQUKdGq$yVUnI%?%p#6LC3=juqd>T37SU zR^6tSAUViapE#j2XgFarw(3bu>SR;Y&Xw&-_jO~DUnI=)Y3gbyqKR`=-<+!n)>KfE zaitM2Z59Dq0v#fUa}ji~>$27@*Wm4pIdjV1@7s-V4KpBuCRnCZvX-d;1c{++tf*C6 zKIWcPtO_Ue<{sSy-2q(!smx3q7EHyUve>MQ{GCBqPzlB4>j;;JzP4oCMw$j0?bc@{ zZ%JHGsx)5rSWTc-%jic}PC8m;$OWtpxWJDwXnzE>^IHTXY?Z>kevTOce+ifSAJ3Oa zKO{16TlWwXn6sfCG7^zYQAa#nN)XsvXcz?#XGWa$gn9fUWCNon8#8W4MFOjl2bDgH z-VK+YWHNiJ^268S@y$6Zd2C?LalHk4m@J8mPAjgm70909s_A25a{o$xswQlX`KePd zNE)6%Vypf7>yvOfK%|4qf!MY=b9u0c1{}F{cp^R?rL&jJT;uP|@!@dHPXH6z7nFQu zr7rggm5(N;dc?uOW}^KhIhb`#LH|iu>NKgQQ)5OU4aT|OBkOEUbKmprqFsy-%eqo> zNP_dpX>O19bJ%!}iL$cL)j>oDG)z4NRmfFus0XF_82zy;?fS?=4E=C)2-?buT_(&m!Z zwoiLe4~O3aY&H&4FVDiMAc5oLhXjgnX_j;$_ zs2B-)K$xZZdYO{j}Z zB)1-3vSllS1CRZ?y2$e8xyfZ6^DkL%bq(R8H3dqhKOSm)$gTM~VA?9L>LCbGW9|u6d_hFEj`6tq z3l)DnhiETd#)o~jXo-DNCWkxiLVVC?qoJ9jQnTY19mB#r%8fLqE07*9cE zF$b=LI(nm65YRiV3$ARMkOT<+wb0XCQmR+ZvtvBZT$NZde&jC?>B=s}R-0q@0?_QI7yLcQ|d@{XT)yk3Y zq}HLRHB*s!q!XX*w^u!mG|8e~-BUY~!*pF5rn`$aJ@@IEGyQgXHM%OPfXFX{w!{Fz z6J7kle*aB510K*u`7B!woTXLVJf;!XyR``JJOhYtclYC8K%xO#Gy<)m!l8NXQ==4GC* z4b`hx@?jj`@qw7M{WQye1LMwG$27Fj-w$IWP8Y7#FU%&jtKsW4GUAnl!(qS7Ln7;< ztD;6mveImnMk!k1PZP_a9IZvd&i)bVlhabs*%{kX;St4&JZCC%sBfM$YjZhlG2Pa@ zrPH3h{t;>1=}f_Dh(JcX3G~=ZA=W^_aw3Bf#>gY&`bb4f(MNgog+ZQO;!jT1z`Wc0C{Pk77v+FY@_yZFyyv`|MBe~{O~q{qZcPdQ;$g7pftrqrG%-@$@#6x z5knww!p(NuY+x^sCv0;+nCA1%R)OT$!PA@_0N)V9dERq+q?A?SWpQEk7A`SoR&WuN zV0G-X*>RcX(2 z2%lN=o-XjMkfGY%+wjZQb|@5_;@%YDY!n^uSw}wqoS(pKf8u%Y_zs00mwtglrCkIU zCsb{mjBDNifY2Wc)X-3gx&=#!gQD^F2SE=Wom3~R2a+%nY0}tH6zc|-2}EA(ETwo| zYgizc7@gV6|1U4A5dca{Aumt~#?LiNsoZ}PiESV$C`8F4@;^B{Hwgd#Xa0}%4+wzd zqA9V(zPys{zaLM)tj1y7Df zJvQ2sqPmTbX#^vs{G;5S)vE2X_v4dmfF*)KDV7-Q>|L%7S>Z;_kW#ZIw@qklfl^>6 z=MVFd?gyTizhT-J&un1InYJ6QC;%zs=vT?jrPVF-)M|Chj;M%zU3)sm^V)o{V7qW z*hE?O2*gmOFT0QD^}y1@`?U_Z_%T1BO`cM3Ldfa$94>i0AzW!95v)0(ukDq0@=Ao= zd9V6M59omnPT&!H!u^w|EVW6!e&7RFfEL9lXhxY0(L4RUSnm5nNj8cSlm3Zn`!A7K z?*-J=E`hM@*|*ft9WnH)yvJB@%Z}m&KpK8aP0t4WtRthT3@y8d%I}FjRVkas|4dsJ zo2L+5W8!VO8|^6Qy)QUJA0(d?u1mmgEO>kO`c`AJpIpyA$)3!b!#xQ%jW-j6h9bk3 z?UG@xB;|HTN{#*}{F1&s9Wu;nZB?M;E2w0LR@-rtQkhnF`cNT_ON_tN#JZZ^dzVO= zs#6<(yT-%|>^>T%5cVRX5`MBEPyLSFU+PRSYKT5{{mO<-lTy8-^p20{;O@E^7S9=0<@-#7X@=;Sik(wc_m>TV*hG@|M}~| zBnh*&$#jVR&v{wO>TdmQghI;z1c36n^nd6t;8%e{idxijWyk`g>4n@#{{#91iUq*c z;j~rvC~^KYTC$rKEUJ$t-v3W|73%xa0GACIvV6hvw^|I5*+Ei!twVS{%@LZzMVbIVJiFK5oVvKf&N|6@PgQq}B^d5S8HgloX0=*|12UF8vB{v6X9+CcvS` zq>=Di0?zuNSjnB0>pL9?=?Ke->R0jpC#sT#n&y~6)TRHRViJT8Mb}qDXe9iMgYb7h z%eET<9KZm1fN5%KhMcnOE~aMFu?6d0NSQ?&o$Jb5K={F+7Wdiae$UvL^CT)9us&(! 
z_E+9g3GmOf1>?RR63p4f@&Vf1(JpnsbyGF|;)V(A&;xjbNij(U7*RJ?Fzihc38URe z`|tD3U@~fIC7og2^-;oZ3?992iKqKVxI1%A&XwJS`8zv1EJmM+0pnyNm*km%R8mVk z`()YbykoEM`7o<=1bD#i-i!tS8EyOBha{SNOC3^mWsAQrJ0kA#O314-c)+2a7duqx z;=a#xpJH#>4rgZ@9r+D>+fESpWuvn4#Z< zF*=p50O0c~Ftqj6BiQ48S!|+(&1#C%^P?1s@q$~qWn5-i93utl6bkg#wV-W45Ly0u zloSV^*{DECfx6Hs>6S4s5)ls$5;Ni_9VSgRt6CF05AM=?xjA-t?R{;NXHO`LrluFZ zFW~fcDC_n9JBoF0bhCJSzk%)7UNk}@;c~d7uW0`1vcxwxNfP30DG8_+Z&D8ke|^Eo+nHbA?;yAOG_Q>q?ie?m&lv$4_ur$P|&Y@OuBjN zrQErdyg|8b>(bRfi^VOXha^x?ucp0vOMr65s@qXqw--gONi9V$cK?x}(s9?d_KnFi zCRR$O=nQF?HCeJl5Nd@$2RFW@H<>X9Z|vC~4Q#APYhlc89&^GuZThU##dK8*@+68< z9=58*KF|a1LXaSsvBySQI2;${Zbu0~`QUr&#TA(Kt?59TflPRZ-si_*(!_|vp+A2k zRRa5+WEnG9j}VNxEfqA%`p#kKedDjteyRiJ(KfcAmU%pv$?ac1swLM{$bYZ(8&L|o z-S!7HV~X_y)lw(Jtt$!Cn!3Met$N)sRkp$9P{ek*Y-f#^x~lgR$WFc8m%_{pr0b*$ z^@0^JeKzg0#@7*=h34WzbUy_yEpNO|>gejvOTh-GuUYr5UgYgsGpfX%WE@8z^ z*!2)%SbP56tYLl*eG9m^LYurWKIxU9^%T+&;sAIUDmMW1ye)ayg3{)UD)(qY@f=!# z;&y1EAmQ|I$ z7Gs~@10YI0-(8cNn5KRm&r(K-aAz%}9845Lt)M_04o*>RH%kfhl`>ZW)-J2P?#|1l z7gp|12U2e|dEt-v15pf%-|h@4xu3!<{Fsk`u*n*NM^P-%_fddsqQvCwGl|-^L5b_2 zRH2Gbj|nG-vt-oc@fP8q=|7jJi}}f z45W8`Foyic0F_+nGMuAgeH^nIiNI`OT1alga%thTNf%blD@^E3q9 zPL#xKaroty9qrL+-Y*uvT3dbmUTGuc=U!6Hui9nn)GF+?4N*%K4BTH;vOG!0m`+P& z(F6t1O(Bc)L6aYQQ;$FKNcinT2gkwWwzcS%uGT*0D;af$G&-$O# z71cAV#c9+VsN3^P6rRYCawJYpp;+%a8Mj`@yTs~KFsdd)FmYF;Ama#P#JBH1=)V7! zAcpcwTK+={!|H`lJ}6^SzYqHwyiaenhTg0u7UN|diB%YtQe&|(qWex|-=v|cYuDte z9sV6R3SBt}C2aVu*0fdXoZ&RB^?`X_0Jk5gg*6-WNN&^cPy-%)8#m&yboffoI{|-k zQO#0hkx}i<&KRlyR&5)L`74=1<08uEtYn61U;4|ey5`JT%W52E!@N&+auk=(oA>#! z;8;qz3WVQ5Wd~Zt*mh`zTb>tHEZP1_PgZaTP^;eMG<-Z(XsTFS>oNNb5J6;{MwmtD zXTCPUL8h0UBU=nZ|KBQ8N6Dc!)kE3V9a^p8}kr=`EVl7bH?zA9TK2VN` zegYe>JdspM6E^baa*;q@xXULv1VJj z^oQ#d?UD6$-N(Pa7ZQEqe?G$o#eutT3E019CFPh8mog7KQiLSLvFn#w^(9ahY8MDw zvPZ<#BF6RnBdCY~KCct7nL}uzdGyn=oUkvx)$-X6T~(f{5hP(Fal&~j+st#OOv+K= z*#SB48eM3GdHA?e_%}1i$G42A9LE}{>a|g60~ollIU>5ut0X=e*6|C$2oSmV>3_B; zCaeJLb}8WV?3Qt&3!ju>?5%j(TL}5iSxi~tm5BO;c?uw{pAO2Fjz?kGD7ZG-?uv1 zLCz0@+KuRdOkQ&Bboa&nJfW?>RM?lVa_pGOuvLBB1ZXnDm1?iwsv)t!Cp(S=3w&Tw zK)C;PS3N^wIazq)?DsHzqt}@s0?n91$o-1Dk#Vni9j;y` z)ntRI9H8pN|ItgrHQXc?pW{?ty1s_xzqkwyiM7LLWL{s5fE0Im6s-+1> zkOJMzMMvYQ`+y(vleYO;JShGJmPpCF^b1%p>$k0J8ilFia<+yHH1wL)fyRBc^P2_05K;nlVR>r|rDFRaG4BGd zWcq)-62=?7OUWXhzgbX<15vp|*^UN>A760fbsD2mx#CWXq}d7ujiR%Y!=X4_rh>WZ zG|7f#K7@b(+tk&w-0;xp^6@_^sRyKQKO<^8#x`EMvAxX+tt~>^#kHM=m%R5GXImXt zP8!^z-2n^wm53Zm0cM}itMG?@jh?;Ly3_YooY>0RO}?mN245EjIl&X>je}h4T9#Lp zS0_{>Yj@{9QR*sdGA z^?Q}zNmR~RD)PL#pk<$(X9C3;hS5F&<6Al3_FJI)2HV871iqdsNhB9nCUC#7b)he~ z7Orv*leoa^G$F|}MVN9DbETR!ggOvFU8P;Vo?5W2*To8x3z`}T>}0<_$N&zTN-OLd zPcV5Tve07Q(In$4yecjC6B*S@$qWnJI zQVzQtMxjNp$o3ZTDLUg(yLl20d@w`YYB)B&R}PDlSi1Q7-GbxyTZC3e2KBdA-@xjY zDr^Sf90sK>m#3`mEelPW<|cK8mK7e`!#95RjA6I0f8w5>eq4vVWG|D|7%(f-$>1RV zxNqb>C8egxq?U?h-4{1xm0DoAGg7#4NHbEP>Eh`EA81W(A;eQ2YsDpaWywk;E+3sP z`_y{%gACz>VDjMi(u;8PzS4GR1}gpwxOYPynkX4Q0y6g@3@cb7P+YZjzwil?BhE?3Z=!2UA!4O-=l3aRJ*H@DtN|utfT}4TGj3h1{Lj_ zhd-=*=`_{y6yJ~G6OJ-&;Ef>P>37{fs}@zF)O@PxZ;JL^~`n^l0wx*1Pt35{`HHJ8ZZJVnrV>!}`Og>U@P}KQ<}h!0u0b zaOxF%waDc6j z^_24#0Lg+Yo0G>A)c=oB5v@c%h=Z**HaRuT>(B8PED}Zk$e4Zo3Oeo$(8vwmeJB13`lX#Jtt{SsA$1iaru@#mER@jzL zKtaZ+StF&jGou0z7?0>N`8O2Luo83m1uA3Nl8V}SeB`==DnIhgNY1c^=-p^!EVbl5 zGkG{8Z2XI(a@w~K!4Mxh+R~fJf_~vwQ*QX*KR^Q@dw1D1E8wQCV{!jVfLgA=t|Ni+ zK!4-C&Pb2`@h(eH?LFZFc9VTF}IgMT&wd{jTAv}_hez*USLnNiQ&5j_`Y;7?&f4e*}FIDkQI{{?-L*I z6DCa3lYi*v5N&ked6@ss*m{kWcY^3Bz>M?n00XmK=IRSI-4~-GEci*NQy$DCt?v9Vr}+0&Y$bKtO$=i=H5>2g*gM&TWzEHf3;0 zw_}~Jq<~j3-J^kv2%M*6t*pC6-Un-Gr0EHBX6vJE!!6S$*i{v)O04!4N*`^M(1r@s 
zYn{G35pPBl>x1-@3n~U$U_fuKzX4oLa)RS8bF{?jefIXzWuNv)WPs%U`%pfZb)AOk zw^S|@!(4^;l;z8v&v0&$eYc<+$zUV9jG_>5*%+}#R=gaQhON?3z(zq!vCbg(?5z=e zHODU)>@Ujk!Z;0nNE|2fT9s`}y?6X)^AQC(U8p69;~u}c8SQG<2W!A9&mkso&MaMDr;w zc%!mvB&f^!_x=BFR;{W<=^aDmBrq>i2>eg3O`!!3veXd~sNq8R^fyQ-V1T+m)iNLc zqhSE1X&>L9(nInwAsReOxySp5mjNfc^9@{g2L6g|-axh`X;EZp3jW8^a&O~?*ezEI z@3NEr(QN&53$C$IJ|)hkuK^u>NV92)Q6>E6pxAFvALP2WoBdyCBMd{?D z@URE=e|svv%8GCzR7?5ORV^O>=!vRqHGhL(sL>ZmXlf?09)U0@yE2n~S}*?=&hB@% zPXdHaZwh`ghjacAs$}|ePDv~#+6gwrj)fg+HSepHYalwn9K?5yt2?(%trvhD{VH|B zB4G-scq%tWrreT{Vr2W)`TZZ!Bj^(2q&F*fNa*7V3YfuWQ>U}=