# Troubleshooting
Issues specific to Llamactl deployment and operation.
## Configuration Issues

### Invalid Configuration
Problem: Invalid configuration preventing startup
Solutions:
1. Use a minimal configuration to rule out syntax errors.
2. Check data directory permissions (both are sketched below).
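A minimal sketch of both checks; the `server` keys and the config file path are assumptions to verify against the Configuration Guide, and the data directory is inferred from the log path shown under Viewing Instance Logs below:

```yaml
# ~/.config/llamactl/config.yaml (assumed location)
# Minimal config: defaults everywhere, explicit server binding only
server:
  host: "0.0.0.0"
  port: 8080
```

```bash
# Verify the user running llamactl can write to the data directory
ls -ld ~/.local/share/llamactl
touch ~/.local/share/llamactl/.write-test && rm ~/.local/share/llamactl/.write-test
```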
## Instance Management Issues

### Instance Fails to Start
Problem: Instance fails to start or immediately stops
Solutions:
1. Check instance logs to see the actual error (see Viewing Instance Logs below).

2. Verify the backend is installed (see the checks after this list):

   - llama.cpp: Ensure `llama-server` is in PATH
   - MLX: Ensure the `mlx-lm` Python package is installed
   - vLLM: Ensure the `vllm` Python package is installed

3. Check the model path and format:

   - Use absolute paths to model files
   - Verify the model format matches the backend (GGUF for llama.cpp, etc.)

4. Verify the backend command configuration:

   - Check that the backend `command` is correctly configured in the global config
   - For virtual environments, specify the full path to the command (e.g., `/path/to/venv/bin/mlx_lm.server`)
   - See the Configuration Guide for backend configuration details
   - Test the backend directly (see Backend-Specific Issues below)
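The installation checks in step 2 can be run directly; a minimal sketch, assuming the backends live in the same environment llamactl is configured to use:

```bash
# llama.cpp: the server binary must be on PATH
which llama-server

# MLX and vLLM: the Python packages must import cleanly
python -c "import mlx_lm"
python -c "import vllm"
```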
### Backend-Specific Issues
Problem: Model loading, memory, GPU, or performance issues
Most model-specific issues (memory, GPU configuration, performance tuning) are backend-specific and should be resolved by consulting the respective backend documentation:
llama.cpp:
- llama.cpp GitHub
- llama-server README
MLX:
- MLX-LM GitHub
- MLX-LM Server Guide
vLLM:
- vLLM Documentation
- OpenAI Compatible Server
- vllm serve Command
Testing backends directly:
Testing your model and configuration directly with the backend helps determine if the issue is with llamactl or the backend itself:
```bash
# llama.cpp
llama-server --model /path/to/model.gguf --port 8081

# MLX
mlx_lm.server --model mlx-community/Mistral-7B-Instruct-v0.3-4bit --port 8081

# vLLM
vllm serve microsoft/DialoGPT-medium --port 8081
```
## API and Network Issues

### CORS Errors
Problem: Web UI shows CORS errors in browser console
Solutions:
1. Configure allowed origins:
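A minimal sketch, assuming an `allowed_origins` list under the `server` section; verify the key name against the Configuration Guide:

```yaml
server:
  # Origins allowed to call the API from a browser (assumed key name)
  allowed_origins:
    - "http://localhost:3000"
    - "https://llamactl.yourdomain.com"
```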
### Authentication Issues
Problem: API requests failing with authentication errors
Solutions:
1. Disable authentication temporarily.
2. Configure API keys.
3. Use the correct Authorization header (see the sketch after this list).
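A minimal sketch of all three steps; the `auth` key names are assumptions to verify against the Configuration Guide, and the endpoint reuses the one from Viewing Instance Logs below:

```yaml
auth:
  # Step 1: temporarily disable auth while debugging (assumed keys)
  require_inference_auth: false
  require_management_auth: false
  # Step 2: re-enable and configure keys once the problem is isolated
  # management_keys:
  #   - "your-management-key"
```

```bash
# Step 3: pass the key as a Bearer token
curl -H "Authorization: Bearer your-management-key" \
  http://localhost:8080/api/v1/instances/{name}/logs
```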
## Remote Node Issues

### Node Configuration
Problem: Remote instances not appearing or cannot be managed
Solutions:
1. Verify node configuration:

   ```yaml
   local_node: "main"  # Must match a key in nodes map
   nodes:
     main:
       address: ""  # Empty for local node
     worker1:
       address: "http://worker1.internal:8080"
       api_key: "secure-key"  # Must match worker1's management key
   ```
2. Check node name consistency:

   - `local_node` on each node must match what other nodes call it
   - Node names are case-sensitive

3. Test remote node connectivity (see the sketch below):
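A minimal sketch reusing the worker address and key from the config above; the instances endpoint is an assumption:

```bash
# A JSON response (rather than a connection error or 401) means the
# node is reachable and the API key matches
curl -H "Authorization: Bearer secure-key" \
  http://worker1.internal:8080/api/v1/instances
```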
## Debugging and Logs

### Viewing Instance Logs
```bash
# Get instance logs via API
curl http://localhost:8080/api/v1/instances/{name}/logs

# Or check log files directly
tail -f ~/.local/share/llamactl/logs/{instance-name}.log
```
### Enable Debug Logging
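One approach is to raise the log level when launching the server; the `LLAMACTL_LOG_LEVEL` variable is an assumption, so confirm the exact setting in the Configuration Guide:

```bash
# Run llamactl in the foreground with verbose logging (assumed variable name)
LLAMACTL_LOG_LEVEL=debug llamactl
```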
## Getting Help
When reporting issues, include:
- System information (see the commands below)
- Configuration file (remove sensitive keys)
- Relevant log output
- Steps to reproduce the issue
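A quick way to collect the system information, assuming the binary exposes a `--version` flag:

```bash
llamactl --version   # llamactl build (assumed flag)
uname -a             # OS and kernel details
```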