
# Homelab

Local AI services (LLM inference, chat, audio) and infrastructure on a Mac Mini M4 Pro.

## Services

| Service  | Runtime         | Purpose                                               |
|----------|-----------------|-------------------------------------------------------|
| llamactl | launchd         | LLM model management / routing (llama.cpp, MLX, vLLM) |
| agentkit | Docker (Colima) | Chat UI with tools and skills                         |
| glances  | launchd         | System monitoring dashboard                           |
| logdy    | launchd         | Log viewer UI                                         |
| audio    | launchd         | OpenAI-compatible STT + TTS API                       |

Nginx reverse-proxies all services, with optional authentication; the audio service is internal-only.

## Setup

Install all system dependencies:

```sh
cd homebrew
brew bundle install
```

Secrets go in `.env` files (gitignored). Never commit them. Python projects use `uv` with `pyproject.toml`.
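As a rough illustration of what a uv-managed service here might declare (the project name and dependencies below are placeholders, not taken from this repo), a minimal `pyproject.toml` looks like:

```toml
[project]
name = "audio-service"        # placeholder name, not the repo's actual project
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "fastapi",                # illustrative dependencies only
    "mlx-audio",
]
```

`uv sync` then creates the virtual environment and installs the pinned dependencies.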

## Llamactl

Llamactl provides unified management and routing for llama.cpp, MLX, and vLLM models, with a web dashboard. The config is generated from `config.template.yaml` by `setup.sh` (which uses `envsubst`).

```sh
cd llamactl
./setup.sh    # Initial setup (generates config from template)
./start.sh    # Start service
./stop.sh     # Stop service
```
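The `envsubst` step above replaces `${VAR}` references in the template with environment-variable values. As a sketch of that same substitution in Python's stdlib (the variable names and template snippet here are hypothetical, not from the real `config.template.yaml`):

```python
import os
from string import Template

# Hypothetical template lines in the style of config.template.yaml;
# both envsubst and string.Template replace ${VAR} with the value of
# the environment variable VAR.
template = Template("models_dir: ${MODELS_DIR}\nport: ${LLAMACTL_PORT}\n")

# Example values only
os.environ["MODELS_DIR"] = "/opt/models"
os.environ["LLAMACTL_PORT"] = "8080"

rendered = template.substitute(os.environ)
print(rendered)
```

`Template.substitute` raises `KeyError` on a missing variable, which makes a broken `.env` fail loudly instead of emitting a half-rendered config.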

## AgentKit

A flexible chat client with a web UI that integrates multiple AI providers, tools, and agent frameworks through a unified plugin architecture.

```sh
cd agentkit
docker compose up -d --build   # Build and start
docker compose down            # Stop
```

Plugins live in `agentkit/plugins/` and are volume-mounted into the container:

```
plugins/
  tools/<name>/     # Toolset plugin (extends ToolSetHandler, auto-discovered on startup)
  skills/<name>/    # Agent skill (SKILL.md, auto-discovered on startup)
```
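The real `ToolSetHandler` interface isn't shown in this README, so the base class below is a stand-in used only to illustrate the shape of a `tools/<name>/` plugin; the method names and discovery details are assumptions:

```python
# Stand-in for AgentKit's ToolSetHandler; the real interface may differ.
class ToolSetHandler:
    name: str = "base"

    def tools(self) -> dict:
        """Map tool names to callables the agent can invoke."""
        return {}


# Hypothetical plugin, e.g. plugins/tools/workout/handler.py
class WorkoutTools(ToolSetHandler):
    name = "workout"

    def tools(self) -> dict:
        return {"log_set": self.log_set}

    def log_set(self, exercise: str, reps: int) -> str:
        # A real plugin would persist this somewhere; here we just echo it.
        return f"Logged {reps} reps of {exercise}"


handler = WorkoutTools()
result = handler.tools()["log_set"]("squat", 5)
print(result)
```

Because plugins are volume-mounted, dropping a new directory under `plugins/tools/` and restarting the container is enough for auto-discovery to pick it up.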

AgentKit connects to the audio service at `host.docker.internal:9100`.

## Audio Service

OpenAI-compatible audio API using `mlx-audio`. Models are lazy-loaded and auto-unloaded after 60 minutes of inactivity.

| Endpoint                    | Method | Description                                                       |
|-----------------------------|--------|-------------------------------------------------------------------|
| `/v1/audio/transcriptions`  | POST   | STT via Whisper (mlx-community/whisper-large-v3-turbo-asr-fp16)   |
| `/v1/audio/speech`          | POST   | TTS via Chatterbox (mlx-community/chatterbox-fp16, 23 languages)  |

```sh
cd audio
./start.sh    # Start service
./stop.sh     # Stop service
```

Test scripts: `test_stt.py` (file → transcript) and `test_tts.py` (text/file → WAV; supports `-l` to select the language).
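As a sketch of an OpenAI-style TTS call against this service using only the stdlib (the model string and port come from this README; the `language` field is an assumption mirroring `test_tts.py`'s `-l` flag, and the request is built but deliberately not sent here):

```python
import json
import urllib.request

# From inside AgentKit's container this would be host.docker.internal:9100.
AUDIO_BASE = "http://localhost:9100"

def build_speech_request(text: str, language: str = "en") -> urllib.request.Request:
    """Build (but do not send) a POST to the OpenAI-compatible TTS endpoint."""
    body = json.dumps({
        "model": "mlx-community/chatterbox-fp16",  # TTS model from the table above
        "input": text,
        "language": language,  # assumption: mirrors test_tts.py's -l flag
    }).encode()
    return urllib.request.Request(
        f"{AUDIO_BASE}/v1/audio/speech",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_speech_request("Hello from the homelab", language="de")
print(req.full_url)
```

With the service running, `urllib.request.urlopen(req)` should return audio bytes to write to a WAV file.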

## Proxy

Nginx reverse proxy with optional Authelia authentication. Routes are declared in `nginx/config.yaml` and rendered by `nginx/setup.py` from a Jinja2 template.

```sh
python nginx/setup.py          # Regenerate config, test, and reload Nginx
brew services stop nginx       # Stop Nginx
```
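The `config.yaml` route schema isn't documented in this README, so the routes below are a guess at its shape, and plain string formatting stands in for the real Jinja2 template; the Authelia include path is likewise hypothetical:

```python
# Hypothetical routes, in the spirit of nginx/config.yaml (real schema may differ).
routes = [
    {"host": "chat.home.lan", "upstream": "http://127.0.0.1:9000", "auth": True},
    {"host": "logs.home.lan", "upstream": "http://127.0.0.1:9011", "auth": False},
]

BLOCK = """server {{
    server_name {host};
    location / {{
        {auth_line}proxy_pass {upstream};
    }}
}}"""

def render(routes: list) -> str:
    """Render one server block per route, gating auth'd routes on Authelia."""
    blocks = []
    for r in routes:
        # Hypothetical snippet path; only emitted for routes with auth enabled.
        auth_line = "include snippets/authelia.conf;\n        " if r["auth"] else ""
        blocks.append(BLOCK.format(host=r["host"],
                                   upstream=r["upstream"],
                                   auth_line=auth_line))
    return "\n\n".join(blocks)

config = render(routes)
print(config)
```

Generating the config from one declarative file keeps per-service boilerplate out of `nginx.conf` and makes adding a route a one-line change.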

## Monitoring

Real-time system monitoring dashboard using Glances, with a web-based UI for CPU, memory, disk, and network stats.

```sh
cd glances
./start.sh    # Start service
./stop.sh     # Stop service
```

## Logdy

Web-based log viewer using Logdy. It follows logs from all services (llamactl, audio, glances, agentkit) on port 9011.

```sh
cd logdy
./start.sh    # Start service
./stop.sh     # Stop service
```