Mathis / llamactl
Mirror of https://github.com/lordmathis/llamactl.git, synced 2025-11-06 00:54:23 +00:00
58 commits · 2 branches · 19 tags
Latest commit: 451076b2bf by LordMathis (2025-07-24 23:04:28 +02:00): Add OpenAIProxy handler to route requests based on instance name
server      Add OpenAIProxy handler to route requests based on instance name   2025-07-24 23:04:28 +02:00
ui          Implement working health check                                      2025-07-24 22:57:32 +02:00
.gitignore  Initial UI setup with React, Vite, shadcn and Tailwind CSS          2025-07-20 19:52:03 +02:00
LICENSE     Initial commit                                                      2025-07-16 20:02:28 +02:00
README.md   Initial commit                                                      2025-07-16 20:02:28 +02:00
README.md
llamactl

Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.

Topics: llama-cpp, llamacpp, llama-server, llm, llm-inference, llm-router, localllama, localllm, mlx, mlx-lm, openai-api, self-hosted, vllm
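
The latest commit message, "Add OpenAIProxy handler to route requests based on instance name", hints at how routing works: an OpenAI-compatible proxy picks the backend model server by instance name. The sketch below is only a minimal illustration of that idea, not llamactl's actual implementation; the `Registry` type, the use of the OpenAI `model` field as the instance name, and the paths and ports are all assumptions made for the example.

```go
// Minimal sketch: route OpenAI-style requests to a backend instance chosen
// by instance name. Everything here (Registry, the "model" field carrying
// the instance name, paths, ports) is hypothetical, not llamactl's API.
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

// Registry maps instance names to the base URLs of running model servers
// (e.g. a llama-server, MLX or vLLM process).
type Registry struct {
	mu        sync.RWMutex
	instances map[string]*url.URL
}

func NewRegistry() *Registry {
	return &Registry{instances: make(map[string]*url.URL)}
}

func (r *Registry) Add(name, baseURL string) error {
	u, err := url.Parse(baseURL)
	if err != nil {
		return err
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	r.instances[name] = u
	return nil
}

func (r *Registry) Lookup(name string) (*url.URL, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	u, ok := r.instances[name]
	return u, ok
}

// openAIProxy reads the "model" field from the JSON body, treats it as the
// instance name, and reverse-proxies the request to that instance.
func openAIProxy(reg *Registry) http.HandlerFunc {
	return func(w http.ResponseWriter, req *http.Request) {
		body, err := io.ReadAll(req.Body)
		if err != nil {
			http.Error(w, "failed to read request body", http.StatusBadRequest)
			return
		}
		// Restore the body so the reverse proxy can forward it unchanged.
		req.Body = io.NopCloser(bytes.NewReader(body))

		var payload struct {
			Model string `json:"model"`
		}
		if err := json.Unmarshal(body, &payload); err != nil || payload.Model == "" {
			http.Error(w, `missing or invalid "model" field`, http.StatusBadRequest)
			return
		}

		target, ok := reg.Lookup(payload.Model)
		if !ok {
			http.Error(w, "unknown instance: "+payload.Model, http.StatusNotFound)
			return
		}
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, req)
	}
}

func main() {
	reg := NewRegistry()
	// Hypothetical instance: a llama-server listening locally.
	_ = reg.Add("llama3-8b", "http://127.0.0.1:8081")

	http.HandleFunc("/v1/chat/completions", openAIProxy(reg))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Under these assumptions, any OpenAI-compatible client could target a specific instance simply by setting the model name in its request; whether llamactl resolves instances exactly this way is not confirmed by this page.
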
License: MIT
Size: 2.9 MiB
Languages: Go 59%, TypeScript 39.5%, CSS 0.9%, JavaScript 0.5%