New TUI dropped for managing LLM traffic and GPU resources 🔥
🌀 **ollamaMQ** — Async message queue proxy for Ollama
💯 Per-user queues, fair-share scheduling, OpenAI-compatible endpoints, streaming (quick sketches below)
🦀 Written in Rust & built with @ratatui_rs
⭐ GitHub: https://github.com/Chleba/ollamaMQ
#rustlang #ratatui #tui #gpu #llm #ollama #backend #proxy #terminal
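
Fair-share over per-user queues, in a nutshell: every user gets a FIFO queue and the scheduler round-robins across users, so one heavy user can't starve everyone else. A minimal std-only Rust sketch of the idea (not ollamaMQ's actual code):

```rust
use std::collections::{HashMap, VecDeque};

struct Request {
    user: String,
    prompt: String,
}

#[derive(Default)]
struct FairScheduler {
    queues: HashMap<String, VecDeque<Request>>, // one FIFO queue per user
    order: VecDeque<String>,                    // round-robin rotation of users with work
}

impl FairScheduler {
    fn enqueue(&mut self, req: Request) {
        let user = req.user.clone();
        let q = self.queues.entry(user.clone()).or_default();
        if q.is_empty() {
            self.order.push_back(user); // user (re)enters the rotation
        }
        q.push_back(req);
    }

    // Pop the next request, cycling users so no single user monopolizes the GPU.
    fn next(&mut self) -> Option<Request> {
        let user = self.order.pop_front()?;
        let q = self.queues.get_mut(&user)?;
        let req = q.pop_front();
        if !q.is_empty() {
            self.order.push_back(user); // still has work: back of the line
        }
        req
    }
}

fn main() {
    let mut sched = FairScheduler::default();
    // alice floods the queue; bob sends a single request
    for i in 0..3 {
        sched.enqueue(Request { user: "alice".into(), prompt: format!("a{i}") });
    }
    sched.enqueue(Request { user: "bob".into(), prompt: "b0".into() });

    while let Some(req) = sched.next() {
        println!("{} -> {}", req.user, req.prompt);
    }
}
```

With alice queuing three prompts and bob one, bob gets served second rather than last — that's the fair-share payoff.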
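
And "OpenAI-compatible" should mean pointing an existing client at the proxy is just a base-URL swap. A hedged example with `reqwest` + `serde_json` (needs the `blocking` and `json` features; the port 8080 and route here are placeholders, check the repo for the real config):

```rust
use reqwest::blocking::Client;
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed proxy address; substitute whatever ollamaMQ is configured to listen on.
    let resp = Client::new()
        .post("http://localhost:8080/v1/chat/completions")
        .json(&json!({
            "model": "llama3",
            "messages": [{ "role": "user", "content": "Hello!" }],
            "stream": false
        }))
        .send()?
        .text()?;
    println!("{resp}");
    Ok(())
}
```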