QueAI orchestrates AI modules on your hardware or in the cloud — your choice, every time. Chat, RAG, STT, TTS, OCR and more. One platform. Zero vendor lock-in.
Each module runs locally, in the cloud, or both — you decide per deployment. Install, start, stop and remove with one click from the QueAI dashboard.
Conversational AI with any LLM. Run Llama, Mistral or Phi locally with Ollama, or connect to OpenAI, Anthropic and other cloud providers — same interface.
Query your documents in plain language. Run vector DB and embeddings locally for full privacy, or use cloud embeddings for faster inference. Your choice.
Transcribe audio offline with Whisper for full privacy, or push to cloud APIs for faster throughput. Meetings, dictation and recordings to text in seconds.
High-quality voice synthesis on your own hardware with Coqui or Piper, or route to cloud TTS for premium voices — no extra infrastructure required.
Extract text from scanned documents, images and PDFs — entirely offline. Invoices, forms and physical documents digitized without leaving your network.
Extend QueAI with your own module using a simple manifest.json. Local agent, cloud workflow or hybrid — the ecosystem is built by the community.
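As an illustration only — the real manifest schema is defined by the QueAI docs, and every field name below is an assumption — a module manifest might look like this:

```json
{
  "name": "my-sentiment-module",
  "version": "0.1.0",
  "description": "Hypothetical community module for sentiment analysis",
  "deployment": ["local", "cloud"],
  "image": "ghcr.io/example/sentiment:0.1.0",
  "port": 9100
}
```

Check the official contribution guide for the actual required fields before publishing a module.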
One command works on Linux, macOS and Windows (WSL2). No manual dependency management, no broken environments.
A single curl command detects your OS, verifies Docker and pulls the QueAI core image. Works on any platform.
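The shape of that one-liner is sketched below; the URL is a placeholder, not the real installer address — copy the exact command from the official QueAI documentation.

```
# Placeholder URL — substitute the install script address from the
# official QueAI docs before running.
curl -fsSL https://example.com/queai/install.sh | sh
```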
Navigate to localhost:8080 in your browser. QueAI's module marketplace is ready.
Browse the catalog, pick local or cloud deployment per module, and click Install. Each module runs as an isolated Docker container.
Chat with documents, transcribe audio, extract text from images. All orchestrated by QueAI, all under your control.
Whether you need full privacy, predictable costs or freedom from API dependencies — QueAI puts you in charge.
Integrate LLMs, STT or RAG into your projects via local REST endpoints. No billing surprises, no API rate limits, no data leaving your machine.
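As a sketch of what such an integration could look like — the endpoint path, port and payload shape below are assumptions for illustration, not QueAI's documented API — a chat request against a local module might be prepared like this:

```python
import json
import urllib.request

# Hypothetical endpoint; check your QueAI dashboard for the actual
# route exposed by the chat module you installed.
QUEAI_CHAT_URL = "http://localhost:8080/api/chat"

def build_chat_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a POST request for a local QueAI chat endpoint (assumed schema)."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        QUEAI_CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize this contract in two sentences.")
# To send it (requires a running QueAI instance):
#   with urllib.request.urlopen(req) as resp:
#       reply = json.load(resp)
```

Because the endpoint is local, there are no per-token charges and no data leaves the machine, matching the deployment model described above.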
Chat with your internal documents, contracts and manuals without sending a single byte to third-party servers. RAG + OCR on-premise, fully self-hosted.
Professional secrecy, patient records, client data — QueAI never routes your files through external servers unless you explicitly configure it to. Full compliance control.
Swap models, mix local and cloud providers, build custom pipelines with plugins. QueAI is the playground for anyone who wants to push AI capabilities further.
Your data never leaves your machine unless you choose cloud. Zero telemetry, zero mandatory external calls.
Local and cloud are complementary options. Mix providers per module based on your needs, budget and privacy requirements.
The QueAI orchestration core is fully open source and free. No freemium gates, no hidden limits in the base platform.
Every capability is a decoupled plugin. Activate what you need, remove what you don't. No bloat, no assumptions.
A UX designed for people who aren't infrastructure experts. Installing a module shouldn't require a tutorial.
Public roadmap, newcomer-labeled issues, clear contribution guides. The ecosystem grows because anyone can build on it.
QueAI is an open source project with a transparent roadmap and a community-first approach. Every plugin, issue and pull request makes AI more accessible.
Look for issues labeled good first issue and start contributing today.
Install QueAI in minutes and start orchestrating AI — locally, in the cloud, or both. No credit card. No lock-in. Just AI.
Install QueAI — It's free