gigit0000 / qwen3.c
Lightweight C inference for Qwen3 GGUF. Multi-turn prefix caching and batch processing.
☆23 · Sep 1, 2025 · Updated 5 months ago
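For context on what qwen3.c and several of the projects below operate on: GGUF model files open with a small fixed preamble (magic, version, tensor count, metadata key/value count). The following is a minimal illustrative sketch, not code from qwen3.c, of reading that preamble in plain C; it assumes a little-endian host, since GGUF stores its values little-endian.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* Fixed GGUF preamble: magic, version, tensor count, metadata kv count. */
    uint32_t magic = 0, version = 0;
    uint64_t n_tensors = 0, n_kv = 0;
    if (fread(&magic, sizeof magic, 1, f) != 1 ||
        magic != 0x46554747u /* "GGUF" read as a little-endian uint32 */) {
        fprintf(stderr, "not a GGUF file\n");
        fclose(f);
        return 1;
    }
    if (fread(&version, sizeof version, 1, f) != 1 ||
        fread(&n_tensors, sizeof n_tensors, 1, f) != 1 ||
        fread(&n_kv, sizeof n_kv, 1, f) != 1) {
        fprintf(stderr, "truncated header\n");
        fclose(f);
        return 1;
    }

    printf("GGUF v%" PRIu32 ": %" PRIu64 " tensors, %" PRIu64 " metadata entries\n",
           version, n_tensors, n_kv);
    fclose(f);
    return 0;
}
```

After this preamble, a real loader (qwen3.c included) still has to walk the metadata key/value pairs and tensor descriptors that follow; the sketch stops at the fixed-size fields.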
Alternatives and similar repositories for qwen3.c
Users interested in qwen3.c are comparing it to the libraries listed below.
- ☆11 · Sep 18, 2023 · Updated 2 years ago
- Yet another `llama.cpp` Rust wrapper ☆12 · Jun 19, 2024 · Updated last year
- Single-file, pure CUDA C implementation for running inference on Qwen3 0.6B GGUF. No dependencies. ☆22 · Nov 26, 2025 · Updated 2 months ago
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆10 · Sep 14, 2025 · Updated 5 months ago
- LLM CLI Interface - Extremely Convenient and Fast ☆12 · Sep 22, 2025 · Updated 4 months ago
- One-Click RAG Implementation, Simple and Portable ☆30 · Oct 5, 2025 · Updated 4 months ago
- Desktop application for instant AI-powered text transformation. Translate, correct, summarize, and change the tone of any text, anywhere,… ☆27 · Dec 29, 2025 · Updated last month
- Mic-controlled mouse clicks ☆17 · Oct 6, 2025 · Updated 4 months ago
- Win32 native frontend for llama-cli ☆12 · Nov 2, 2024 · Updated last year
- TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes ☆13 · Jul 1, 2025 · Updated 7 months ago
- A C++ framework for efficient training and fine-tuning of LLMs ☆27 · Feb 10, 2026 · Updated last week
- Generate a llama-quantize command to copy the quantization parameters of any GGUF ☆30 · Jan 23, 2026 · Updated 3 weeks ago
- LLM FX: an LLM server desktop client, free for everyone! ☆33 · Dec 19, 2025 · Updated last month
- ☆85 · Jan 19, 2026 · Updated 3 weeks ago
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆76 · Updated this week
- ☆20 · Sep 28, 2024 · Updated last year
- JacQues is a Dash-based interactive web application that facilitates real-time chat and document management. ☆22 · Jan 5, 2026 · Updated last month
- ☆23 · Dec 9, 2025 · Updated 2 months ago
- Minimal C implementation of speculative decoding based on llama2.c ☆25 · Jul 15, 2024 · Updated last year
- *NIX shell with local AI/LLM integration ☆24 · Feb 26, 2025 · Updated 11 months ago
- ☆22 · Aug 9, 2024 · Updated last year
- Loader extension for tabbyAPI in SillyTavern ☆26 · Jun 30, 2025 · Updated 7 months ago
- Network for procedural editing of text with LLMs ☆23 · Dec 6, 2025 · Updated 2 months ago
- ☆26 · May 31, 2024 · Updated last year
- A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama model… ☆67 · Oct 8, 2025 · Updated 4 months ago
- An Open-Source Modular AI Assistant ☆32 · Mar 20, 2025 · Updated 10 months ago
- Local drive deep search. ☆32 · Jun 4, 2025 · Updated 8 months ago
- Run LLaMA inference on CPU, with Rust 🦀🚀🦙 ☆34 · Jan 5, 2025 · Updated last year
- Robust, privacy-focused home AI assistant in Rust. ☆42 · Sep 21, 2025 · Updated 4 months ago
- Crow is a Desktop AI Assistant ☆32 · Aug 9, 2024 · Updated last year
- A live multiplayer trivia game where users can bid for the subject of the next question ☆29 · Jan 9, 2026 · Updated last month
- Text-to-Speech (TTS) engine for the Armenian language ☆12 · Sep 29, 2024 · Updated last year
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Jun 7, 2024 · Updated last year
- Oak National Academy's AI Auto Eval tools provide LLM-as-a-judge evaluation of lesson plans and resources ☆17 · Nov 4, 2025 · Updated 3 months ago
- Run Ollama LLM models in Google Colab for free ☆37 · Nov 24, 2024 · Updated last year
- MVP of an idea using multiple local LLM models to simulate and play D&D ☆96 · Apr 23, 2025 · Updated 9 months ago
- Helper package to spin up a Qdrant instance without Docker ☆13 · Dec 24, 2023 · Updated 2 years ago
- A desktop GUI for Flux 1.1 Pro built using DelphiFMX for Python ☆11 · Oct 5, 2024 · Updated last year
- Material associated with the Physics Reports review "Data science applications to string theory" ☆11 · Jun 20, 2023 · Updated 2 years ago