Lightweight inference library for ONNX files, written in C++. It can run Stable Diffusion XL 1.0 on a Raspberry Pi Zero 2 (in as little as 298 MB of RAM), as well as Mistral 7B on desktops and servers. ARM, x86, WASM, and RISC-V are supported, with acceleration via XNNPACK. Python, C#, and JS (WASM) bindings are available.
☆2,032 · Jan 20, 2026 · Updated 2 months ago
Alternatives and similar repositories for OnnxStream
Users interested in OnnxStream are comparing it to the libraries listed below.
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, …) inference in pure C/C++ · ☆5,562 · Mar 15, 2026 · Updated last week
- Llama 2 Everywhere (L2E) · ☆1,529 · Aug 27, 2025 · Updated 6 months ago
- ☆1,274 · Oct 24, 2023 · Updated 2 years ago
- Tiny Dream - An embedded, header-only Stable Diffusion C++ implementation · ☆265 · Oct 31, 2023 · Updated 2 years ago
- Distribute and run LLMs with a single file. · ☆23,794 · Mar 14, 2026 · Updated last week
- This repository contains a pure C++ ONNX implementation of multiple offline AI models, such as StableDiffusion (1.5 and XL), ControlNet, … · ☆633 · May 29, 2025 · Updated 9 months ago
- Tensor library for machine learning · ☆14,252 · Updated this week
- Fast stable diffusion on CPU and AI PC · ☆2,018 · Jan 10, 2026 · Updated 2 months ago
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading · ☆10,020 · Sep 7, 2024 · Updated last year
- Stable Diffusion inference in pure C++ · ☆67 · Dec 23, 2022 · Updated 3 years ago
- Inference Llama 2 in one file of pure C · ☆19,262 · Aug 6, 2024 · Updated last year
- An extensible, easy-to-use, and portable diffusion web UI 👨🎨 · ☆1,673 · Aug 18, 2023 · Updated 2 years ago
- Port of OpenAI's Whisper model in C/C++ · ☆47,689 · Updated this week
- LLM inference in C/C++ · ☆98,098 · Updated this week
- Bringing stable diffusion models to web browsers. Everything runs inside the browser with no server support. · ☆3,712 · Mar 12, 2024 · Updated 2 years ago
- StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models · ☆6,222 · Aug 10, 2024 · Updated last year
- High-speed Large Language Model Serving for Local Deployment · ☆8,834 · Jan 24, 2026 · Updated last month
- A fast inference library for running LLMs locally on modern consumer-class GPUs · ☆4,468 · Mar 4, 2026 · Updated 2 weeks ago
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones · ☆1,308 · Feb 5, 2026 · Updated last month
- An open-source text-to-speech system built by inverting Whisper. · ☆4,576 · Dec 14, 2025 · Updated 3 months ago
- A simple "Be My Eyes" web app with a llama.cpp/llava backend · ☆493 · Nov 28, 2023 · Updated 2 years ago
- ☆256 · Jul 15, 2023 · Updated 2 years ago
- Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) · ☆570 · Aug 8, 2023 · Updated 2 years ago
- Simple UI for LLM model finetuning · ☆2,061 · Dec 21, 2023 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. · ☆8,922 · May 3, 2024 · Updated last year
- ☆723 · Aug 15, 2025 · Updated 7 months ago
- Universal LLM Deployment Engine with ML Compilation · ☆22,246 · Updated this week
- 3D to Photo is an open-source package by Dabble that combines three.js and Stable Diffusion to build a virtual photo studio for product p… · ☆449 · Jan 10, 2024 · Updated 2 years ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… · ☆4,709 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks · ☆7,201 · Jul 11, 2024 · Updated last year
- tiniest x86-64-linux emulator · ☆7,465 · Dec 10, 2025 · Updated 3 months ago
- build-once run-anywhere C library · ☆20,653 · Mar 6, 2026 · Updated 2 weeks ago
- You like pytorch? You like micrograd? You love tinygrad! ❤️ · ☆31,592 · Updated this week
- [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models · ☆6,152 · Jun 24, 2024 · Updated last year
- Running large language models on a single GPU for throughput-oriented scenarios. · ☆9,380 · Oct 28, 2024 · Updated last year
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset · ☆7,537 · Jul 16, 2023 · Updated 2 years ago
- Generative fill in 3D. · ☆744 · Dec 17, 2024 · Updated last year
- High-performance in-browser LLM inference engine · ☆17,616 · Mar 13, 2026 · Updated last week
- Lightweight, standalone C++ inference engine for Google's Gemma models. · ☆6,749 · Mar 13, 2026 · Updated last week