WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
☆1,037 · Dec 17, 2025 · Updated 3 months ago
Alternatives and similar repositories for wllama
Users interested in wllama are comparing it to the libraries listed below.
- WebAssembly (Wasm) Build and Bindings for llama.cpp ☆288 · Jul 23, 2024 · Updated last year
- A cross-platform browser ML framework. ☆756 · Apr 2, 2026 · Updated 2 weeks ago
- High-performance In-browser LLM Inference Engine ☆17,740 · Apr 8, 2026 · Updated last week
- Run AI models locally on your machine with node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation le… ☆1,999 · Updated this week
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆245 · Jun 5, 2024 · Updated last year
- State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server! ☆15,845 · Updated this week
- Suno AI's Bark model in C/C++ for fast text-to-speech generation ☆855 · Nov 16, 2024 · Updated last year
- Minimalist web-searching platform with an AI assistant that runs directly from your browser. Uses WebLLM, Wllama and SearXNG. Demo: https… ☆554 · Updated this week
- Distribute and run LLMs with a single file. ☆24,121 · Updated this week
- Local AI API Platform ☆2,762 · Jul 4, 2025 · Updated 9 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆855 · Apr 3, 2026 · Updated last week
- Fast, flexible LLM inference ☆6,994 · Updated this week
- 🕸️🦀 A WASM vector similarity search written in Rust ☆1,056 · Sep 20, 2023 · Updated 2 years ago
- Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference in pure C/C++ ☆5,726 · Updated this week
- Cleanai (https://github.com/willmil11/cleanai) except I'm making it in C now. Fast and clean from the start this time :) ☆17 · Mar 6, 2026 · Updated last month
- A vector search SQLite extension that runs anywhere! ☆7,395 · Apr 8, 2026 · Updated last week
- React Native binding of llama.cpp ☆910 · Updated this week
- Tensor library for machine learning ☆14,394 · Apr 9, 2026 · Updated last week
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference. ☆2,892 · Feb 10, 2026 · Updated 2 months ago
- Vercel and web-llm template to run wasm models directly in the browser. ☆172 · Nov 21, 2023 · Updated 2 years ago
- LLM inference in C/C++ ☆103,237 · Updated this week
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model ☆1,567 · Mar 23, 2025 · Updated last year
- Universal LLM Deployment Engine with ML Compilation ☆22,414 · Apr 6, 2026 · Updated last week
- A JavaScript library (with TypeScript types) to parse metadata of GGML-based GGUF files. ☆52 · Jul 30, 2024 · Updated last year
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,621 · Feb 8, 2026 · Updated 2 months ago
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative to projects like llm-d, Docker Model R… ☆1,520 · Updated this week
- GGUF implementation in C as a library and a tools CLI program ☆311 · Aug 28, 2025 · Updated 7 months ago
- Python bindings for llama.cpp ☆10,181 · Updated this week
- Code for Papeg.ai ☆229 · Jan 5, 2025 · Updated last year
- ☆16 · Feb 21, 2026 · Updated last month
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vllm, etc. ☆3,212 · Updated this week
- llama.cpp Rust bindings ☆419 · Jun 27, 2024 · Updated last year
- Web browser version of StarCoder.cpp ☆46 · Jul 30, 2023 · Updated 2 years ago
- A simple library for working with Hugging Face models. ☆14 · Dec 30, 2024 · Updated last year
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,175 · Updated this week
- ojjson is a library designed to facilitate JSON interactions with Ollama, a large language model (LLM) runner. It leverages the power of Zod for s… ☆12 · Nov 7, 2024 · Updated last year
- Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations. ☆998 · Feb 18, 2026 · Updated last month
- Controllable Language Model Interactions in TypeScript ☆10 · May 17, 2024 · Updated last year
- ☆135 · Apr 8, 2026 · Updated last week