ngxson / wllama
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
☆987 · Updated last month
Alternatives and similar repositories for wllama
Users interested in wllama are comparing it to the libraries listed below.
- WebAssembly (Wasm) Build and Bindings for llama.cpp ☆285 · Updated last year
- VS Code extension for LLM-assisted code/text completion ☆1,139 · Updated 2 weeks ago
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 ☆1,430 · Updated 2 weeks ago
- Chat with AI large language models running natively in your browser. Enjoy private, server-free, seamless AI conversations. ☆933 · Updated 4 months ago
- ☆786 · Updated this week
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,586 · Updated last month
- EntityDB is an in-browser vector database wrapping IndexedDB and Transformers.js over WebAssembly ☆270 · Updated 8 months ago
- Run Large Language Models (LLMs) 🚀 directly in your browser! ☆222 · Updated last year
- 🕸️🦀 A WASM vector similarity search written in Rust ☆1,046 · Updated 2 years ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆785 · Updated this week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ☆612 · Updated 11 months ago
- On-device LLM Inference Powered by X-Bit Quantization ☆278 · Updated last week
- Suno AI's Bark model in C/C++ for fast text-to-speech generation ☆852 · Updated last year
- Vercel and web-llm template to run wasm models directly in the browser. ☆169 · Updated 2 years ago
- Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output on the generation le… ☆1,842 · Updated last week
- Big & small LLMs working together ☆1,258 · Updated this week
- LM Studio Apple MLX engine ☆870 · Updated 2 weeks ago
- FastMLX is a high-performance, production-ready API to host MLX models. ☆341 · Updated 10 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆458 · Updated last year
- Large-scale LLM inference engine ☆1,641 · Updated last week
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆629 · Updated last year
- An extremely fast implementation of Whisper optimized for Apple Silicon using MLX. ☆867 · Updated last year
- A JavaScript library that brings vector search and RAG to your browser! ☆158 · Updated last year
- VS Code AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated last year
- LLM-powered lossless compression tool ☆300 · Updated last month
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆2,067 · Updated this week
- Local AI API Platform ☆2,761 · Updated 6 months ago
- A collection of 🤗 Transformers.js demos and example applications ☆1,940 · Updated 2 months ago
- Vectra is a local vector database for Node.js with features similar to Pinecone but built using local files. ☆575 · Updated last week
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆393 · Updated last week