tangledgroup / llama-cpp-wasm
WebAssembly (Wasm) Build and Bindings for llama.cpp
⭐ 278 · Updated last year
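For context, llama-cpp-wasm and most of the projects listed below follow the same pattern: fetch a GGUF model, load it into a llama.cpp build compiled to WebAssembly (usually inside a Web Worker), and stream generated tokens back to the page. The TypeScript sketch below illustrates that flow; the module path, `LlamaCpp` class, option names, and callback order are illustrative assumptions rather than a statement of llama-cpp-wasm's exact API, so check the repository's README for the real entry points.

```typescript
// Sketch of on-browser llama.cpp inference through a Wasm wrapper.
// NOTE: the import path, class name, callbacks and option names are
// assumptions for illustration only; consult llama-cpp-wasm's README
// for its actual API surface.
import { LlamaCpp } from "./llama-cpp-wasm/llama.js"; // hypothetical import

const MODEL_URL =
  "https://huggingface.co/.../model-q4_k_m.gguf"; // any small GGUF quantization

const app = new LlamaCpp(
  MODEL_URL,
  // model-loaded callback: fires once the GGUF file is fetched and the Wasm module is ready
  () => {
    app.run({
      prompt: "Explain WebAssembly in one sentence.",
      ctx_size: 2048,
      temp: 0.7,
      n_predict: 128,
    });
  },
  // token callback: chunks are streamed back from the worker as they are generated
  (text: string) => {
    document.getElementById("output")!.textContent += text;
  },
  // completion callback: generation finished (EOS or n_predict reached)
  () => console.log("generation complete"),
);
```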
Alternatives and similar repositories for llama-cpp-wasm
Users interested in llama-cpp-wasm are comparing it to the libraries listed below.
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference · ⭐ 843 · Updated last month
- Run Large-Language Models (LLMs) 🚀 directly in your browser! · ⭐ 214 · Updated 11 months ago
- Vercel and web-llm template to run wasm models directly in the browser. · ⭐ 160 · Updated last year
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces · ⭐ 140 · Updated last year
- JS tokenizer for LLaMA 1 and 2 · ⭐ 358 · Updated last year
- JS tokenizer for LLaMA 3 and LLaMA 3.1 · ⭐ 116 · Updated 3 weeks ago
- EntityDB is an in-browser vector database wrapping indexedDB and Transformers.js over WebAssembly (a generic sketch of the underlying vector-search operation follows after this list) · ⭐ 204 · Updated 3 months ago
- Browser-compatible JS library for running language models · ⭐ 229 · Updated 3 years ago
- SemanticFinder - frontend-only live semantic search with transformers.js · ⭐ 293 · Updated 4 months ago
- A JavaScript library that brings vector search and RAG to your browser! · ⭐ 139 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU · ⭐ 103 · Updated 2 years ago
- Web-optimized vector database (written in Rust). · ⭐ 252 · Updated 5 months ago
- Inference Llama 2 in one file of pure JavaScript (HTML) · ⭐ 33 · Updated 3 months ago
- A client-side vector search library that can embed, store, search, and cache vectors. Works in the browser and Node. It outperforms OpenA… · ⭐ 217 · Updated last year
- Vectra is a local vector database for Node.js with features similar to Pinecone but built using local files. · ⭐ 514 · Updated 3 months ago
- Vector Storage is a vector database that enables semantic similarity searches on text documents in the browser's local storage. It uses O… · ⭐ 234 · Updated 8 months ago
- On-device LLM Inference Powered by X-Bit Quantization · ⭐ 266 · Updated 2 weeks ago
- WebGPU LLM inference tuned by hand · ⭐ 151 · Updated 2 years ago
- Simple repo that compiles and runs llama2.c on the Web · ⭐ 57 · Updated last year
- Library to generate vector embeddings in NodeJS · ⭐ 139 · Updated 4 months ago
- A simple vector database built on idb · ⭐ 97 · Updated last year
- JavaScript bindings for the ggml-js library · ⭐ 43 · Updated 5 months ago
- Record and stream WAV audio data in the browser across all platforms · ⭐ 87 · Updated 9 months ago
- Universal LLM Interface · ⭐ 99 · Updated 2 months ago
- Tensor library for machine learning · ⭐ 275 · Updated 2 years ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM… · ⭐ 587 · Updated 6 months ago
- LLM-based code completion engine · ⭐ 194 · Updated 7 months ago
- C++ implementation for 💫StarCoder · ⭐ 457 · Updated last year
- LLM-powered lossless compression tool · ⭐ 288 · Updated last year
- A fully in-browser privacy solution to make Conversational AI privacy-friendly · ⭐ 228 · Updated 10 months ago
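Several of the entries above (EntityDB, Vectra, Vector Storage, and the other client-side vector search libraries) all implement the same core operation: store (embedding, document) pairs and rank documents by cosine similarity against a query embedding. The dependency-free TypeScript sketch below shows that operation in its simplest form; it is a generic illustration under the assumption of in-memory, brute-force search, not the API of any library listed here.

```typescript
// Generic illustration of the core operation behind in-browser vector databases:
// store (embedding, document) pairs and rank them by cosine similarity.
// This is NOT any particular library's API.

interface Item {
  id: string;
  vector: number[]; // embedding, e.g. produced by Transformers.js
  text: string;     // original document chunk
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

class InMemoryVectorStore {
  private items: Item[] = [];

  insert(item: Item): void {
    this.items.push(item);
  }

  // Brute-force top-k search; the real libraries add persistence
  // (IndexedDB, local files) and sometimes approximate indexes on top.
  query(queryVector: number[], topK = 5): Array<{ item: Item; score: number }> {
    return this.items
      .map((item) => ({ item, score: cosineSimilarity(queryVector, item.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK);
  }
}
```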