tangledgroup / llama-cpp-wasm
WebAssembly (Wasm) Build and Bindings for llama.cpp
☆284 · Updated last year
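For orientation, here is a minimal sketch of how a browser app typically drives a Wasm build of llama.cpp: fetch a GGUF model over HTTP, let the module load it into Wasm memory, then stream generated tokens back through callbacks. The `LlamaCpp` class name, import path, constructor signature, and model URL below are assumptions for illustration only, not the confirmed API of this repository; consult its examples for the real entry points.

```ts
// Hypothetical usage sketch -- class name, import path, and callback
// signatures are assumptions; see the repository's examples for the real API.
import { LlamaCpp } from "llama-cpp-wasm"; // assumed package/entry point

// Any quantized GGUF model reachable over HTTP (placeholder URL).
const modelUrl = "https://example.com/models/tinyllama-1.1b-chat.Q4_K_M.gguf";

let output = "";

const llama = new LlamaCpp(
  modelUrl,
  // Called once the GGUF model has been downloaded and loaded into Wasm memory.
  () => llama.run({ prompt: "Explain WebAssembly in one sentence.", ctx_size: 2048 }),
  // Called for every generated token chunk (streaming).
  (chunk: string) => { output += chunk; },
  // Called when generation finishes.
  () => console.log("done:", output),
);
```

The callback-driven shape matters because model download and inference both run off the main thread (typically in a Web Worker), so the page stays responsive while tokens stream in.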
Alternatives and similar repositories for llama-cpp-wasm
Users interested in llama-cpp-wasm are comparing it to the libraries listed below.
- WebAssembly binding for llama.cpp - enabling in-browser LLM inference ☆933 · Updated last month
- TypeScript generator for llama.cpp grammars, derived directly from TypeScript interfaces ☆140 · Updated last year
- Run Large Language Models (LLMs) 🚀 directly in your browser! ☆220 · Updated last year
- Vercel and web-llm template to run Wasm models directly in the browser ☆164 · Updated last year
- JS tokenizer for LLaMA 3 and LLaMA 3.1 ☆116 · Updated 3 months ago
- A fully in-browser privacy solution to make conversational AI privacy-friendly ☆235 · Updated last year
- JS tokenizer for LLaMA 1 and 2 ☆361 · Updated last year
- Add local LLMs to your web or Electron apps! Powered by Rust + WebGPU ☆106 · Updated 2 years ago
- A JavaScript library that brings vector search and RAG to your browser! ☆157 · Updated last year
- WebGPU LLM inference tuned by hand ☆150 · Updated 2 years ago
- EntityDB is an in-browser vector database wrapping IndexedDB and Transformers.js over WebAssembly ☆236 · Updated 6 months ago
- Inference Llama 2 in one file of pure JavaScript (HTML) ☆34 · Updated 5 months ago
- JavaScript bindings for the ggml-js library ☆44 · Updated this week
- SemanticFinder - frontend-only live semantic search with transformers.js ☆304 · Updated 7 months ago
- Browser-compatible JS library for running language models ☆232 · Updated 3 years ago
- LLM-based code completion engine ☆190 · Updated 9 months ago
- Web-optimized vector database (written in Rust) ☆258 · Updated 8 months ago
- Generates grammar files from TypeScript for LLM generation ☆38 · Updated last year
- A JavaScript library (with TypeScript types) to parse metadata of GGML-based GGUF files ☆50 · Updated last year
- Library to generate vector embeddings in Node.js ☆156 · Updated last month
- Vectra is a local vector database for Node.js with features similar to Pinecone but built using local files ☆536 · Updated 6 months ago
- A client-side vector search library that can embed, store, search, and cache vectors. Works in the browser and Node. It outperforms OpenA… ☆221 · Updated last year
- Simple repo that compiles and runs llama2.c on the web ☆57 · Updated last year
- Tensor library for machine learning ☆273 · Updated 2 years ago
- Run LLMs in the browser with MLC / WebLLM ✨ ☆141 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆65 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆606 · Updated 8 months ago
- JavaScript implementation of LiteLLM ☆141 · Updated 7 months ago
- Vector Storage is a vector database that enables semantic similarity searches on text documents in the browser's local storage. It uses O… ☆237 · Updated 11 months ago
- ggml implementation of BERT ☆496 · Updated last year