do-me / SemanticFinder
SemanticFinder - frontend-only live semantic search with transformers.js
☆299 · Updated 6 months ago
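
SemanticFinder's approach is fully client-side: Transformers.js loads a small embedding model in the browser and text chunks are scored against the query locally, with no server round-trips. Below is a minimal sketch of that pattern, assuming the `@xenova/transformers` package and the `Xenova/all-MiniLM-L6-v2` model (illustrative choices, not necessarily SemanticFinder's defaults):

```ts
// Minimal sketch of in-browser semantic search with Transformers.js.
// Package and model are assumptions for illustration, not SemanticFinder's exact setup.
import { pipeline } from '@xenova/transformers';

// Load a small sentence-embedding model once; weights are downloaded and cached by the browser.
const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

// Embed a piece of text into a normalized vector.
async function embedText(text: string): Promise<Float32Array> {
  const output = await embedder(text, { pooling: 'mean', normalize: true });
  return output.data as Float32Array;
}

// With normalized vectors, cosine similarity reduces to a dot product.
function dot(a: Float32Array, b: Float32Array): number {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += a[i] * b[i];
  return s;
}

const query = await embedText('how do transformers work?');
const chunk = await embedText('Transformers are neural networks built on attention.');
console.log('similarity:', dot(query, chunk));
```
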
Alternatives and similar repositories for SemanticFinder
Users who are interested in SemanticFinder are comparing it to the libraries listed below; a library-agnostic sketch of the embed-and-search pattern most of them share follows the list.
- A client-side vector search library that can embed, store, search, and cache vectors. Works in the browser and Node. It outperforms OpenA… ☆220 · Updated last year
- Vector Storage is a vector database that enables semantic similarity searches on text documents in the browser's local storage. It uses O… ☆234 · Updated 9 months ago
- TypeScript generator for llama.cpp Grammar directly from TypeScript interfaces ☆140 · Updated last year
- EntityDB is an in-browser vector database wrapping IndexedDB and Transformers.js over WebAssembly ☆217 · Updated 4 months ago
- Browser-compatible JS library for running language models ☆231 · Updated 3 years ago
- JS tokenizer for LLaMA 1 and 2 ☆359 · Updated last year
- A simple vector database built on idb ☆99 · Updated last year
- JavaScript implementation of LiteLLM. ☆139 · Updated 6 months ago
- An API to transcribe audio with OpenAI's Whisper Large v3! ☆305 · Updated 10 months ago
- A JavaScript library that brings vector search and RAG to your browser! ☆145 · Updated last year
- Fully typed & consistent chat APIs for OpenAI, Anthropic, Groq, and Azure's chat models for browser, edge, and Node environments. ☆169 · Updated last year
- Run Large Language Models (LLMs) 🚀 directly in your browser! ☆217 · Updated last year
- JS tokenizer for LLaMA 3 and LLaMA 3.1 ☆116 · Updated 2 months ago
- Shush is an app that deploys a WhisperV3 model with Flash Attention v2 on Modal and makes requests to it via a NextJS app ☆216 · Updated last year
- Vercel and web-llm template to run wasm models directly in the browser. ☆161 · Updated last year
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆65 · Updated last year
- ☆112 · Updated last year
- Add local LLMs to your web or Electron apps! Powered by Rust + WebGPU ☆104 · Updated 2 years ago
- Enforce structured output from LLMs 100% of the time ☆250 · Updated last year
- Parallel wasm Barnes-Hut t-SNE implementation written in Rust. ☆21 · Updated last year
- Chrome extension to chat with a page using a local LLM (Llama, Mistral 7B, etc.) ☆180 · Updated last year
- A dictionary, but it shows you the position in embedding space relative to some synonyms/antonyms instead of a definition. ☆74 · Updated 8 months ago
- WebAssembly (Wasm) build and bindings for llama.cpp ☆281 · Updated last year
- A fully in-browser privacy solution to make Conversational AI privacy-friendly ☆230 · Updated 11 months ago
- Rerank library for easy reranking of results ☆50 · Updated last year
- Edge full-stack LLM platform. Written in Rust ☆381 · Updated last year
- ☆135 · Updated last year
- Automatic sentence highlights based on their significance to the document ☆191 · Updated last year
- Vectra is a local vector database for Node.js with features similar to Pinecone but built using local files. ☆522 · Updated 4 months ago
- A fast, light, open chat UI with full tool use support across many models ☆219 · Updated 4 months ago
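
As noted above, most of the browser-side vector stores in this list (the client-side vector search library, Vector Storage, EntityDB, Vectra, and the idb-based store) share one basic pattern: keep `(id, text, embedding)` records in local storage or IndexedDB and rank them by cosine similarity against a query embedding. Here is a library-agnostic sketch of that search step; `VectorRecord` and `topK` are illustrative names, not any listed library's API:

```ts
// Generic client-side vector search: rank stored embeddings by cosine similarity.
// VectorRecord and topK are illustrative names, not tied to any library listed above.
interface VectorRecord {
  id: string;
  text: string;
  embedding: Float32Array;
}

function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force top-k search; fine for the few thousand vectors a browser-side store typically holds.
function topK(query: Float32Array, records: VectorRecord[], k = 5): VectorRecord[] {
  return records
    .map((r) => ({ r, score: cosineSimilarity(query, r.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ r }) => r);
}
```
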