tantaraio / voy
🕸️🦀 A WASM vector similarity search written in Rust
⭐944 · Updated last year
Alternatives and similar repositories for voy:
Users who are interested in voy are comparing it to the libraries listed below.
- Vectra is a local vector database for Node.js with features similar to Pinecone but built using local files. ⭐465 · Updated 2 weeks ago
- A SQLite extension for efficient vector search, based on Faiss! ⭐1,835 · Updated 11 months ago
- 🧠 Motorhead is a memory and information retrieval server for LLMs. ⭐873 · Updated last week
- Web-optimized vector database (written in Rust). ⭐224 · Updated last month
- A cross-platform browser ML framework. ⭐686 · Updated 4 months ago
- A hyper-fast local vector database for use with LLM Agents. Now accepting SAFEs at $135M cap. ⭐1,396 · Updated 2 months ago
- A tiny embedding database in pure Rust. ⭐400 · Updated last year
- Run modern deep learning models in the browser. ⭐831 · Updated last year
- A client-side vector search library that can embed, store, search, and cache vectors. Works on the browser and Node. It outperforms OpenA… ⭐197 · Updated 10 months ago
- Vector Storage is a vector database that enables semantic similarity searches on text documents in the browser's local storage. It uses O… ⭐221 · Updated 4 months ago
- A reactive runtime for building durable AI agents ⭐1,311 · Updated 3 months ago
- AICI: Prompts as (Wasm) Programs ⭐2,013 · Updated 2 months ago
- Structured extraction for LLMs ⭐700 · Updated 2 months ago
- A realtime CRDT-based document store, backed by S3. ⭐785 · Updated 2 months ago
- Use your own AI models on the web ⭐929 · Updated 8 months ago
- Data framework for your LLM applications, with a focus on server-side solutions. ⭐2,525 · Updated this week
- JavaScript/TypeScript SDK for the Qdrant Vector Database ⭐314 · Updated last month
- Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. Just useLLM(). ⭐690 · Updated last year
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference ⭐653 · Updated last month
- JS port and JS/WASM bindings for openai/tiktoken ⭐836 · Updated 2 months ago
- Python & JS/TS SDK for running AI-generated code/code interpreting in your AI app ⭐1,662 · Updated last week
- Custom AI assistant platform to speed up your work. ⭐1,093 · Updated this week
- A realtime serving engine for Data-Intensive Generative AI Applications ⭐987 · Updated last week
- Low latency JSON generation using LLMs ⚡️ ⭐398 · Updated last year
- Vercel and web-llm template to run wasm models directly in the browser. ⭐146 · Updated last year
- Scalable, Low-latency and Hybrid-enabled Vector Search in Postgres. Revolutionize Vector Search, not Database. ⭐2,008 · Updated last month
- Library to generate vector embeddings in NodeJS ⭐116 · Updated last week
- Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp and rwkv.cpp; works locally on your laptop CPU. Supports llam… ⭐870 · Updated last year
- Self-hosted version of OpenAI's new stateful Assistants API ⭐537 · Updated 3 weeks ago
- The TypeScript library for building AI applications. ⭐1,252 · Updated 8 months ago