FaizChishtie / vemcache
Vemcache is an in-memory vector database.
☆38 · Updated last year
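For context, an in-memory vector database like vemcache keeps embeddings in RAM and answers nearest-neighbour queries over them. The sketch below shows that general idea with a brute-force cosine-similarity store in Rust; the struct, method names, and storage layout are illustrative assumptions and do not reflect vemcache's actual API.

```rust
// Illustrative only: a generic in-memory vector store with brute-force
// cosine-similarity search. This is NOT vemcache's API; all names here
// are assumptions for demonstration.
use std::collections::HashMap;

struct VectorStore {
    vectors: HashMap<String, Vec<f32>>,
}

impl VectorStore {
    fn new() -> Self {
        Self { vectors: HashMap::new() }
    }

    /// Insert or overwrite a vector under a string key.
    fn insert(&mut self, key: &str, vector: Vec<f32>) {
        self.vectors.insert(key.to_string(), vector);
    }

    /// Return the top-k keys ranked by cosine similarity to `query`.
    fn search(&self, query: &[f32], k: usize) -> Vec<(String, f32)> {
        let mut scored: Vec<(String, f32)> = self
            .vectors
            .iter()
            .map(|(key, v)| (key.clone(), cosine_similarity(query, v)))
            .collect();
        // Sort descending by similarity score.
        scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
        scored.truncate(k);
        scored
    }
}

/// Cosine similarity between two equal-length vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

fn main() {
    let mut store = VectorStore::new();
    store.insert("doc1", vec![0.1, 0.9, 0.0]);
    store.insert("doc2", vec![0.8, 0.1, 0.1]);
    store.insert("doc3", vec![0.0, 1.0, 0.0]);

    // Query for the two nearest stored vectors.
    for (key, score) in store.search(&[0.05, 0.95, 0.0], 2) {
        println!("{key}: {score:.3}");
    }
}
```

Real vector databases replace the linear scan above with an approximate index such as HNSW to keep queries fast at scale; several of the repositories listed below take that approach.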
Alternatives and similar repositories for vemcache
Users interested in vemcache are comparing it to the libraries listed below.
- A tiny embedding database in pure Rust. ☆429 · Updated 2 years ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆47 · Updated last year
- ⚡️ Lightning-fast in-memory VectorDB written in Rust 🦀 ☆29 · Updated 10 months ago
- Fast serverless LLM inference, in Rust. ☆109 · Updated 2 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated 2 years ago
- ☆140 · Updated last year
- Rust implementation of Surya ☆64 · Updated 11 months ago
- Semantic search WebAssembly module ☆18 · Updated last year
- OpenAI-compatible API for serving the LLaMA-2 model ☆218 · Updated 2 years ago
- Rust framework for LLM orchestration ☆204 · Updated last year
- allms: One Rust Library to rule them aLLMs ☆107 · Updated 3 weeks ago
- Anthropic Rust SDK 🦀 with async support. ☆67 · Updated 3 weeks ago
- Structured outputs for LLMs ☆53 · Updated last year
- Code for fine-tuning LLMs with GRPO specifically for Rust programming, using Cargo as feedback ☆114 · Updated 10 months ago
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆45 · Updated last year
- Light WebUI for lm.rs ☆24 · Updated last year
- Rust containers for machine learning. ☆112 · Updated 2 years ago
- Minimalistic Rust implementation of the Model Context Protocol from Anthropic ☆63 · Updated 6 months ago
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆242 · Updated 5 months ago
- A Whisper CLI, built with Rust. ☆99 · Updated 2 years ago
- Unofficial Rust bindings to Apple's MLX framework ☆247 · Updated this week
- LLM Orchestrator built in Rust ☆285 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU ☆106 · Updated 2 years ago
- Library for doing RAG ☆81 · Updated last month
- build your own vector database -- the littlest hnsw ☆67 · Updated last year
- LLaMA from First Principles ☆51 · Updated 2 years ago
- Build tools for LLMs in Rust using Model Context Protocol ☆37 · Updated 11 months ago
- llm_utils: Basic LLM tools, best practices, and minimal abstraction. ☆48 · Updated 11 months ago
- A simple and clear way of hosting llama.cpp as a private HTTP API using Rust ☆27 · Updated last year
- Proof of concept for a generative AI application framework powered by WebAssembly and Extism ☆14 · Updated 2 years ago