KevKibe / memvectordb
⚡️Lightning-fast in-memory VectorDB written in Rust🦀
☆22 · Updated 4 months ago
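The tagline describes an in-memory vector database. As a rough, generic illustration of the core operation such a store performs, here is a minimal, self-contained Rust sketch of brute-force cosine-similarity search over vectors held in memory. This is not memvectordb's actual API; the `InMemoryIndex` type and its methods are hypothetical and exist only for this example.

```rust
// Generic sketch of an in-memory vector store: brute-force
// cosine-similarity search over embeddings kept in a Vec.
// Hypothetical types for illustration, not memvectordb's API.

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

struct InMemoryIndex {
    entries: Vec<(String, Vec<f32>)>, // (id, embedding)
}

impl InMemoryIndex {
    fn new() -> Self {
        Self { entries: Vec::new() }
    }

    fn insert(&mut self, id: &str, embedding: Vec<f32>) {
        self.entries.push((id.to_string(), embedding));
    }

    // Return the top_k ids ranked by cosine similarity to the query vector.
    fn search(&self, query: &[f32], top_k: usize) -> Vec<(String, f32)> {
        let mut scored: Vec<(String, f32)> = self
            .entries
            .iter()
            .map(|(id, emb)| (id.clone(), cosine_similarity(query, emb)))
            .collect();
        scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
        scored.truncate(top_k);
        scored
    }
}

fn main() {
    let mut index = InMemoryIndex::new();
    index.insert("doc1", vec![0.1, 0.9, 0.0]);
    index.insert("doc2", vec![0.8, 0.1, 0.1]);

    for (id, score) in index.search(&[0.2, 0.8, 0.0], 1) {
        println!("{id}: {score:.3}");
    }
}
```

Real stores, including several of the projects listed below, typically layer persistence, an HTTP API, and approximate-nearest-neighbour indexing on top of this basic linear scan.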
Alternatives and similar repositories for memvectordb
Users interested in memvectordb are comparing it to the libraries listed below:
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆80 · Updated last year
- Neural search for websites, docs, articles - online! ☆134 · Updated last month
- Run Python functions on desktop, mobile, web, and in the cloud. https://fxn.ai/explore ☆64 · Updated last week
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆37 · Updated last year
- ☆137 · Updated last year
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆62 · Updated last year
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… ☆65 · Updated last month
- Tensor library for Zig ☆11 · Updated 7 months ago
- Fast serverless LLM inference, in Rust. ☆87 · Updated 4 months ago
- Ask shortgpt for instant and concise answers ☆13 · Updated 2 years ago
- This repository has code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback ☆97 · Updated 4 months ago
- A complete (gRPC service and lib) Rust inference with multilingual embedding support. This version leverages the power of Rust for both GR… ☆39 · Updated 10 months ago
- ☆26 · Updated 7 months ago
- Light WebUI for lm.rs ☆24 · Updated 8 months ago
- Rust implementation of Surya ☆58 · Updated 4 months ago
- Library for doing RAG ☆74 · Updated last month
- Utilities for loading and running text embeddings with ONNX ☆44 · Updated 11 months ago
- Using langchain, deeplake and openai to create a Q&A on the Mojo lang programming manual ☆22 · Updated last year
- A Fish Speech implementation in Rust, with Candle.rs ☆92 · Updated last month
- Ask questions, get insights from repos ☆82 · Updated 11 months ago
- LLM-based file organizer ☆26 · Updated 2 years ago
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆40 · Updated 8 months ago
- LLM inference API in Rust. It also has a Streamlit app that sends requests to the running Rust API. ☆20 · Updated last year
- OpenAI-compatible API for serving the LLAMA-2 model ☆218 · Updated last year
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆58 · Updated last year
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆182 · Updated 3 weeks ago
- ☆10 · Updated last year
- Implement LLaVA using Candle ☆15 · Updated last year
- A high-performance batching router that optimises max throughput for text inference workloads ☆16 · Updated last year
- ☆15 · Updated last year