tyrchen / qdrant-lib
Extract core logic from qdrant and make it available as a library.
★61 · Updated last year
Alternatives and similar repositories for qdrant-lib
Users interested in qdrant-lib are comparing it to the libraries listed below.
- Rust + Large Language Models: build AI services freely and easily. ★182 · Updated last year
- Rust port of sentence-transformers (https://github.com/UKPLab/sentence-transformers). ★123 · Updated last year
- pgvector support for Rust. ★189 · Updated 2 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust. ★79 · Updated last year
- Rust language bindings for Faiss. ★245 · Updated last month
- LLM orchestrator built in Rust. ★284 · Updated last year
- Rust implementation of the HNSW algorithm (Malkov-Yashunin). ★222 · Updated last month
- Approximate nearest neighbor search in Rust. ★165 · Updated 2 years ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face. ★46 · Updated last year
- ChronoMind: Redefining Vector Intelligence Through Time. ★72 · Updated 8 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package. ★250 · Updated 3 weeks ago
- Inference Llama 2 in one file of pure Rust. ★235 · Updated 2 years ago
- An approximate nearest neighbors library in Rust, based on random projections and LMDB and optimized for memory usage. ★297 · Updated 2 months ago
- Rust library for generating vector embeddings and reranking; a rewrite of qdrant/fastembed. ★709 · Updated 2 weeks ago
- High-performance framework for building interactive multi-agent workflow systems in Rust. ★216 · Updated last month
- llm_utils: basic LLM tools, best practices, and minimal abstraction. ★47 · Updated 10 months ago
- Rust client for txtai. ★113 · Updated 2 weeks ago
- An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum WebSockets, an SQLite database, and a Leptos (Was… ★138 · Updated last year
- Structured outputs for LLMs. ★52 · Updated last year
- Low-rank adaptation (LoRA) for Candle. ★169 · Updated 8 months ago
- In-memory vector store with efficient read and write performance for semantic caching and retrieval systems. Redis for semantic caching. ★376 · Updated last year
- Llama2 LLM ported to Rust Burn. ★278 · Updated last year
- Inference engine for GLiNER models, in Rust. ★82 · Updated last month
- The easiest Rust interface for local LLMs, and an interface for deterministic signals from probabilistic LLM vibes. ★240 · Updated 5 months ago
- AI gateway and observability server written in Rust, designed to help optimize multi-agent workflows. ★65 · Updated last year
- Library for doing RAG. ★80 · Updated last week
- HNSW ANN from the paper "Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs". ★251 · Updated 4 months ago
- Rust client for the Qdrant vector search engine. ★366 · Updated last month
- Framework to build data pipelines declaratively. ★93 · Updated last month
- Use multiple LLM backends in a single crate, with simple builder-based configuration and built-in prompt chaining & templating. ★138 · Updated 7 months ago