huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆4,156 · Updated last week
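Every repository in this list produces or consumes dense embedding vectors that are typically compared by cosine similarity. As a quick, dependency-free sketch of that comparison (the toy 3-dimensional vectors below are illustrative stand-ins; real embedding models emit hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical query/document embeddings for illustration only
query = [0.1, 0.9, 0.2]
doc = [0.2, 0.8, 0.1]
print(cosine_similarity(query, doc))
```

Serving engines such as text-embeddings-inference return vectors like these over HTTP; the similarity computation itself is the same regardless of which server produced them.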
Alternatives and similar repositories for text-embeddings-inference
Users interested in text-embeddings-inference are comparing it to the libraries listed below:
- Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali ☆2,532 · Updated this week
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding ☆2,462 · Updated this week
- MTEB: Massive Text Embedding Benchmark ☆2,944 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,528 · Updated 5 months ago
- Large Language Model Text Generation Inference ☆10,621 · Updated last month
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,740 · Updated 5 months ago
- Retrieval and Retrieval-augmented LLMs ☆10,772 · Updated 2 weeks ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,223 · Updated this week
- Developer-friendly, embedded retrieval engine for multimodal AI. Search More; Manage Less. ☆7,880 · Updated this week
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,944 · Updated 2 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,977 · Updated 6 months ago
- Supercharge Your LLM Application Evaluations 🚀 ☆11,302 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,918 · Updated this week
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,565 · Updated 5 months ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,036 · Updated 10 months ago
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,872 · Updated this week
- Efficient Retrieval Augmentation and Generation Framework ☆1,739 · Updated 9 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,542 · Updated last week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,261 · Updated 5 months ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆2,016 · Updated 9 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,865 · Updated last year
- Tools for merging pretrained large language models. ☆6,412 · Updated this week
- Retrieval Augmented Generation (RAG) chatbot powered by Weaviate ☆7,408 · Updated 3 months ago
- Blazingly fast LLM inference. ☆6,189 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,072 · Updated 4 months ago
- LangServe 🦜️🏓 ☆2,192 · Updated 2 weeks ago
- SGLang is a fast serving framework for large language models and vision language models. ☆19,718 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,698 · Updated this week
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and cro… ☆877 · Updated last month
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,699 · Updated 3 weeks ago