huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆3,543 · Updated last week
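For quick orientation, here is a minimal sketch of querying a running text-embeddings-inference server over HTTP. It assumes a local deployment exposing the default `/embed` route on port 8080 (e.g. via the Docker image); adjust the URL and inputs for your setup.

```python
import requests

# Minimal sketch, assuming a text-embeddings-inference server is already
# running locally with its HTTP port mapped to 8080.
TEI_URL = "http://127.0.0.1:8080/embed"

resp = requests.post(
    TEI_URL,
    json={"inputs": ["What is Deep Learning?", "Text embeddings power semantic search."]},
    timeout=30,
)
resp.raise_for_status()

# The response is a list with one embedding vector per input string.
for vector in resp.json():
    print(len(vector), vector[:4])
```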
Alternatives and similar repositories for text-embeddings-inference
Users interested in text-embeddings-inference are comparing it to the libraries listed below.
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,164 · Updated this week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,442 · Updated 3 months ago
- Fast, Accurate, Lightweight Python library to make State of the Art Embeddings ☆2,057 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,976 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,700 · Updated this week
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,800 · Updated 2 months ago
- MTEB: Massive Text Embedding Benchmark ☆2,505 · Updated last week
- Supercharge Your LLM Application Evaluations 🚀 ☆9,136 · Updated this week
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,411 · Updated last week
- Tools for merging pretrained large language models. ☆5,646 · Updated last week
- Large Language Model Text Generation Inference ☆10,119 · Updated this week
- Efficient Retrieval Augmentation and Generation Framework ☆1,536 · Updated 4 months ago
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,076 · Updated 11 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,335 · Updated this week
- Retrieval and Retrieval-augmented LLMs ☆9,610 · Updated last month
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,162 · Updated last week
- Developer-friendly, embedded retrieval engine for multimodal AI. Search More; Manage Less. ☆6,401 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆14,392 · Updated this week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,077 · Updated 2 months ago
- Structured Text Generation ☆11,560 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆3,935 · Updated 9 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,823 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,377 · Updated last week
- The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval ☆1,214 · Updated 8 months ago
- Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines. ☆11,190 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,171 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,846 · Updated last month
- A framework for few-shot evaluation of language models. ☆8,904 · Updated last week
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,949 · Updated 4 months ago
- PyTorch native post-training library ☆5,186 · Updated this week