huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆4,476 · Updated last week
Alternatives and similar repositories for text-embeddings-inference
Users interested in text-embeddings-inference are comparing it to the libraries listed below.
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,661 · Updated this week
- Fast, Accurate, Lightweight Python library to make State-of-the-Art Embeddings ☆2,687 · Updated last month
- Large Language Model Text Generation Inference ☆10,757 · Updated last month
- MTEB: Massive Text Embedding Benchmark ☆3,106 · Updated this week
- Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less. ☆8,788 · Updated this week
- Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease… ☆3,849 · Updated 8 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,718 · Updated 8 months ago
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,986 · Updated 5 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,084 · Updated 2 weeks ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,576 · Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data. ☆3,273 · Updated 2 months ago
- Efficient Retrieval Augmentation and Generation Framework ☆1,766 · Updated 3 weeks ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,594 · Updated last month
- Chat language model that can use tools and interpret the results ☆1,590 · Updated 2 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,312 · Updated 9 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,028 · Updated 10 months ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,823 · Updated 3 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,440 · Updated 2 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… ☆3,279 · Updated 3 weeks ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,084 · Updated last year
- Lite & super-fast re-ranking for your search & retrieval pipelines. Supports SoTA listwise and pairwise reranking based on LLMs and cro… ☆935 · Updated last month
- The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval ☆1,572 · Updated last year
- Retrieval and Retrieval-augmented LLMs ☆11,256 · Updated last month
- Fast, flexible LLM inference ☆6,508 · Updated this week
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆2,022 · Updated last year
- Python client for the Qdrant vector search engine ☆1,211 · Updated 3 weeks ago
- Fast lexical search implementing BM25 in Python using NumPy, Numba and SciPy ☆1,477 · Updated last week
- LangServe 🦜️🏓 ☆2,259 · Updated 3 months ago
- Supercharge Your LLM Application Evaluations 🚀 ☆12,526 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated 2 years ago
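Most of the projects above either produce or consume dense embedding vectors like the ones text-embeddings-inference serves. As a minimal sketch, here is one way to query a locally running TEI server over its `/embed` HTTP endpoint and compare two results with cosine similarity. The URL and port are assumptions (8080 is a common local default); check your own deployment before using them.

```python
import json
import math
import urllib.request

# Assumed local TEI endpoint; adjust host/port for your deployment.
TEI_EMBED_URL = "http://localhost:8080/embed"


def embed(texts):
    """POST a batch of texts to a running TEI server's /embed endpoint
    and return the list of embedding vectors it responds with."""
    payload = json.dumps({"inputs": texts}).encode("utf-8")
    req = urllib.request.Request(
        TEI_EMBED_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Usage (requires a TEI server running at TEI_EMBED_URL):
#   vecs = embed(["what does TEI do?", "TEI serves embedding models"])
#   print(cosine(vecs[0], vecs[1]))
```

The same `cosine` helper works on vectors from any of the embedding libraries listed above, since they all return plain float arrays.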