huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆4,357 · Updated this week
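For orientation, here is a minimal sketch of querying a running text-embeddings-inference (TEI) server over its HTTP `/embed` endpoint. The host, port, and inputs below are assumptions for illustration (a locally launched container listening on port 8080), not details taken from this listing:

```python
# Minimal sketch: request embeddings from a locally running TEI server.
# Assumes the server was started separately (e.g. via Docker) and listens
# on localhost:8080 -- host, port, and example texts are illustrative only.
import requests

TEI_URL = "http://localhost:8080"  # assumed address of the TEI instance


def embed(texts):
    """POST a list of strings to /embed and return one vector per input."""
    response = requests.post(
        f"{TEI_URL}/embed",
        json={"inputs": texts},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # list of embeddings (lists of floats)


if __name__ == "__main__":
    vectors = embed(["What is Deep Learning?", "Text embeddings in production"])
    print(len(vectors), "embeddings of dimension", len(vectors[0]))
```

The same request pattern applies to the server's other routes; check the repository's API documentation for the exact schemas.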
Alternatives and similar repositories for text-embeddings-inference
Users interested in text-embeddings-inference are comparing it to the libraries listed below.
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,606 · Updated 3 weeks ago
- MTEB: Massive Text Embedding Benchmark (see the evaluation sketch after this list) ☆3,051 · Updated this week
- Fast, Accurate, Lightweight Python library to make State of the Art Embeddings ☆2,589 · Updated 2 weeks ago
- Large Language Model Text Generation Inference ☆10,720 · Updated 2 weeks ago
- Enforce the output format (JSON Schema, Regex etc) of a language model ☆1,977 · Updated 4 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,015 · Updated 2 weeks ago
- Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less. ☆8,383 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,466 · Updated last week
- Efficient Retrieval Augmentation and Generation Framework ☆1,756 · Updated 11 months ago
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,805 · Updated 7 months ago
- Retrieval and Retrieval-augmented LLMs ☆11,082 · Updated 3 weeks ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,661 · Updated 7 months ago
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆1,877 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,299 · Updated 7 months ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,586 · Updated 2 weeks ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,017 · Updated 8 months ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆2,022 · Updated 11 months ago
- ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23) ☆3,748 · Updated 2 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,795 · Updated 2 weeks ago
- Superfast AI decision making and intelligent processing of multi-modal data. ☆3,141 · Updated last month
- Supercharge Your LLM Application Evaluations 🚀 ☆12,050 · Updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,885 · Updated last year
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and cro… ☆914 · Updated last week
- Go ahead and axolotl questions ☆11,024 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,867 · Updated 3 weeks ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,085 · Updated 6 months ago
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy ☆1,444 · Updated 2 weeks ago
- Tools for merging pretrained large language models. ☆6,647 · Updated 3 weeks ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,237 · Updated 2 weeks ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,403 · Updated 3 weeks ago
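As referenced in the MTEB entry above, benchmarking is the usual next step after serving an embedding model. The sketch below uses the classic MTEB Python API with a sentence-transformers model; the model name, task, and output folder are placeholders, and newer MTEB releases may expose a slightly different interface:

```python
# Minimal sketch of evaluating an embedding model on one MTEB task.
# Model and task names are placeholders; treat this as illustrative,
# not as the canonical API for the current MTEB release.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
print(results)
```

To benchmark a model served by one of the inference engines in this list instead, the usual approach is to wrap its HTTP client in a small object exposing an `encode(sentences)` method, since that is the interface MTEB expects.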