huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆3,707 · Updated last week
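As a rough sketch of how a deployed text-embeddings-inference server is typically queried (assuming a TEI instance is already running locally on port 8080, e.g. via the project's Docker image, and exposes the `/embed` route described in its documentation; the URL and payload shape should be verified against the version in use):

```python
# Minimal sketch: querying a locally running text-embeddings-inference server.
# Assumes the server was started on port 8080 and serves the /embed route;
# adjust the URL, port, and payload to match your deployment.
import requests

TEI_URL = "http://localhost:8080/embed"  # assumed local deployment


def embed(texts):
    """Return one embedding vector per input string."""
    response = requests.post(TEI_URL, json={"inputs": texts}, timeout=30)
    response.raise_for_status()
    return response.json()  # list of float vectors, one per input


if __name__ == "__main__":
    vectors = embed([
        "What is text-embeddings-inference?",
        "A blazing fast inference solution for text embeddings models",
    ])
    print(len(vectors), "embeddings of dimension", len(vectors[0]))
```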
Alternatives and similar repositories for text-embeddings-inference
Users interested in text-embeddings-inference are comparing it to the libraries listed below.
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,262 · Updated this week
- Large Language Model Text Generation Inference ☆10,249 · Updated this week
- MTEB: Massive Text Embedding Benchmark (see the evaluation sketch after this list) ☆2,626 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,591 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,028 · Updated last month
- Go ahead and axolotl questions ☆9,715 · Updated this week
- Tools for merging pretrained large language models. ☆5,853 · Updated last week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,511 · Updated last month
- SGLang is a fast serving framework for large language models and vision language models. ☆15,421 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,773 · Updated this week
- Enforce the output format (JSON Schema, Regex etc) of a language model ☆1,821 · Updated 4 months ago
- An easy-to-use LLMs quantization package with user-friendly APIs, based on GPTQ algorithm. ☆4,877 · Updated 2 months ago
- Retrieval and Retrieval-augmented LLMs ☆9,973 · Updated 3 weeks ago
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,191 · Updated 3 months ago
- Developer-friendly, embedded retrieval engine for multimodal AI. Search More; Manage Less. ☆6,698 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,836 · Updated last year
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,216 · Updated 3 weeks ago
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding ☆2,163 · Updated this week
- Efficient Retrieval Augmentation and Generation Framework ☆1,578 · Updated 5 months ago
- Supercharge Your LLM Application Evaluations 🚀 ☆9,607 · Updated last week
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,971 · Updated 5 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,426 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,052 · Updated 10 months ago
- Blazingly fast LLM inference. ☆5,764 · Updated this week
- Structured Text Generation ☆11,963 · Updated this week
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆2,113 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,196 · Updated last month
- A language for constraint-guided and efficient LLM programming. ☆3,974 · Updated last month
- An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆4,607 · Updated 3 weeks ago
- PyTorch native post-training library ☆5,287 · Updated this week
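Referring back to the MTEB entry above: as a minimal sketch of how that benchmark is commonly driven (assuming the `mteb` and `sentence-transformers` packages are installed; the model name, task name, and output folder here are illustrative, and the exact API may differ between versions):

```python
# Minimal sketch: scoring an embedding model on a single MTEB task.
# The task name and output folder are illustrative; check the benchmark's
# documentation for the task list and current API of your installed version.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])  # one classification task
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
print(results)
```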