huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆3,175 · Updated 3 weeks ago
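For context, a minimal sketch of how a running text-embeddings-inference (TEI) server is typically queried over its HTTP `/embed` route. The host, port, and model shown are assumptions for illustration; adjust them to your own deployment:

```python
# Minimal sketch: query a running text-embeddings-inference (TEI) server.
# Assumes a server has already been started (e.g. via the project's Docker
# image with a model such as BAAI/bge-base-en-v1.5) and is listening on
# localhost:8080 -- the URL and model choice here are illustrative only.
import requests

TEI_URL = "http://localhost:8080"  # assumed local TEI endpoint


def embed(texts):
    """Request embeddings for a batch of texts from the /embed route."""
    response = requests.post(f"{TEI_URL}/embed", json={"inputs": texts})
    response.raise_for_status()
    return response.json()  # list of embedding vectors, one per input text


if __name__ == "__main__":
    vectors = embed(["What is text-embeddings-inference?", "A fast embedding server."])
    print(len(vectors), "embeddings of dimension", len(vectors[0]))
```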
Alternatives and similar repositories for text-embeddings-inference:
Users interested in text-embeddings-inference are comparing it to the libraries listed below.
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding ☆1,782 · Updated this week
- Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali ☆1,816 · Updated last week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,266 · Updated last week
- MTEB: Massive Text Embedding Benchmark ☆2,193 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,362 · Updated last week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,702 · Updated 3 weeks ago
- Large Language Model Text Generation Inference ☆9,777 · Updated this week
- Efficient Retrieval Augmentation and Generation Framework ☆1,458 · Updated last month
- [EMNLP'23, ACL'24] To speed up LLMs' inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆4,879 · Updated 3 weeks ago
- Developer-friendly, serverless vector database for AI applications. Easily add long-term memory to your LLM apps! ☆5,643 · Updated this week
- Supercharge Your LLM Application Evaluations 🚀 ☆8,214 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆5,603 · Updated this week
- Retrieval and Retrieval-augmented LLMs ☆8,555 · Updated last week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,790 · Updated last year
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,295 · Updated last week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,448 · Updated this week
- SGLang is a fast serving framework for large language models and vision language models. ☆10,325 · Updated this week
- This includes the original implementation of SELF-RAG: Learning to Retrieve, Generate and Critique through self-reflection by Akari Asai,… ☆1,976 · Updated 8 months ago
- Tools for merging pretrained large language models. ☆5,260 · Updated last week
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆1,842 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆1,946 · Updated last month
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆2,911 · Updated this week
- Enforce the output format (JSON Schema, Regex etc) of a language model ☆1,704 · Updated this week
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆1,915 · Updated last month
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆3,970 · Updated last week
- Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines. ☆10,181 · Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,396 · Updated this week
- LangServe 🦜️🏓 ☆2,015 · Updated last month
- RayLLM - LLMs on Ray ☆1,257 · Updated 8 months ago
- ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23) ☆3,247 · Updated 3 months ago