huggingface / text-embeddings-inference
A blazing fast inference solution for text embeddings models
☆4,218 · Updated last week
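A minimal sketch of how a client might call a running text-embeddings-inference (TEI) server over its HTTP `/embed` route; the local URL, port, served model, and payload shape below are assumptions for illustration, not a definitive client.

```python
# Minimal sketch: querying a text-embeddings-inference (TEI) server over HTTP.
# Assumes a TEI instance is already serving an embedding model on localhost:8080
# (the port, model, and batch shape are assumptions for this example).
import requests

TEI_URL = "http://localhost:8080"  # assumed local TEI endpoint


def embed(texts: list[str]) -> list[list[float]]:
    """Send a batch of texts to the TEI /embed route and return one vector per text."""
    response = requests.post(f"{TEI_URL}/embed", json={"inputs": texts})
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    vectors = embed(["What is deep learning?", "Serving text embeddings in production"])
    print(f"{len(vectors)} vectors of dimension {len(vectors[0])}")
```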
Alternatives and similar repositories for text-embeddings-inference
Users interested in text-embeddings-inference are comparing it to the libraries listed below.
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,549 · Updated last week
- Fast, accurate, lightweight Python library for state-of-the-art embeddings ☆2,506 · Updated last week
- Large Language Model Text Generation Inference ☆10,664 · Updated last week
- MTEB: Massive Text Embedding Benchmark ☆2,977 · Updated this week
- Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less. ☆8,043 · Updated this week
- Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,775 · Updated 6 months ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,575 · Updated 5 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,533 · Updated 6 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,950 · Updated this week
- Efficient Retrieval Augmentation and Generation Framework ☆1,747 · Updated 10 months ago
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,958 · Updated 3 months ago
- Retrieval and Retrieval-augmented LLMs ☆10,887 · Updated last month
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,901 · Updated last week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,286 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,989 · Updated 7 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,372 · Updated 3 months ago
- ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23) ☆3,706 · Updated last month
- Go ahead and axolotl questions ☆10,842 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,742 · Updated last week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,627 · Updated 3 weeks ago
- Tools for merging pretrained large language models. ☆6,468 · Updated 3 weeks ago
- Chat language model that can use tools and interpret the results ☆1,588 · Updated 2 weeks ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings ☆2,018 · Updated 10 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,272 · Updated 6 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,872 · Updated last year
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy ☆1,403 · Updated last week
- RayLLM - LLMs on Ray (Archived). Read README for more info. ☆1,264 · Updated 8 months ago
- Supercharge Your LLM Application Evaluations 🚀 ☆11,509 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆1,876 · Updated last year
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… ☆8,983 · Updated last week