michaelfeil / infinity
Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP, and ColPali.
☆1,964 · Updated last week
Alternatives and similar repositories for infinity:
Users interested in infinity are comparing it to the libraries listed below.
- A blazing fast inference solution for text embeddings models ☆3,381 · Updated this week
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding ☆1,916 · Updated last week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,363 · Updated last month
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,359 · Updated 2 weeks ago
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and cro… ☆778 · Updated 4 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆2,886 · Updated 3 weeks ago
- Superfast AI decision making and intelligent processing of multi-modal data. ☆2,507 · Updated last week
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,760 · Updated last month
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,601 · Updated last week
- High-performance retrieval engine for unstructured data ☆1,292 · Updated this week
- ☆846 · Updated 6 months ago
- The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol. ☆1,662 · Updated this week
- Open-source tool to visualise your RAG 🔮 ☆1,119 · Updated 3 months ago
- Developer APIs to Accelerate LLM Projects ☆1,620 · Updated 5 months ago
- Fast lexical search implementing BM25 in Python using Numpy, Numba and Scipy ☆1,087 · Updated last week
- Chat language model that can use tools and interpret the results ☆1,532 · Updated last week
- Use late-interaction multi-modal models such as ColPali in just a few lines of code. ☆757 · Updated 2 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,048 · Updated 3 weeks ago
- Fast State-of-the-Art Static Embeddings ☆1,136 · Updated this week
- Efficient Retrieval Augmentation and Generation Framework ☆1,500 · Updated 2 months ago
- Generalist and Lightweight Model for Named Entity Recognition (Extract any entity types from texts) @ NAACL 2024 ☆1,905 · Updated last week
- MTEB: Massive Text Embedding Benchmark ☆2,363 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,162 · Updated this week
- Knowledge Agents and Management in the Cloud ☆3,853 · Updated this week
- The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval ☆1,159 · Updated 7 months ago
- Optimizing inference proxy for LLMs ☆2,124 · Updated 2 weeks ago
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆4,984 · Updated 3 weeks ago
- ☆699 · Updated last month
- Large-scale LLM inference engine ☆1,368 · Updated this week
- Cohere Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications. ☆3,019 · Updated 2 weeks ago
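Serving engines like the ones listed above return embedding vectors; a common client-side step afterward is to rank candidate documents by cosine similarity against a query embedding. A minimal, dependency-free sketch (the 3-dimensional vectors here are toy values standing in for real model output, and the function names are illustrative, not part of any of these libraries' APIs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, doc_vecs):
    """Return document indices sorted by descending similarity to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Toy vectors standing in for embeddings returned by a serving engine.
query = [0.1, 0.9, 0.2]
docs = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.2, 0.8, 0.1]]
print(rank_by_similarity(query, docs))  # → [2, 0, 1], most similar first
```

Dedicated rerankers (several appear in the list above) typically replace this vector-similarity step with a cross-encoder that scores query–document pairs directly, trading throughput for accuracy.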