mixedbread-ai / batched
The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching of inference workloads.
☆149 · Updated 2 months ago
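The sketch below illustrates the dynamic-batching idea the library is built around: individual requests are queued and served by a single batched call once the batch fills up or a short wait budget expires. It is a conceptual illustration only; `DynamicBatcher`, `max_batch_size`, and `max_wait_s` are hypothetical names, not batched's actual API — see the repository README for real usage.

```python
# Minimal sketch of dynamic batching (illustrative; not the batched library's API).
import asyncio


class DynamicBatcher:
    """Collects individual requests and flushes them as one batch call."""

    def __init__(self, batch_handler, max_batch_size=32, max_wait_s=0.005):
        self._handler = batch_handler          # fn(list of items) -> list of results
        self._max_batch_size = max_batch_size  # flush when this many items queue up
        self._max_wait_s = max_wait_s          # ...or after this much time passes
        self._queue: asyncio.Queue = asyncio.Queue()
        self._worker = None

    async def submit(self, item):
        """Enqueue one item and await its individual result."""
        if self._worker is None:
            self._worker = asyncio.create_task(self._run())
        fut = asyncio.get_running_loop().create_future()
        await self._queue.put((item, fut))
        return await fut

    async def _run(self):
        while True:
            # Block for the first item, then greedily gather more until the
            # batch is full or the wait budget is spent.
            item, fut = await self._queue.get()
            batch, futures = [item], [fut]
            deadline = asyncio.get_running_loop().time() + self._max_wait_s
            while len(batch) < self._max_batch_size:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    item, fut = await asyncio.wait_for(self._queue.get(), timeout)
                except asyncio.TimeoutError:
                    break
                batch.append(item)
                futures.append(fut)
            # One batched call serves every queued request in this window.
            results = await asyncio.to_thread(self._handler, batch)
            for f, r in zip(futures, results):
                f.set_result(r)


async def main():
    # Toy "model": squares each input; in practice this would be a batched
    # forward pass of an embedding or classification model.
    batcher = DynamicBatcher(lambda xs: [x * x for x in xs], max_batch_size=8)
    outputs = await asyncio.gather(*(batcher.submit(i) for i in range(20)))
    print(outputs)


if __name__ == "__main__":
    asyncio.run(main())
```

Concurrent callers each see a plain request/response interface, while the worker amortizes model overhead by grouping whatever arrived within the wait window into one call.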
Alternatives and similar repositories for batched
Users interested in batched are comparing it to the libraries listed below.
- Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, imp… ☆189 · Updated last year
- Generalist and Lightweight Model for Text Classification ☆162 · Updated 3 months ago
- High-Performance Engine for Multi-Vector Search ☆160 · Updated last week
- minimal pytorch implementation of bm25 (with sparse tensors) ☆104 · Updated last year
- ☆159 · Updated 10 months ago
- Notebooks for training universal 0-shot classifiers on many different tasks ☆137 · Updated 9 months ago
- Late Interaction Models Training & Retrieval ☆608 · Updated last week
- ☆71 · Updated 3 months ago
- ☆41 · Updated 2 months ago
- NLP with Rust for Python 🦀🐍 ☆65 · Updated 4 months ago
- Crispy reranking models by Mixedbread ☆36 · Updated 2 weeks ago
- ☆199 · Updated last year
- ☆210 · Updated 3 months ago
- experiments with inference on llama ☆104 · Updated last year
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes ☆212 · Updated 2 weeks ago
- Simple UI for debugging correlations of text embeddings ☆292 · Updated 4 months ago
- Pre-train Static Word Embeddings ☆85 · Updated 3 weeks ago
- Python API for https://vespa.ai, the open big data serving engine ☆143 · Updated last week
- ☆136 · Updated last month
- Truly flash implementation of the DeBERTa disentangled attention mechanism ☆65 · Updated this week
- code for training & evaluating Contextual Document Embedding models ☆197 · Updated 4 months ago
- Data models for Hugging Face tokenizers ☆77 · Updated last week
- Efficient vector database for hundreds of millions of embeddings ☆208 · Updated last year
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆138 · Updated last year
- XTR/WARP (SIGIR'25) is an extremely fast and accurate retrieval engine based on Stanford's ColBERTv2/PLAID and Google DeepMind's XTR ☆166 · Updated 5 months ago
- 📝 Reference-Free automatic summarization evaluation with potential hallucination detection ☆103 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- ☆49 · Updated 7 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆274 · Updated last year
- Neural Search ☆363 · Updated 6 months ago