AnswerDotAI / fastkmeans
☆63 · Updated 3 weeks ago
Alternatives and similar repositories for fastkmeans
Users interested in fastkmeans are comparing it to the libraries listed below.
- minimal pytorch implementation of bm25 (with sparse tensors) ☆104 · Updated last year
- NLP with Rust for Python 🦀🐍 ☆64 · Updated 2 months ago
- ☆49 · Updated 5 months ago
- High-Performance Engine for Multi-Vector Search ☆130 · Updated last month
- An introduction to LLM Sampling ☆79 · Updated 7 months ago
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆142 · Updated 2 weeks ago
- Truly flash implementation of the DeBERTa disentangled attention mechanism. ☆62 · Updated 2 months ago
- Pre-train Static Word Embeddings ☆84 · Updated 2 months ago
- Genalog is an open source, cross-platform python package allowing generation of synthetic document images with custom degradations and te… ☆42 · Updated last year
- ☆35 · Updated 3 weeks ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆80 · Updated last year
- Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, imp… ☆186 · Updated 11 months ago
- code for training & evaluating Contextual Document Embedding models ☆195 · Updated 2 months ago
- ☆77 · Updated 2 months ago
- PyLate efficient inference engine ☆61 · Updated 2 weeks ago
- Crispy reranking models by Mixedbread ☆33 · Updated 2 weeks ago
- Simple GRPO scripts and configurations. ☆59 · Updated 5 months ago
- Hugging Face Inference Toolkit used to serve transformers, sentence-transformers, and diffusers models. ☆83 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 5 months ago
- ☆9 · Updated 9 months ago
- ☆56 · Updated 2 months ago
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated last week
- Storing long contexts in tiny caches with self-study ☆117 · Updated this week
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆33 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆94 · Updated 2 weeks ago
- ☆128 · Updated 3 months ago
- Code for NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- PyTorch implementation for MRL ☆19 · Updated last year
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆158 · Updated last year
- ☆31 · Updated 8 months ago
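For context on what fastkmeans itself does, here is a minimal sketch of Lloyd's k-means algorithm in plain NumPy. The `kmeans` helper is hypothetical and written for illustration only; it is not fastkmeans's actual API, which is optimized well beyond this naive loop.

```python
import numpy as np

def kmeans(X, k, n_iters=50, seed=0):
    """Minimal Lloyd's k-means (illustrative, not the fastkmeans API).

    Returns (centroids, labels) for float array X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    # Initialise centroids as k distinct points sampled from X.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points.
        for j in range(k):
            mask = labels == j
            if mask.any():
                centroids[j] = X[mask].mean(axis=0)
    return centroids, labels
```

The assignment step materialises an (n, k, d) distance tensor, which is exactly the memory bottleneck that chunked or GPU implementations like fastkmeans are designed to avoid on large datasets.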