RAIVNLab / AdANNS
Code repository for the paper "AdANNS: A Framework for Adaptive Semantic Search"
☆ 60 · Updated last year
Related projects
Alternatives and complementary repositories for AdANNS
- Implementation of "Efficient Multi-vector Dense Retrieval with Bit Vectors" (ECIR 2024) ☆ 57 · Updated last month
- ReBase: Training Task Experts through Retrieval Based Distillation ☆ 27 · Updated 4 months ago
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval ☆ 37 · Updated 5 months ago
- The source code of "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆ 56 · Updated last month
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆ 90 · Updated 8 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆ 76 · Updated 2 years ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆ 104 · Updated last month
- NLP with Rust for Python 🦀🐍 ☆ 59 · Updated 5 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832) ☆ 77 · Updated 8 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆ 129 · Updated this week
- QLoRA for Masked Language Modeling ☆ 20 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆ 37 · Updated 5 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆ 95 · Updated 6 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆ 113 · Updated 5 months ago
- A repository for research on medium-sized language models ☆ 74 · Updated 5 months ago
- Code for Adaptive Data Optimization ☆ 18 · Updated last month
- A place to store reusable transformer components of my own creation or found on the interwebs ☆ 44 · Updated 2 weeks ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆ 87 · Updated 3 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆ 71 · Updated last month
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆ 36 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆ 38 · Updated 10 months ago
- Triton Implementation of the HyperAttention Algorithm ☆ 46 · Updated 11 months ago
- Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆ 79 · Updated this week
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆ 59 · Updated 3 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆ 13 · Updated 7 months ago