TobiasNorlund / retro
Official repo for "On the Generalization Ability of Retrieval-Enhanced Transformers"
☆ 38 · Updated 9 months ago
Alternatives and similar repositories for retro:
Users interested in retro are comparing it to the libraries listed below.
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆ 58 · Updated last month
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆ 89 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆ 40 · Updated last year
- Some common Huggingface transformers in maximal update parametrization (µP) ☆ 80 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆ 59 · Updated 5 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆ 130 · Updated 10 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆ 64 · Updated 3 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆ 114 · Updated 8 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆ 121 · Updated 7 months ago
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches. ☆ 72 · Updated last year
- Easy-to-use Retrieval-Enhanced Transformer implementation ☆ 9 · Updated 2 years ago
- Retrieval as Attention ☆ 83 · Updated 2 years ago
- Code repository for the c-BTM paper ☆ 105 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆ 68 · Updated 11 months ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆ 56 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆ 67 · Updated 6 months ago
- Layer-Condensed KV cache with a 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆ 147 · Updated last month
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆ 92 · Updated last year
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆ 56 · Updated 5 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆ 68 · Updated 9 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆ 93 · Updated 2 years ago