TobiasNorlund / retro
Official repository for the paper "On the Generalization Ability of Retrieval-Enhanced Transformers"
☆38 · Updated 9 months ago
Alternatives and similar repositories for retro:
Users interested in retro are comparing it to the libraries listed below.
- ☆139 · Updated last year
- ☆94 · Updated 9 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆58 · Updated last month
- ☆73 · Updated 10 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆130 · Updated 10 months ago
- ☆49 · Updated 4 months ago
- ☆38 · Updated 10 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆64 · Updated 3 months ago
- Repo for the ICML '23 paper "Why do Nearest Neighbor Language Models Work?" ☆56 · Updated 2 years ago
- ☆34 · Updated last year
- Repository for the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆114 · Updated 11 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆92 · Updated last year
- Sparse Backpropagation for Mixture-of-Experts Training ☆28 · Updated 8 months ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆121 · Updated 7 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- PyTorch building blocks for the OLMo ecosystem ☆71 · Updated this week
- Easy-to-use Retrieval-Enhanced Transformer implementation ☆9 · Updated 2 years ago
- [NeurIPS '23] Speculative Decoding with Big Little Decoder ☆89 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆84 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆183 · Updated last week
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆207 · Updated 6 months ago
- ☆125 · Updated last year
- ☆80 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. ☆63 · Updated 7 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- Layer-Condensed KV cache with 10× larger batch size, fewer params, and less computation. Dramatic speedup with better task performance… ☆147 · Updated last month