0x7o / RETRO-transformer
Easy-to-use Retrieval-Enhanced Transformer implementation
☆9 · Updated 2 years ago
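For context, RETRO (Retrieval-Enhanced Transformer) augments a decoder by retrieving nearest-neighbor chunks from a text database and attending to their encodings through chunked cross-attention. The snippet below is a minimal, hypothetical sketch of that cross-attention step, not code from this repository; the class name, tensor shapes, and the random stand-ins for retrieved neighbor encodings are illustrative assumptions.

```python
# Hypothetical sketch of RETRO-style chunked cross-attention (not this repo's API):
# the decoder states for one input chunk attend to the encodings of its
# retrieved nearest-neighbor chunks, with a residual connection on top.
import torch
import torch.nn as nn

class ChunkedCrossAttention(nn.Module):  # name is an assumption for illustration
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, hidden: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # hidden:    (batch, chunk_len, dim) decoder states for one chunk
        # neighbors: (batch, k * neighbor_len, dim) retrieved chunk encodings
        out, _ = self.attn(query=hidden, key=neighbors, value=neighbors)
        return hidden + out  # residual connection

# Toy usage: random tensors stand in for real decoder states and retrievals.
cca = ChunkedCrossAttention(dim=64)
hidden = torch.randn(2, 16, 64)      # 2 sequences, one 16-token chunk each
neighbors = torch.randn(2, 64, 64)   # 2 neighbors of 32 tokens each, concatenated
print(cca(hidden, neighbors).shape)  # torch.Size([2, 16, 64])
```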
Alternatives and similar repositories for RETRO-transformer:
Users interested in RETRO-transformer are comparing it to the libraries listed below.
- ☆38 · Updated 10 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- Minimal PyTorch implementation of BM25 (with sparse tensors) ☆97 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆121 · Updated last month
- Layer-Condensed KV cache with 10 times larger batch size, fewer parameters, and less computation; dramatic speed-up with better task performance ☆147 · Updated last month
- Official repo for 'On the Generalization Ability of Retrieval-Enhanced Transformers' ☆38 · Updated 9 months ago
- Spherical merging of PyTorch/HF-format language models with minimal feature loss ☆117 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆46 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- A repository for transformer critique learning and generation ☆88 · Updated last year
- A repository to perform self-instruct with a model on the HF Hub ☆32 · Updated last year
- The official code for the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences" ☆69 · Updated last year
- Retrieval as Attention ☆83 · Updated 2 years ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆68 · Updated 11 months ago
- ☆31 · Updated 8 months ago
- ☆50 · Updated 4 months ago
- ☆73 · Updated 10 months ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆113 · Updated 5 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆43 · Updated 8 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆53 · Updated last month
- A framework for few-shot evaluation of autoregressive language models ☆103 · Updated last year
- The simplest implementation of recent sparse attention patterns for efficient LLM inference ☆58 · Updated last month
- ☆126 · Updated last month
- A repository for research on medium-sized language models ☆77 · Updated 9 months ago
- The code for the paper "Same Task, More Tokens: The Impact of Input Length on the Reasoning Performance of Large Language Models" ☆55 · Updated 8 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Flax ☆66 · Updated 6 months ago
- An unofficial PyTorch implementation of "Efficient Infinite Context Transformers with Infini-attention" ☆50 · Updated 6 months ago