vmarinowski / infini-attention
An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention'
☆39 · Updated 2 months ago
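For orientation, below is a minimal single-head PyTorch sketch of the Infini-attention mechanism the repository implements: segment-wise causal attention combined with a compressive linear-attention memory and a learned gate. This is an illustrative sketch, not code from the repository; the names (`InfiniAttentionSketch`, `segment_len`, `beta`) are assumptions, and it relies on PyTorch ≥ 2.0 for `F.scaled_dot_product_attention`.

```python
import torch
import torch.nn.functional as F
from torch import nn


class InfiniAttentionSketch(nn.Module):
    """Single-head sketch: local causal attention + compressive memory, gated."""

    def __init__(self, dim: int, segment_len: int = 128):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)
        # Learned scalar gate between the memory read-out and local attention.
        self.beta = nn.Parameter(torch.zeros(1))
        self.segment_len = segment_len

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); the sequence is processed segment by segment.
        batch, seq_len, dim = x.shape
        memory = x.new_zeros(batch, dim, dim)  # compressive associative memory M
        norm = x.new_zeros(batch, dim)         # normalization term z
        outputs = []

        for start in range(0, seq_len, self.segment_len):
            seg = x[:, start:start + self.segment_len]
            q, k, v = self.q_proj(seg), self.k_proj(seg), self.v_proj(seg)

            # 1) Retrieve from memory with linear attention: sigma(Q) M / (sigma(Q) z),
            #    where sigma is ELU + 1.
            sigma_q = F.elu(q) + 1
            mem_out = torch.einsum('bsd,bde->bse', sigma_q, memory)
            denom = torch.einsum('bsd,bd->bs', sigma_q, norm).clamp(min=1e-6)
            mem_out = mem_out / denom.unsqueeze(-1)

            # 2) Ordinary causal dot-product attention within the segment.
            local_out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

            # 3) Combine the two read-outs with the learned gate.
            gate = torch.sigmoid(self.beta)
            outputs.append(gate * mem_out + (1 - gate) * local_out)

            # 4) Write the segment's key-value associations into the memory.
            sigma_k = F.elu(k) + 1
            memory = memory + torch.einsum('bsd,bse->bde', sigma_k, v)
            norm = norm + sigma_k.sum(dim=1)

        return self.out_proj(torch.cat(outputs, dim=1))


if __name__ == "__main__":
    layer = InfiniAttentionSketch(dim=64, segment_len=16)
    out = layer(torch.randn(2, 128, 64))
    print(out.shape)  # torch.Size([2, 128, 64])
```

Because the memory carries key-value associations across segments, the per-segment attention cost stays bounded while earlier context remains retrievable, which is the core idea behind the "infinite context" claim.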
Related projects
Alternatives and complementary repositories for infini-attention
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch… ☆51 · Updated this week
- ☆62 · Updated last month
- A repository for research on medium sized language models. ☆74 · Updated 5 months ago
- GoldFinch and other hybrid transformer components ☆39 · Updated 3 months ago
- This repository combines the CPO and SimPO methods for better reference-free preference learning. ☆35 · Updated 3 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆91 · Updated last month
- ☆61 · Updated 2 months ago
- This is the official repository for Inheritune. ☆105 · Updated last month
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆73 · Updated 9 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆18 · Updated last month
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆102 · Updated 6 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆137 · Updated this week
- Evaluating LLMs with Dynamic Data ☆68 · Updated this week
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆30 · Updated 2 months ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆79 · Updated 7 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆67 · Updated 3 weeks ago
- ☆44 · Updated 2 months ago
- Data preparation code for CrystalCoder 7B LLM ☆42 · Updated 6 months ago
- Linear Attention Sequence Parallelism (LASP) ☆64 · Updated 5 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated last month
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆41 · Updated 9 months ago
- ☆26 · Updated 4 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆22 · Updated 8 months ago
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆49 · Updated 6 months ago
- Demonstration that finetuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit ☆63 · Updated last year
- Unofficial Implementation of Evolutionary Model Merging ☆33 · Updated 7 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (Official Code) ☆133 · Updated last month
- Efficient Infinite Context Transformers with Infini-attention Pytorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆64 · Updated 6 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆115 · Updated 3 weeks ago