kyegomez / Infini-attention
Implementation of the Google paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" in PyTorch
☆55 · Updated last week
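For orientation, below is a minimal single-head sketch of the Infini-attention mechanism the paper describes: local causal attention over each segment, a compressive memory carried between segments, and a learned gate blending the two. This is an illustration written for this listing, not code from the kyegomez repository; the class and variable names are made up.

```python
# Illustrative single-head Infini-attention sketch (not this repository's code).
import torch
import torch.nn.functional as F
from torch import nn

class InfiniAttentionSketch(nn.Module):
    """Local causal attention + compressive memory, blended by a learned gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gate = nn.Parameter(torch.zeros(1))  # scalar gate (beta in the paper)

    def forward(self, segment, memory=None, norm=None):
        # segment: (batch, seg_len, dim); memory: (batch, dim, dim); norm: (batch, dim, 1)
        q, k, v = self.to_q(segment), self.to_k(segment), self.to_v(segment)
        sq, sk = F.elu(q) + 1, F.elu(k) + 1  # sigma(.) kernel feature map from the paper

        b, d = segment.size(0), q.size(-1)
        if memory is None:
            memory = q.new_zeros(b, d, d)
            norm = q.new_zeros(b, d, 1)

        # Retrieve from compressive memory: A_mem = sigma(Q) M / (sigma(Q) z)
        mem_out = (sq @ memory) / (sq @ norm + 1e-6)

        # Standard causal softmax attention over the local segment.
        scores = (q @ k.transpose(1, 2)) / d**0.5
        causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        local_out = scores.masked_fill(causal, float("-inf")).softmax(-1) @ v

        # Blend memory retrieval and local attention with the learned gate.
        g = torch.sigmoid(self.gate)
        out = g * mem_out + (1 - g) * local_out

        # Linear-attention-style memory update carried to the next segment.
        memory = memory + sk.transpose(1, 2) @ v
        norm = norm + sk.sum(dim=1).unsqueeze(-1)
        return out, memory, norm
```

Segments of a long sequence are fed in order, threading `memory` and `norm` from one call to the next, so memory cost stays constant regardless of how many segments have been processed.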
Alternatives and similar repositories for Infini-attention
Users interested in Infini-attention are comparing it to the libraries listed below.
- A repository for research on medium-sized language models. ☆77 · Updated last year
- DPO, but faster ☆43 · Updated 7 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆41 · Updated 8 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 9 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 9 months ago
- ☆47 · Updated last month
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆55 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆83 · Updated last year
- ☆35 · Updated last year
- ☆51 · Updated last year
- ☆48 · Updated 10 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆47 · Updated 2 months ago
- Official implementation for "Extending LLMs' Context Window with 100 Samples" ☆79 · Updated last year
- Using FlexAttention to compute attention with different masking patterns (see the sketch after this list) ☆44 · Updated 9 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- ☆82 · Updated 10 months ago
- This is the official repository for Inheritune. ☆111 · Updated 5 months ago
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 9 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆158 · Updated 3 months ago
- ☆64 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Code for the ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆40 · Updated last week
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆33 · Updated 3 months ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- [NeurIPS 2024] A low-rank, memory-efficient optimizer without SVD ☆30 · Updated last week
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 5 months ago
- This repository contains code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
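Regarding the FlexAttention entry above: here is a minimal sketch of expressing a custom masking pattern (a sliding-window causal mask, as one example) with PyTorch's `flex_attention` API, available in torch >= 2.5. The predicate, sizes, and window width are illustrative choices, not code from the linked repository, and the example assumes a CUDA device.

```python
# Minimal FlexAttention masking sketch (illustrative; assumes torch >= 2.5 and CUDA).
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 4, 256, 64  # batch, heads, sequence length, head dim (arbitrary)
q, k, v = (torch.randn(B, H, S, D, device="cuda") for _ in range(3))

WINDOW = 64  # each query may attend to itself and at most the previous 64 positions

def sliding_window_causal(b, h, q_idx, kv_idx):
    # Return True where attention is allowed: causal and within the window.
    return (q_idx >= kv_idx) & (q_idx - kv_idx <= WINDOW)

# Build a reusable block-sparse mask from the predicate, then run attention with it.
block_mask = create_block_mask(sliding_window_causal, B=None, H=None, Q_LEN=S, KV_LEN=S)
out = flex_attention(q, k, v, block_mask=block_mask)  # (B, H, S, D)
```

Because the pattern is given as a predicate rather than a dense mask tensor, the same code accommodates other patterns (causal, prefix-LM, document masking) by swapping the function.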