kyegomez / Infini-attention
Implementation of the Google paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" in PyTorch
☆56 · Updated this week
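For context on what the repo implements: the paper's Infini-attention augments each attention head with a compressive memory that is read via a linear-attention retrieval, updated additively with each segment's keys and values, and blended with ordinary local softmax attention through a learned gate. Below is a minimal, hypothetical single-head PyTorch sketch of that mechanism; the class name, shapes, and the `elu_plus_one` helper are illustrative assumptions, not this repository's actual API.

```python
import math
import torch
import torch.nn.functional as F


def elu_plus_one(x):
    # sigma(x) = ELU(x) + 1, the non-negative feature map used for the linear memory
    return F.elu(x) + 1.0


class InfiniAttentionHead(torch.nn.Module):
    """Single-head sketch: compressive memory + local causal attention.

    Illustrative only; names and shapes are assumptions, not the repo's API.
    """

    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.beta = torch.nn.Parameter(torch.zeros(1))  # learned mixing gate

    def forward(self, q, k, v, memory=None, z=None):
        # q, k, v: (seg_len, dim) projections for the current segment
        if memory is None:
            memory = q.new_zeros(self.dim, self.dim)  # M: (d_k, d_v)
            z = q.new_zeros(self.dim)                 # running normalizer

        # 1) Retrieve from the compressive memory: A_mem = sigma(Q) M / (sigma(Q) z)
        sq = elu_plus_one(q)
        a_mem = (sq @ memory) / (sq @ z).clamp(min=1e-6).unsqueeze(-1)

        # 2) Ordinary causal softmax attention within the segment
        scores = (q @ k.transpose(0, 1)) / math.sqrt(self.dim)
        causal = torch.triu(
            torch.ones(q.size(0), k.size(0), dtype=torch.bool, device=q.device),
            diagonal=1,
        )
        a_dot = scores.masked_fill(causal, float("-inf")).softmax(dim=-1) @ v

        # 3) Fold this segment's keys/values into the memory
        #    (plain additive update; the paper's delta-rule variant is omitted here)
        sk = elu_plus_one(k)
        memory = memory + sk.transpose(0, 1) @ v
        z = z + sk.sum(dim=0)

        # 4) Gate long-term retrieval against local attention
        g = torch.sigmoid(self.beta)
        return g * a_mem + (1.0 - g) * a_dot, memory, z
```

The module would be called once per segment, threading `(memory, z)` from one call to the next, so context beyond the local window is carried forward at constant memory cost per segment.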
Alternatives and similar repositories for Infini-attention
Users interested in Infini-attention are comparing it to the repositories listed below.
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated 8 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆61 · Updated 9 months ago
- ☆51 · Updated last month
- A repository for research on medium-sized language models. ☆78 · Updated last year
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- ☆56 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 10 months ago
- ☆48 · Updated 11 months ago
- ☆37 · Updated last year
- ☆65 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago
- ☆51 · Updated last year
- Official implementation of "Extending LLMs’ Context Window with 100 Samples" ☆79 · Updated last year
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆83 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆41 · Updated last year
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆30 · Updated last week
- ☆82 · Updated 11 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 11 months ago
- The official repository for Inheritune. ☆112 · Updated 5 months ago
- Linear Attention Sequence Parallelism (LASP) ☆85 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 10 months ago
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆41 · Updated last month
- PyTorch implementation of "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆61 · Updated last year
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 3 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆160 · Updated 3 months ago
- Cascade Speculative Drafting ☆29 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated 11 months ago