kyegomez / Infini-attention
PyTorch implementation of the Google paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention"
☆53 · Updated this week
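For context on what the paper proposes, below is a minimal single-head sketch of Infini-attention's compressive memory: linear-attention retrieval from a running memory matrix, standard causal attention within the current segment, and a learned gate blending the two. This is a simplified illustration of the mechanism (the function name, gate handling, and omission of the paper's delta-rule memory update are assumptions), not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # Non-negative feature map sigma(x) = ELU(x) + 1, as used in linear attention
    return F.elu(x) + 1.0

def infini_attention_segment(q, k, v, memory, z, beta):
    """One segment of Infini-attention (simplified, single head).

    q, k, v:  (seq, dim) projections for the current segment
    memory:   (dim, dim) compressive memory accumulated over earlier segments
    z:        (dim,) normalization accumulator (sum of sigma(K) rows so far)
    beta:     scalar gate mixing memory retrieval vs. local attention
    """
    # Retrieve long-range context from the compressive memory
    sq = elu_plus_one(q)
    a_mem = (sq @ memory) / (sq @ z).clamp(min=1e-6).unsqueeze(-1)

    # Standard causal dot-product attention within the segment
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    causal = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
    a_local = torch.softmax(scores.masked_fill(causal, float("-inf")), dim=-1) @ v

    # Learned gate blends long-term (memory) and local context
    g = torch.sigmoid(beta)
    out = g * a_mem + (1.0 - g) * a_local

    # Fold this segment's keys/values into the memory for the next segment
    sk = elu_plus_one(k)
    new_memory = memory + sk.T @ v
    new_z = z + sk.sum(dim=0)
    return out, new_memory, new_z
```

Because the memory is a fixed-size (dim, dim) matrix regardless of how many segments have been processed, attention over arbitrarily long context runs in bounded memory; segments are streamed through in order, threading `memory` and `z` from one call to the next.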
Alternatives and similar repositories for Infini-attention:
Users interested in Infini-attention are comparing it to the repositories listed below
- Codebase for Instruction Following without Instruction Tuning ☆33 · Updated 4 months ago
- ☆31 · Updated 7 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆43 · Updated last week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆42 · Updated 2 months ago
- ☆74 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆24 · Updated 4 months ago
- ☆49 · Updated 7 months ago
- Repository for the paper: 500xCompressor: Generalized Prompt Compression for Large Language Models ☆24 · Updated 6 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆76 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 8 months ago
- Long Context Extension and Generalization in LLMs ☆48 · Updated 4 months ago
- ☆47 · Updated 5 months ago
- ☆43 · Updated 3 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆42 · Updated 6 months ago
- Train, tune, and run inference with the Bamba model ☆83 · Updated 3 weeks ago
- ☆55 · Updated 3 months ago
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆33 · Updated last month
- ☆12 · Updated last month
- DPO, but faster 🚀 ☆33 · Updated 2 months ago
- Cascade Speculative Drafting ☆28 · Updated 10 months ago
- ☆53 · Updated 4 months ago
- PyTorch building blocks for the OLMo ecosystem ☆51 · Updated this week
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆46 · Updated last year
- ☆60 · Updated last week
- Contextual Position Encoding, but with some custom CUDA kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated 8 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆38 · Updated last year