vmarinowski / infini-attention
An unofficial PyTorch implementation of 'Efficient Infinite Context Transformers with Infini-attention'
☆51 · Updated 7 months ago
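For orientation, the mechanism this repository implements combines a compressive memory read, ordinary causal attention within a segment, a learned gate between the two, and a linear memory update. Below is a minimal single-head sketch of that mechanism as described in the paper (linear-update rule, batch and multi-head dimensions omitted); the function and variable names are illustrative and are not taken from this repository's code.

```python
import torch
import torch.nn.functional as F


def elu_plus_one(x):
    # Non-negative feature map used for the linear-attention memory (ELU + 1).
    return F.elu(x) + 1.0


def infini_attention_segment(q, k, v, mem, z, beta):
    """Process one segment with Infini-attention (single head, no batch dim).

    q, k, v : [seg_len, d]  queries, keys, values for the current segment
    mem     : [d, d]        compressive memory accumulated over past segments
    z       : [d]           memory normalization term
    beta    : scalar tensor learned gate mixing memory and local attention
    """
    d = q.shape[-1]
    sq, sk = elu_plus_one(q), elu_plus_one(k)

    # 1) Read from the compressive memory (linear-attention retrieval).
    a_mem = (sq @ mem) / (sq @ z.unsqueeze(-1) + 1e-6)

    # 2) Ordinary causal dot-product attention within the segment.
    scores = (q @ k.t()) / d ** 0.5
    mask = torch.triu(torch.full_like(scores, float("-inf")), diagonal=1)
    a_local = F.softmax(scores + mask, dim=-1) @ v

    # 3) Blend the two streams with a learned sigmoid gate.
    g = torch.sigmoid(beta)
    out = g * a_mem + (1.0 - g) * a_local

    # 4) Update the memory and its normalizer for the next segment.
    new_mem = mem + sk.t() @ v
    new_z = z + sk.sum(dim=0)
    return out, new_mem, new_z


# Carry the memory across segments; it starts at zero.
d, seg_len = 64, 128
mem, z = torch.zeros(d, d), torch.zeros(d)
beta = torch.zeros(())  # the paper uses one learnable gate per head
for seg in torch.randn(4, seg_len, d):
    out, mem, z = infini_attention_segment(seg, seg, seg, mem, z, beta)
```

Because only the fixed-size memory and its normalizer are carried across segments, the attention span grows with the number of segments processed while the per-segment compute and KV cache stay bounded.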
Alternatives and similar repositories for infini-attention:
Users that are interested in infini-attention are comparing it to the libraries listed below
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆53 · Updated this week
- A repository for research on medium sized language models. ☆76 · Updated 10 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 8 months ago
- Efficient Infinite Context Transformers with Infini-attention Pytorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆80 · Updated 10 months ago
- This is the official repository for Inheritune. ☆109 · Updated last month
- ☆32 · Updated 9 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆75 · Updated last year
- ☆74 · Updated 7 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆85 · Updated last week
- ☆76 · Updated 2 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 6 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆150 · Updated 3 months ago
- Minimal implementation of the "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" paper (arXiv:2401.01335) ☆29 · Updated last year
- ☆47 · Updated 7 months ago
- ☆49 · Updated 5 months ago
- Train, tune, and infer Bamba model ☆86 · Updated 2 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆55 · Updated 6 months ago
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆55 · Updated 11 months ago
- The official code repo and data hub of top_nsigma sampling strategy for LLMs. ☆24 · Updated last month
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆36 · Updated last week
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆80 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 5 months ago
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆42 · Updated 4 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆148 · Updated 2 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆161 · Updated 2 months ago
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning https://arxiv.org/pdf/2410.01044 ☆32 · Updated 5 months ago
- Repo hosting codes and materials related to speeding LLMs' inference using token merging. ☆35 · Updated 10 months ago
- DPO, but faster 🚀 ☆40 · Updated 3 months ago