jlamprou / Infini-Attention
Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M context passkey retrieval
☆64 · Updated 6 months ago
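To orient readers before the related-project list, here is a minimal, illustrative PyTorch sketch of the compressive-memory mechanism that Infini-attention describes in the paper "Leave No Context Behind" (Munkhdalai et al., 2024). It is not code from this repository: the function name, argument shapes, and the use of `scaled_dot_product_attention` for the local attention path are assumptions for illustration, and the paper's delta-rule memory-update variant is omitted.

```python
import torch
import torch.nn.functional as F

def infini_attention_segment(q, k, v, memory, norm, beta):
    """Process one segment with Infini-attention for a single head (illustrative sketch).

    q, k   : (seq_len, d_key) query/key projections for the current segment
    v      : (seq_len, d_value) value projections for the current segment
    memory : (d_key, d_value) compressive memory carried over from past segments
    norm   : (d_key,) normalization term z carried over from past segments
    beta   : learnable scalar gate mixing memory readout and local attention
    """
    sigma_q = F.elu(q) + 1.0  # sigma(Q): non-negative feature map from the paper
    sigma_k = F.elu(k) + 1.0  # sigma(K)

    # Memory retrieval: A_mem = sigma(Q) M / (sigma(Q) z)
    a_mem = (sigma_q @ memory) / (sigma_q @ norm).clamp(min=1e-6).unsqueeze(-1)

    # Local causal dot-product attention within the segment
    a_dot = F.scaled_dot_product_attention(
        q.unsqueeze(0), k.unsqueeze(0), v.unsqueeze(0), is_causal=True
    ).squeeze(0)

    # Gated combination of long-term (memory) readout and local attention
    gate = torch.sigmoid(beta)
    out = gate * a_mem + (1.0 - gate) * a_dot

    # Linear memory update: M <- M + sigma(K)^T V ;  z <- z + sum_t sigma(K)_t
    new_memory = memory + sigma_k.transpose(0, 1) @ v
    new_norm = norm + sigma_k.sum(dim=0)
    return out, new_memory, new_norm
```

Running this function segment by segment, carrying `memory` and `norm` forward, is what lets attention cost stay bounded while context grows without limit.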
Related projects
Alternatives and complementary repositories for Infini-Attention
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆51 · Updated this week
- ☆121 · Updated 9 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆129 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (Official Code) ☆133 · Updated last month
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆133 · Updated last month
- Pytorch implementation for "Compressed Context Memory For Online Language Model Interaction" (ICLR'24) ☆49 · Updated 6 months ago
- A repository for research on medium sized language models. ☆74 · Updated 5 months ago
- ☆62 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆91 · Updated last month
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆110 · Updated 4 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆137 · Updated this week
- ☆95 · Updated last month
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆134 · Updated 4 months ago
- ☆197 · Updated 4 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆56 · Updated 3 weeks ago
- This is the official repository for Inheritune. ☆105 · Updated last month
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆111 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆175 · Updated 3 months ago
- An unofficial pytorch implementation of 'Efficient Infinite Context Transformers with Infini-attention' ☆39 · Updated 2 months ago
- ☆182 · Updated 3 weeks ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆120 · Updated 2 weeks ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆64 · Updated 5 months ago
- ☆62 · Updated 4 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆116 · Updated 4 months ago
- FuseAI Project ☆76 · Updated 2 months ago
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆96 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Length (ICLR 2024) ☆200 · Updated 5 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆124 · Updated 3 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆67 · Updated last year
- Prune transformer layers ☆63 · Updated 5 months ago