tim-lawson / skip-middle
Learning to Skip the Middle Layers of Transformers
☆15 · Updated 4 months ago
Alternatives and similar repositories for skip-middle
Users interested in skip-middle are comparing it to the repositories listed below.
- ☆34 · Updated 3 months ago
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆17 · Updated 9 months ago
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆21 · Updated 2 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- ☆14 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 3 months ago
- Unofficial Implementation of Selective Attention Transformer ☆18 · Updated last year
- ☆26 · Updated 2 weeks ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆30 · Updated 2 months ago
- ☆36 · Updated 9 months ago
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆31 · Updated 7 months ago
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆28 · Updated 3 weeks ago
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆27 · Updated 4 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 7 months ago
- LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs ☆46 · Updated last week
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆25 · Updated 2 weeks ago
- ☆21 · Updated last month
- ☆19 · Updated 8 months ago
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆20 · Updated last month
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆16 · Updated last month
- The official repo of continuous speculative decoding ☆31 · Updated 8 months ago
- Algorithms for approximate attention in LLMs ☆21 · Updated 8 months ago
- ☆70 · Updated 5 months ago
- ☆19 · Updated 11 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆84 · Updated 5 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆28 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention ☆57 · Updated 10 months ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert … ☆13 · Updated 10 months ago