thunlp / SparsingLaw
The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity".
☆28 · Updated last year
Alternatives and similar repositories for SparsingLaw
Users interested in SparsingLaw are comparing it to the libraries listed below.
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆40 · Updated last month
- PyTorch implementation of StableMask (ICML 2024) ☆14 · Updated last year
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More ☆33 · Updated 6 months ago
- ☆69 · Updated 5 months ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆16 · Updated last month
- ☆61 · Updated 4 months ago
- [NeurIPS 2025] Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning" ☆25 · Updated last month
- ☆15 · Updated last year
- ☆19 · Updated 11 months ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆96 · Updated last year
- ☆17 · Updated 4 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- ☆55 · Updated 5 months ago
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Updated 2 years ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression." ☆17 · Updated 11 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆51 · Updated last month
- Kinetics: Rethinking Test-Time Scaling Laws ☆82 · Updated 4 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- ☆10 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆55 · Updated 9 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- Code and model for the NeurIPS 2024 spotlight paper "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training" ☆44 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxiang Li et al. ☆27 · Updated 4 months ago
- ☆85 · Updated 3 weeks ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆33 · Updated 6 months ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆104 · Updated 2 months ago
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆27 · Updated last week
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Updated last year