NVlabs / MaskLLM
[NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models
☆150 · Updated last month
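The repository's title refers to semi-structured (N:M) sparsity, where every group of M consecutive weights keeps at most N non-zero entries (e.g., the 2:4 pattern supported by recent NVIDIA GPUs). MaskLLM learns these masks end-to-end rather than choosing them with a fixed rule; the snippet below is only a rough, magnitude-based sketch of what a 2:4 mask looks like, not the repository's learnable method, and the helper name `magnitude_24_mask` is made up for illustration.

```python
# Illustrative only: a magnitude-based 2:4 (semi-structured) sparsity mask.
# MaskLLM instead *learns* the mask during training; this is not its method.
import torch

def magnitude_24_mask(weight: torch.Tensor) -> torch.Tensor:
    """Binary 2:4 mask for an (out_features, in_features) weight matrix:
    in every group of 4 consecutive input weights, keep the 2 largest magnitudes."""
    out_f, in_f = weight.shape
    assert in_f % 4 == 0, "in_features must be a multiple of 4 for a 2:4 pattern"
    groups = weight.abs().reshape(out_f, in_f // 4, 4)   # group along the input dim
    keep = groups.topk(k=2, dim=-1).indices              # indices of the 2 largest per group
    mask = torch.zeros_like(groups)
    mask.scatter_(-1, keep, 1.0)                          # 1 = keep, 0 = prune
    return mask.reshape(out_f, in_f)

if __name__ == "__main__":
    w = torch.randn(8, 16)
    sparse_w = w * magnitude_24_mask(w)                   # 50% sparsity, 2 non-zeros per 4
    print((sparse_w != 0).reshape(8, 4, 4).sum(-1))       # every group of 4 holds exactly 2
```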
Alternatives and similar repositories for MaskLLM:
Users interested in MaskLLM are comparing it to the libraries listed below
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆145 · Updated 8 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆82 · Updated 8 months ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆114 · Updated 2 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆72 · Updated 8 months ago
- An algorithm for static activation quantization of LLMs ☆116 · Updated 2 weeks ago
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆33 · Updated 2 weeks ago
- ☆111 · Updated this week
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights ☆71 · Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆56 · Updated 3 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆78 · Updated 2 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated 9 months ago
- ☆192 · Updated 2 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆426 · Updated last week
- PB-LLM: Partially Binarized Large Language Models ☆151 · Updated last year
- The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework" ☆59 · Updated 3 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated 2 months ago
- Awesome list for LLM quantization ☆170 · Updated last month
- ☆217 · Updated 8 months ago
- ☆47 · Updated 2 months ago
- Official PyTorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆57 · Updated 7 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆140 · Updated 5 months ago
- ☆125 · Updated last year
- Awesome list for LLM pruning ☆203 · Updated 2 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆272 · Updated last month
- EE-LLM is a framework for large-scale training and inference of early-exit (EE) large language models (LLMs) ☆53 · Updated 8 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆324 · Updated 3 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs ☆62 · Updated 9 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆140 · Updated 11 months ago
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆27 · Updated 6 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆51 · Updated last week