shaoyiHusky / SparseProgressiveDistillation
☆12 · Updated last year
Alternatives and similar repositories for SparseProgressiveDistillation:
Users interested in SparseProgressiveDistillation are comparing it to the repositories listed below.
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆43 · Updated 2 years ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆88 · Updated last year
- Code for the ACL 2022 paper Transkimmer: Transformer Learns to Layer-wise Skim ☆21 · Updated 2 years ago
- An all-in-one repository of awesome LLM pruning papers, integrating useful resources and insights. ☆71 · Updated 2 months ago
- Codebase for the ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆24 · Updated 7 months ago
- ☆15 · Updated 2 years ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆32 · Updated last month
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆66 · Updated last month
- ☆17 · Updated last year
- Official Code for "SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression" ☆116 · Updated 3 weeks ago
- ☆49 · Updated last year
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆43 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆100 · Updated 2 years ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆56 · Updated 4 months ago
- Official PyTorch implementation of IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact ☆40 · Updated 8 months ago
- Efficient LLM Inference Acceleration using Prompting ☆46 · Updated 4 months ago
- ICLR 2021 ☆46 · Updated 3 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆27 · Updated last week
- ☆122 · Updated 7 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆42 · Updated 3 months ago
- The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework". ☆59 · Updated 3 months ago
- (SparseBERT) Rethinking Network Pruning -- under the Pre-train and Fine-tune Paradigm (NAACL'21) ☆8 · Updated 3 years ago
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Updated 5 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆48 · Updated 2 years ago
- ☆36 · Updated 5 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆51 · Updated 2 weeks ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆41 · Updated 3 months ago
- ☆18 · Updated last week
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023). ☆25 · Updated 5 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆58 · Updated 11 months ago