LOG-postech / rethinking-LLM-pruning
☆28 · Updated 6 months ago
Alternatives and similar repositories for rethinking-LLM-pruning
Users interested in rethinking-LLM-pruning are comparing it to the repositories listed below.
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆88 · Updated 11 months ago
- ☆20 · Updated 9 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆62 · Updated 11 months ago
- 🔨 Malet (Machine Learning Experiment Tool) is a tool for efficient machine learning experiment execution, logging, analysis, and plot ma… ☆17 · Updated 4 months ago
- ☆38 · Updated last year
- ☆59 · Updated last year
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights ☆117 · Updated 3 weeks ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆94 · Updated last year
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆21 · Updated last year
- Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆38 · Updated 7 months ago
- Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆73 · Updated last month
- ☆11 · Updated last month
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 6 months ago
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆22 · Updated 11 months ago
- ☆51 · Updated last year
- Official implementation for LaCo (EMNLP 2024 Findings) ☆17 · Updated 11 months ago
- ☆51 · Updated last year
- ☆41 · Updated 3 months ago
- [ICML 2025] Official Pytorch code for "SASSHA: Sharpness-aware Adaptive Second-order Optimization With Stable Hessian Approximation" ☆19 · Updated 3 weeks ago
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆82 · Updated 6 months ago
- This pytorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 2 years ago
- First Latency-Aware Competitive LLM Agent Benchmark ☆20 · Updated 3 months ago
- Official Pytorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆64 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding ☆285 · Updated 8 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆104 · Updated 2 years ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆176 · Updated last year
- Code and Dataset release of "Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models" (NAACL 2024) ☆10 · Updated 10 months ago
- [ICCAD 2025] Squant ☆15 · Updated 2 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆59 · Updated last year