stephenqz / OATS
GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition
⭐17 · Updated 9 months ago
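The repository title refers to decomposing weight matrices into a sparse plus low-rank approximation. As a rough orientation only, the snippet below is a minimal NumPy sketch of that general idea (truncated SVD for the low-rank term, magnitude thresholding for the sparse residual); it is not the OATS algorithm from the paper, and the function name and parameters are illustrative.

```python
# Minimal, generic sketch of a sparse + low-rank decomposition (NOT the OATS algorithm):
# approximate a weight matrix W as L + S, where L is low-rank (truncated SVD)
# and S keeps only the largest-magnitude entries of the residual.
import numpy as np

def sparse_plus_low_rank(W: np.ndarray, rank: int = 8, keep_ratio: float = 0.1):
    """Return (L, S) with L low-rank and S sparse such that W ~ L + S."""
    # Low-rank part via truncated SVD.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

    # Sparse part: keep only the largest-magnitude entries of the residual.
    R = W - L
    k = int(keep_ratio * R.size)
    threshold = np.sort(np.abs(R), axis=None)[-k] if k > 0 else np.inf
    S = np.where(np.abs(R) >= threshold, R, 0.0)
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))
    L, S = sparse_plus_low_rank(W, rank=8, keep_ratio=0.1)
    print("relative error:", np.linalg.norm(W - (L + S)) / np.linalg.norm(W))
```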
Alternatives and similar repositories for OATS
Users interested in OATS are comparing it to the repositories listed below
- Official PyTorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ⭐80 · Updated 7 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ⭐180 · Updated last year
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ⭐32 · Updated 2 months ago
- ⭐40 · Updated 2 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ⭐46 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ⭐23 · Updated 2 months ago
- Official PyTorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ⭐50 · Updated last year
- ⭐56 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ⭐67 · Updated 10 months ago
- LLM Inference with Microscaling Format ⭐34 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ⭐88 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ⭐68 · Updated last year
- This repo contains the code for studying the interplay between quantization and sparsity methods ⭐26 · Updated 11 months ago
- Code Repository of Evaluating Quantized Large Language Models ⭐136 · Updated last year
- ⭐25 · Updated last year
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ⭐69 · Updated 2 years ago
- [ICCAD 2025] Squant ⭐15 · Updated 7 months ago
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ⭐49 · Updated last year
- [NeurIPS 2023] Token-Scaled Logit Distillation for Ternary Weight Generative Language Models ⭐18 · Updated 2 years ago
- Awesome list for LLM pruning. ⭐282 · Updated 3 months ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models". ⭐23 · Updated 10 months ago
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights. ⭐147 · Updated 6 months ago
- Official implementation of the ICML'24 paper "LQER: Low-Rank Quantization Error Reconstruction for LLMs" ⭐19 · Updated last year
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… ⭐88 · Updated 10 months ago
- ⭐23 · Updated last year
- ⭐63 · Updated 2 years ago
- The Official Implementation of Ada-KV [NeurIPS 2025] ⭐126 · Updated 2 months ago
- Official implementation for LaCo (EMNLP 2024 Findings) ⭐21 · Updated last year
- Official code implementation for the ICLR 2025 accepted paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ⭐50 · Updated 3 months ago
- ⭐74 · Updated last month