[NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models
☆187 · Updated Jan 1, 2025
Alternatives and similar repositories for MaskLLM
Users interested in MaskLLM are comparing it to the libraries listed below.
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers (☆14, updated Feb 7, 2025)
- [ECCV 2024] Isomorphic Pruning for Vision Models (☆81, updated Jul 23, 2024)
- A family of compressed models obtained via pruning and knowledge distillation (☆368, updated Nov 6, 2025)
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow (☆160, updated Dec 1, 2025)
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient (☆65, updated Sep 27, 2025)
- (☆159, updated Feb 15, 2025)
- (☆32, updated Oct 4, 2025)
- DreamGaussian with 2D-GS (☆12, updated Oct 10, 2024)
- [arXiv 2025] In-Video Instructions: Visual Signals as Generative Control (☆46, updated Nov 25, 2025)
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs (☆180, updated Oct 3, 2024)
- Vico: Compositional Video Generation as Flow Equalization (☆59, updated Nov 15, 2024)
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… (☆1,106, updated Oct 7, 2024)
- A simple and effective LLM pruning approach (☆849, updated Aug 9, 2024)
- [ICLR 2026] SparseD: Sparse Attention for Diffusion Language Models (☆58, updated Feb 22, 2026)
- Work in progress (☆79, updated Nov 25, 2025)
- A family of efficient edge language models in 100M~1B sizes (☆19, updated Feb 14, 2025)
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) (☆67, updated Mar 27, 2025)
- Official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) (☆37, updated Feb 22, 2025)
- Video-Infinity generates long videos quickly using multiple GPUs without extra training (☆191, updated Aug 4, 2024)
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) (☆18, updated Dec 5, 2024)
- [Interspeech 2024] LiteFocus is a tool designed to accelerate diffusion-based TTA models, now implemented with the base model AudioLDM2 (☆34, updated Mar 11, 2025)
- Awesome list for LLM pruning (☆288, updated Oct 11, 2025)
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions (☆20, updated Jun 3, 2024)
- MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer (EMNLP 2025) (☆11, updated Apr 18, 2025)
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models (☆70, updated Jan 6, 2024)
- (☆40, updated Nov 22, 2025)
- Official implementation for LaCo (EMNLP 2024 Findings) (☆21, updated Oct 3, 2024)
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models (☆49, updated Nov 5, 2024)
- [ICLR 2025] Adaptive prompt-tailored pruning of T2I diffusion models (☆15, updated Feb 1, 2025)
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better (☆16, updated Feb 15, 2025)
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" (☆373, updated Feb 14, 2025)
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization (☆38, updated Sep 24, 2024)
- LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding (☆34, updated Jan 16, 2026)
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra-low-bit LLMs (☆134, updated May 16, 2024)
- For releasing code related to compression methods for transformers, accompanying our publications (☆454, updated Jan 16, 2025)
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self-Regularization Theory for Improved Layer-wise Pruning of Large Language Models (☆33, updated Jun 9, 2025)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆524, updated Feb 10, 2025)
- [ECCV 2024] Vista3D: Unravel the 3D Darkside of a Single Image (☆56, updated Sep 19, 2024)
- (☆52, updated Nov 5, 2024)