A family of compressed models obtained via pruning and knowledge distillation
☆374 · Nov 6, 2025 · Updated 4 months ago
Alternatives and similar repositories for Minitron
Users interested in Minitron are comparing it to the libraries listed below.
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆187 · Jan 1, 2025 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,111 · Oct 7, 2024 · Updated last year
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆90 · Sep 13, 2024 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 (the Minitron paper) ☆53 · Sep 7, 2024 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆455 · Jan 16, 2025 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆643 · Mar 4, 2024 · Updated 2 years ago
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". ☆877 · Aug 20, 2024 · Updated last year
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆67 · Mar 27, 2025 · Updated 11 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,891 · Updated this week
- A simple and effective LLM pruning approach (see the activation-aware pruning sketch after this list). ☆856 · Aug 9, 2024 · Updated last year
- An Open Source Toolkit For LLM Distillation (the standard soft-target distillation loss is sketched after this list) ☆894 · Mar 14, 2026 · Updated last week
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆238 · Oct 14, 2025 · Updated 5 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Nov 25, 2024 · Updated last year
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. ☆492 · Nov 26, 2024 · Updated last year
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆676 · Apr 25, 2025 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity (the 2:4 mask pattern is sketched after this list) ☆94 · Sep 4, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (a minimal KV-cache quantization sketch follows this list) ☆408 · Aug 13, 2024 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆990 · Sep 23, 2025 · Updated 6 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆64 · Aug 2, 2024 · Updated last year
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆2,218 · Updated this week
- Scalable toolkit for efficient model alignment ☆850 · Oct 6, 2025 · Updated 5 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25); a basic speculative-decoding verification loop is sketched after this list. ☆2,229 · Feb 20, 2026 · Updated last month
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆1,268 · Mar 9, 2025 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆204 · Jul 17, 2024 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆362 · Feb 5, 2026 · Updated last month
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆78 · Apr 29, 2024 · Updated last year
- Tools for merging pretrained large language models. ☆6,895 · Mar 15, 2026 · Updated last week
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,310 · Updated this week
- A framework for few-shot evaluation of language models. ☆11,802 · Updated this week
- Efficient Triton Kernels for LLM Training ☆6,216 · Mar 18, 2026 · Updated last week
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆330 · Nov 26, 2025 · Updated 3 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,322 · Mar 6, 2025 · Updated last year
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆31 · Aug 15, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (its core smoothing step is sketched after this list) ☆1,625 · Jul 12, 2024 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆282 · Nov 24, 2025 · Updated 4 months ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆21 · Dec 20, 2024 · Updated last year
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆125 · Mar 6, 2026 · Updated 2 weeks ago
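Technique sketches
The short sketches below illustrate a few of the techniques named in the list, in plain PyTorch-style Python. Every function name, signature, and helper is an illustrative assumption, not any listed repo's actual API.

The "simple and effective LLM pruning approach" entry appears to be Wanda-style pruning: score each weight by its magnitude times the L2 norm of its input activation (collected on a small calibration set), then drop the lowest-scoring weights within each output row. A minimal sketch:

```python
import torch

def wanda_scores(weight: torch.Tensor, act_norm: torch.Tensor) -> torch.Tensor:
    # Score W[i, j] by |W[i, j]| * ||X_j||, where act_norm[j] is the L2 norm
    # of input feature j over a calibration set. weight is (out, in).
    return weight.abs() * act_norm.unsqueeze(0)

def prune_rowwise(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    # Zero the lowest-scoring weights independently within each output row.
    scores = wanda_scores(weight, act_norm)
    k = int(weight.shape[1] * sparsity)                    # weights dropped per row
    drop = torch.topk(scores, k, dim=1, largest=False).indices
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, drop, False)                          # False marks pruned weights
    return weight * mask

# Example: prune a 4x8 layer to 50% sparsity with activation norms
# taken from a fake calibration batch of 16 samples.
w = torch.randn(4, 8)
calib = torch.randn(16, 8)
w_pruned = prune_rowwise(w, calib.norm(dim=0))
```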
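For the distillation toolkit entry (and for Minitron itself, which pairs pruning with distillation), the usual starting point is the temperature-scaled soft-target loss: the KL divergence between the teacher's and student's softened output distributions. A minimal sketch; the T² factor is the standard correction that keeps gradient magnitudes comparable across temperatures:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    # Forward KL between temperature-softened teacher and student distributions.
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```

In practice this term is mixed with the ordinary cross-entropy on ground-truth labels via a weighting coefficient.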
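The 2:4 entries (MaskLLM and the 4-bit sparse kernels) build on the semi-structured pattern that NVIDIA sparse tensor cores accelerate: exactly two of every four consecutive weights are zero. MaskLLM learns which two survive; the sketch below shows only the simpler magnitude-based baseline that yields a valid 2:4 mask:

```python
import torch

def to_2_4_sparse(weight: torch.Tensor) -> torch.Tensor:
    # Keep the 2 largest-magnitude weights in every contiguous group of 4
    # along the input dimension. Assumes in_features is divisible by 4.
    out_f, in_f = weight.shape
    groups = weight.reshape(out_f, in_f // 4, 4)
    keep = groups.abs().topk(2, dim=-1).indices            # 2 survivors per group
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, keep, True)
    return (groups * mask).reshape(out_f, in_f)
```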
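For the KVQuant entry, the core idea is storing the attention KV cache in low-bit integers with calibrated scales. The sketch below is plain symmetric round-to-nearest quantization with a per-channel scale; KVQuant itself adds non-uniform codebooks, per-channel key vs. per-token value treatment, and outlier handling, none of which are shown:

```python
import torch

def quantize_kv(x: torch.Tensor, bits: int = 4, dim: int = -1):
    # Symmetric quantization of a K or V tensor: integer codes plus the
    # scale needed to dequantize. Codes fit in int8 for any bits <= 8.
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().amax(dim=dim, keepdim=True).clamp(min=1e-8) / qmax
    codes = torch.clamp(torch.round(x / scale), -qmax - 1, qmax).to(torch.int8)
    return codes, scale

def dequantize_kv(codes: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return codes.to(scale.dtype) * scale
```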
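The EAGLE and Lookahead Decoding entries both attack the sequential dependency of autoregressive decoding: draft several tokens cheaply, then verify them all with one target-model forward pass. The greedy-verification sketch below assumes a Hugging Face-style model whose output exposes .logits; EAGLE additionally uses a learned draft head, tree-structured drafts, and probabilistic acceptance, not shown here:

```python
import torch

@torch.no_grad()
def verify_draft(target_model, prompt_ids: torch.Tensor, draft_ids: torch.Tensor) -> torch.Tensor:
    # Run the target once over prompt + draft, keep draft tokens until the
    # first disagreement, then substitute the target's own prediction.
    seq = torch.cat([prompt_ids, draft_ids]).unsqueeze(0)
    logits = target_model(seq).logits[0]                   # (seq_len, vocab)
    # logits[i] predicts token i + 1, so predictions for the draft positions
    # (plus one bonus token) start at index len(prompt) - 1.
    preds = logits[prompt_ids.numel() - 1:].argmax(dim=-1)
    accepted = []
    for i, tok in enumerate(draft_ids.tolist()):
        if preds[i].item() != tok:
            accepted.append(preds[i].item())               # target's correction
            break
        accepted.append(tok)
    else:
        accepted.append(preds[draft_ids.numel()].item())   # all accepted: bonus token
    return torch.tensor(accepted)
```

One target forward pass verifies the whole draft, which is where the speedup comes from.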
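Finally, the SmoothQuant entry: activation outliers make activations much harder to quantize than weights, so the method migrates difficulty to the weight side with a per-input-channel scale s_j = max|X_j|^α / max|W_j|^(1−α), dividing activations by s and multiplying weights by s so the layer's output is mathematically unchanged. A sketch with hypothetical helper names:

```python
import torch

def smooth_scales(act_absmax: torch.Tensor, weight: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # Per-input-channel smoothing factor s_j = max|X_j|^a / max|W_j|^(1 - a).
    # act_absmax[j] is the calibration max |activation| of channel j;
    # weight is (out, in) as in torch.nn.Linear.
    w_absmax = weight.abs().amax(dim=0)
    return (act_absmax.clamp(min=1e-5) ** alpha) / (w_absmax.clamp(min=1e-5) ** (1 - alpha))

def fold_into_weight(weight: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    # (X / s) @ (W * s).T == X @ W.T, so dividing the activations (usually by
    # folding 1/s into the preceding LayerNorm) leaves the output unchanged
    # while flattening activation outliers.
    return weight * s.unsqueeze(0)
```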