A family of compressed models obtained via pruning and knowledge distillation
☆376 · Updated Nov 6, 2025
Alternatives and similar repositories for Minitron
Users interested in Minitron are comparing it to the libraries listed below.
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models (☆186, updated Jan 1, 2025)
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… (☆1,115, updated Oct 7, 2024)
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] (☆91, updated Sep 13, 2024)
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 (☆53, updated Sep 7, 2024)
- For releasing code related to compression methods for transformers, accompanying our publications (☆459, updated Jan 16, 2025)
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆643, updated Mar 4, 2024)
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" (☆877, updated Aug 20, 2024)
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) (☆68, updated Mar 27, 2025)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆2,996, updated this week)
- A simple and effective LLM pruning approach (☆860, updated Aug 9, 2024)
- An Open Source Toolkit For LLM Distillation (☆925, updated Mar 14, 2026)
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models (☆240, updated Oct 14, 2025)
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… (☆157, updated Apr 7, 2025)
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs (☆98, updated Nov 25, 2024)
- Code for the NeurIPS 2024 paper "QuaRot": end-to-end 4-bit inference of large language models (☆501, updated Nov 26, 2024)
- VPTQ, a flexible and extreme low-bit quantization algorithm (☆677, updated Apr 25, 2025)
- Boosting 4-bit inference kernels with 2:4 Sparsity (☆94, updated Sep 4, 2024)
- Minimalistic large language model 3D-parallelism training (☆2,644, updated Apr 7, 2026)
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (☆418, updated Aug 13, 2024)
- OLMoE: Open Mixture-of-Experts Language Models (☆1,007, updated Sep 23, 2025)
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning (☆63, updated Aug 2, 2024)
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… (☆2,436, updated this week)
- Scalable toolkit for efficient model alignment (☆852, updated Oct 6, 2025)
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) (☆2,273, updated Feb 20, 2026)
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit…
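
For readers new to the prune-then-distill recipe that Minitron and several of the pruning and distillation projects above build on, here is a minimal illustrative sketch. It assumes PyTorch; the helpers `magnitude_prune` and `distillation_loss` are hypothetical names for this example and do not come from any repository listed here.

```python
# Illustrative sketch only: generic magnitude pruning followed by a standard
# knowledge-distillation loss. Not the Minitron recipe or any listed repo's API.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune


def magnitude_prune(model: torch.nn.Module, amount: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in every Linear layer."""
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the pruning mask into the weights


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
```

The listed projects replace each piece with more sophisticated components (structured or 2:4 sparsity, learned masks, one-shot saliency criteria instead of plain magnitude pruning), but pruning a large model and then recovering quality by distilling from the original teacher is the shared skeleton.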