Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry.
☆42 · Updated Jan 15, 2024
Alternatives and similar repositories for SparseFinetuning
Users interested in SparseFinetuning are comparing it to the libraries listed below.
- Boosting 4-bit inference kernels with 2:4 Sparsity · ☆94 · Updated Sep 4, 2024
- ☆57 · Updated Jun 10, 2024
- GPU operators for sparse tensor operations · ☆35 · Updated Mar 11, 2024
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… · ☆14 · Updated Dec 7, 2024
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". · ☆281 · Updated Nov 3, 2023
- ☆12 · Updated Jul 30, 2025
- This repository contains code for the MicroAdam paper. · ☆21 · Updated Dec 14, 2024
- ☆16 · Updated Dec 9, 2023
- ☆353 · Updated Apr 2, 2024
- ☆30 · Updated Jul 22, 2024
- A basic pure-PyTorch implementation of flash attention · ☆16 · Updated Oct 28, 2024
- An official implementation of "Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards" · ☆37 · Updated Oct 3, 2025
- PB-LLM: Partially Binarized Large Language Models · ☆156 · Updated Nov 20, 2023
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) · ☆185 · Updated Apr 16, 2024
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration · ☆30 · Updated Nov 22, 2025
- ☆42 · Updated Mar 28, 2024
- Repository for CPU Kernel Generation for LLM Inference · ☆28 · Updated Jul 13, 2023
- [ICLR 2026] Learning to Parallel: Accelerating Diffusion Large Language Models via Learnable Parallel Decoding · ☆31 · Updated Jan 27, 2026
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". · ☆877 · Updated Aug 20, 2024
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆263 · Updated this week
- Reorder-based post-training quantization for large language models · ☆199 · Updated May 17, 2023
- ☆19 · Updated Nov 6, 2023
- Running inference on the ZeroSCROLLS benchmark · ☆20 · Updated Apr 18, 2024
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆408 · Updated Aug 13, 2024
- Hacks for PyTorch · ☆19 · Updated Apr 18, 2023
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive La… · ☆84 · Updated Jun 30, 2024
- Targeted Data Generation with Large Language Models · ☆19 · Updated Jun 25, 2024
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆713 · Updated Aug 13, 2024
- ☆129 · Updated Jan 22, 2024
- ☆35 · Updated Dec 22, 2025
- ☆25 · Updated Oct 31, 2024
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" · ☆25 · Updated Sep 27, 2023
- A demo of lossless compression with Transformers as encoder and decoder. · ☆14 · Updated May 2, 2024
- ESRGAN implemented in Rust with candle · ☆17 · Updated Dec 6, 2023
- AFPQ code implementation · ☆23 · Updated Nov 6, 2023
- A simple library for working with Hugging Face models. · ☆14 · Updated Dec 30, 2024
- new optimizer · ☆20 · Updated Aug 4, 2024
- Official repository of "Sparse ISO-FLOP Transformations for Maximizing Training Efficiency" · ☆25 · Updated Jul 31, 2024
- My implementation of "Q-Sparse: All Large Language Models can be Fully Sparsely-Activated" · ☆34 · Updated Aug 14, 2024