Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry
☆42 · Updated Jan 15, 2024
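For context, sparse fine-tuning typically means training only the surviving weights of a pruned model: a fixed binary mask is recorded at pruning time and re-applied after every optimizer step so that zeroed weights stay zero. The sketch below illustrates that idea in plain PyTorch. It is a minimal conceptual example with hypothetical helper names (`make_masks`, `apply_masks`), not the API of this repository or of llmfoundry.

```python
import torch
import torch.nn as nn

def make_masks(model: nn.Module, sparsity: float = 0.5) -> dict[str, torch.Tensor]:
    """Magnitude-prune every 2-D weight matrix to `sparsity` and record a 0/1 mask."""
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() == 2:  # prune linear-layer weight matrices only
                k = max(1, int(p.numel() * sparsity))
                threshold = p.abs().flatten().kthvalue(k).values
                mask = (p.abs() > threshold).to(p.dtype)
                p.mul_(mask)          # zero out the pruned weights
                masks[name] = mask
    return masks

def apply_masks(model: nn.Module, masks: dict[str, torch.Tensor]) -> None:
    """Re-impose the fixed sparsity pattern after an optimizer step."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

# Usage in an ordinary fine-tuning loop (the mask is fixed; only surviving weights move):
#   masks = make_masks(model, sparsity=0.5)
#   for batch in loader:
#       loss = model(**batch).loss
#       loss.backward()
#       optimizer.step()
#       optimizer.zero_grad()
#       apply_masks(model, masks)  # momentum can nudge pruned weights off zero, so re-mask
```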
Alternatives and similar repositories for SparseFinetuning
Users interested in SparseFinetuning are comparing it to the libraries listed below.
- Boosting 4-bit inference kernels with 2:4 sparsity ☆93 · Updated Sep 4, 2024
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… ☆14 · Updated Dec 7, 2024
- ☆12 · Updated Jul 30, 2025
- An official implementation of "Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards" ☆36 · Updated Oct 3, 2025
- ☆30 · Updated Jul 22, 2024
- ☆16 · Updated Dec 9, 2023
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆279 · Updated Nov 3, 2023
- ☆41 · Updated Mar 28, 2024
- A simple library for working with Hugging Face models ☆14 · Updated Dec 30, 2024
- GPU operators for sparse tensor operations ☆35 · Updated Mar 11, 2024
- A demo of lossless compression with Transformers as encoder and decoder ☆14 · Updated May 2, 2024
- Code for the paper "Open-Domain Hierarchical Event Schema Induction by Incremental Prompting and Verification" ☆16 · Updated Jul 4, 2023
- A basic pure-PyTorch implementation of FlashAttention ☆16 · Updated Oct 28, 2024
- PB-LLM: Partially Binarized Large Language Models ☆156 · Updated Nov 20, 2023
- ESRGAN implemented in Rust with candle ☆17 · Updated Dec 6, 2023
- Hacks for PyTorch ☆19 · Updated Apr 18, 2023
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆39 · Updated Jan 9, 2025
- Targeted Data Generation with Large Language Models ☆19 · Updated Jun 25, 2024
- Scaling Sparse Fine-Tuning to Large Language Models ☆18 · Updated Jan 31, 2024
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆402 · Updated Aug 13, 2024
- Running inference on the ZeroSCROLLS benchmark ☆20 · Updated Apr 18, 2024
- Code for the MicroAdam paper ☆21 · Updated Dec 14, 2024
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆44 · Updated Feb 18, 2026
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆254 · Updated this week
- ☆23 · Updated Oct 4, 2024
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆184 · Updated Apr 16, 2024
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" ☆870 · Updated Aug 20, 2024
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆86 · Updated Dec 14, 2023
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆90 · Updated Sep 13, 2024
- Reorder-based post-training quantization for large language models ☆199 · Updated May 17, 2023
- Prototype routines for GPU quantization written in PyTorch ☆21 · Updated Feb 8, 2026
- ☆25 · Updated Oct 31, 2024
- A new optimizer ☆20 · Updated Aug 4, 2024
- ☆19 · Updated Nov 6, 2023
- Official implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆29 · Updated Nov 22, 2025
- ☆352 · Updated Apr 2, 2024
- FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration ☆20 · Updated Jun 27, 2025
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated Sep 20, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆31 · Updated Jan 28, 2026