[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2
⭐281 · Aug 28, 2025 · Updated 6 months ago
Alternatives and similar repositories for SVD-LLM
Users interested in SVD-LLM are comparing it to the libraries listed below.
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ⭐88 · Oct 22, 2024 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ⭐155 · Feb 20, 2025 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT ⭐20 · Dec 20, 2024 · Updated last year
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ⭐373 · Feb 14, 2025 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ⭐81 · Jul 7, 2025 · Updated 7 months ago
- Official PyTorch implementation of CD-MOE ⭐12 · Mar 29, 2025 · Updated 11 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ⭐358 · Nov 20, 2025 · Updated 3 months ago
- For releasing code related to compression methods for transformers, accompanying our publications ⭐454 · Jan 16, 2025 · Updated last year
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference for large language models ⭐485 · Nov 26, 2024 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ⭐285 · Oct 19, 2025 · Updated 4 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding ⭐277 · Aug 31, 2024 · Updated last year
- Awesome LLM compression research papers and tools. ⭐1,780 · Feb 23, 2026 · Updated last week
- ⭐32 · Nov 11, 2024 · Updated last year
- Unified KV Cache Compression Methods for Auto-Regressive Models ⭐1,305 · Jan 4, 2025 · Updated last year
- Awesome list for LLM pruning. ⭐288 · Oct 11, 2025 · Updated 4 months ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ⭐39 · Jan 9, 2025 · Updated last year
- [TMLR] Official PyTorch implementation of paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ⭐37 · Aug 20, 2024 · Updated last year
- This repository provides the official implementation of QSVD, a method for efficient low-rank approximation that unifies Query-Key-Value … ⭐25 · Dec 1, 2025 · Updated 3 months ago
- ⭐19 · Jun 1, 2025 · Updated 9 months ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ⭐72 · Mar 25, 2025 · Updated 11 months ago
- YiTu is an easy-to-use runtime that fully exploits the hybrid parallelism of different hardware (e.g., GPUs) to efficiently support the exec… ⭐254 · Jan 7, 2026 · Updated last month
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" ⭐211 · Nov 25, 2025 · Updated 3 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ⭐98 · Nov 25, 2024 · Updated last year
- Explainable Person Re-Identification with Attribute-guided Metric Distillation ⭐99 · Jul 18, 2022 · Updated 3 years ago
- ⭐19 · Oct 2, 2024 · Updated last year
- [EMNLP 2024] Quantize LLM to extremely low-bit, and finetune the quantized LLMs ⭐15 · Jul 18, 2024 · Updated last year
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ⭐172 · Nov 26, 2025 · Updated 3 months ago
- Vue2 admin system: one-click CRUD configuration for a 300% efficiency boost ⭐42 · May 25, 2023 · Updated 2 years ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ⭐23 · Nov 11, 2025 · Updated 3 months ago
- ACL 2023 ⭐39 · Jun 6, 2023 · Updated 2 years ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ⭐327 · Nov 26, 2025 · Updated 3 months ago
- PyTorch implementation of Language model compression with weighted low-rank factorization ⭐13 · Jun 28, 2023 · Updated 2 years ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ⭐220 · Dec 15, 2023 · Updated 2 years ago
- Official code of ICML 2025 paper "NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Predicti… ⭐135 · Oct 27, 2025 · Updated 4 months ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth Is Rarely Pure and Never Simple. ⭐27 · Apr 21, 2025 · Updated 10 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ⭐816 · Mar 6, 2025 · Updated 11 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ⭐123 · Jul 4, 2025 · Updated 7 months ago
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ⭐27 · Jun 25, 2024 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ⭐177 · Jul 12, 2024 · Updated last year
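Many of the repositories above (SVD-LLM, Palu, QSVD, the weighted low-rank factorization work) share one core idea: replace a dense weight matrix with two thin low-rank factors obtained from a truncated SVD. The following is a minimal illustrative sketch of that idea only, not the SVD-LLM algorithm itself, which additionally whitens the decomposition with activation statistics; the matrix size and rank are arbitrary choices for the example.

```python
import numpy as np

# Sketch: compress a weight matrix W by truncated SVD, keeping the
# top-r singular values. This is the generic low-rank recipe, not any
# specific repo's method; sizes and rank are illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))

r = 32  # target rank (hypothetical compression budget)
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # shape (256, r): left factor, scaled by singular values
B = Vt[:r, :]          # shape (r, 256): right factor

# A linear layer y = W x becomes y = A (B x): two thin matmuls.
params_before = W.size              # 256 * 256 = 65,536
params_after = A.size + B.size      # 2 * 256 * 32 = 16,384 (4x smaller)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)  # relative Frobenius error
print(params_before, params_after, err)
```

By the Eckart-Young theorem this truncation is the best rank-r approximation in Frobenius norm; the activation-aware methods listed above improve on it by minimizing the error of `W x` on real activations rather than the error of `W` itself.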