[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2
★284, updated Aug 28, 2025
Alternatives and similar repositories for SVD-LLM
Users interested in SVD-LLM are comparing it to the repositories listed below.
- Activation-aware Singular Value Decomposition for Compressing Large Language Models (★90, updated Oct 22, 2024)
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection (★155, updated Feb 20, 2025)
- This repository provides the official implementation of QSVD, a method for efficient low-rank approximation that unifies Query-Key-Value … (★25, updated Dec 1, 2025)
- The official implementation of the DAC 2024 paper GQA-LUT (★21, updated Dec 20, 2024)
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" (★380, updated Feb 14, 2025)
- ★21, updated Oct 2, 2024
- For releasing code related to compression methods for transformers, accompanying our publications (★455, updated Jan 16, 2025)
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" (★81, updated Jul 7, 2025)
- Awesome LLM compression research papers and tools (★1,789, updated Feb 23, 2026)
- Official PyTorch implementation of CD-MOE (★12, updated Mar 13, 2026)
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models (★39, updated Jan 9, 2025)
- PyTorch implementation of "Language model compression with weighted low-rank factorization" (★13, updated Jun 28, 2023)
- Code for the NeurIPS 2024 paper QuaRot, an end-to-end 4-bit inference method for large language models (★492, updated Nov 26, 2024)
- [ICLR 2025] Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives (★52, updated Oct 19, 2025)
- ACL 2023 (★39, updated Jun 6, 2023)
- YiTu is an easy-to-use runtime that fully exploits the hybrid parallelism of different hardware (e.g., GPUs) to efficiently support the exec… (★254, updated Jan 7, 2026)
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (★359, updated Nov 20, 2025)
- Unified KV Cache Compression Methods for Auto-Regressive Models (★1,311, updated Jan 4, 2025)
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" (★211, updated Nov 25, 2025)
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization (★172, updated Nov 26, 2025)
- Awesome list for LLM pruning (★290, updated Oct 11, 2025)
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models (★330, updated Nov 26, 2025)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (★277, updated Aug 31, 2024)
- Fast Hadamard transform in CUDA, with a PyTorch interface (★293, updated Mar 10, 2026)
- The official implementation of the EMNLP 2023 paper LLM-FP4 (★222, updated Dec 15, 2023)
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition (★18, updated Apr 16, 2025)
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models (★15, updated Jul 18, 2024)
- [NeurIPS 2024] Search for Efficient LLMs (★16, updated Jan 16, 2025)
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs (★98, updated Nov 25, 2024)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (★818, updated Mar 6, 2025)
- Explainable Person Re-Identification with Attribute-guided Metric Distillation (★99, updated Jul 18, 2022)
- Official code of the ICML 2025 paper "NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Predicti…" (★134, updated Oct 27, 2025)
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth Is Rarely Pure and Never Simple (★27, updated Apr 21, 2025)
- ★32, updated Nov 11, 2024
- Visualization, simulation, and manipulation of intrinsically disordered proteins with Gibbs sampling (★288, updated Oct 24, 2024)
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression (★73, updated Mar 25, 2025)
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… (★1,111, updated Oct 7, 2024)
- The official implementation of the ICML 2023 paper OFQ-ViT (★39, updated Oct 3, 2023)
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) (★254, updated Mar 13, 2025)