[ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2
★289 · Aug 28, 2025 · Updated 7 months ago
Alternatives and similar repositories for SVD-LLM
Users interested in SVD-LLM are comparing it to the repositories listed below.
- Activation-aware Singular Value Decomposition for Compressing Large Language Models · ★91 · Oct 22, 2024 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection · ★153 · Feb 20, 2025 · Updated last year
- This repository provides the official implementation of QSVD, a method for efficient low-rank approximation that unifies Query-Key-Value … · ★26 · Dec 1, 2025 · Updated 4 months ago
- The official implementation of the DAC 2024 paper GQA-LUT · ★22 · Dec 20, 2024 · Updated last year
- Code repo for the paper "SpinQuant: LLM Quantization with Learned Rotations" · ★383 · Feb 14, 2025 · Updated last year
- ★21 · Oct 2, 2024 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications · ★459 · Jan 16, 2025 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ★81 · Jul 7, 2025 · Updated 9 months ago
- Awesome LLM compression research papers and tools. · ★1,796 · Feb 23, 2026 · Updated last month
- Official PyTorch implementation of CD-MoE · ★12 · Mar 18, 2026 · Updated 3 weeks ago
- PyTorch implementation of "Language Model Compression with Weighted Low-Rank Factorization" · ★13 · Jun 28, 2023 · Updated 2 years ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models · ★39 · Jan 9, 2025 · Updated last year
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. · ★501 · Nov 26, 2024 · Updated last year
- [ICLR 2025] Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives · ★52 · Oct 19, 2025 · Updated 5 months ago
- ACL 2023 · ★39 · Jun 6, 2023 · Updated 2 years ago
- YiTu is an easy-to-use runtime to fully exploit the hybrid parallelism of different hardware (e.g., GPUs) to efficiently support the exec… · ★254 · Jan 7, 2026 · Updated 3 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" · ★212 · Nov 25, 2025 · Updated 4 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models · ★1,319 · Jan 4, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ★381 · Nov 20, 2025 · Updated 4 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ★170 · Nov 26, 2025 · Updated 4 months ago
- Awesome list for LLM pruning. · ★287 · Oct 11, 2025 · Updated 6 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models · ★336 · Nov 26, 2025 · Updated 4 months ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ★279 · Aug 31, 2024 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface · ★304 · Mar 10, 2026 · Updated last month
- The official implementation of the EMNLP 2023 paper LLM-FP4 · ★222 · Dec 15, 2023 · Updated 2 years ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition · ★19 · Apr 16, 2025 · Updated 11 months ago
- [NeurIPS 2024] Search for Efficient LLMs · ★16 · Jan 16, 2025 · Updated last year
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized models · ★15 · Jul 18, 2024 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs · ★98 · Nov 25, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ★822 · Mar 6, 2025 · Updated last year
- Explainable Person Re-Identification with Attribute-guided Metric Distillation · ★99 · Jul 18, 2022 · Updated 3 years ago
- Official code of the ICML 2025 paper "NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Predicti… · ★134 · Oct 27, 2025 · Updated 5 months ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The Truth Is Rarely Pure and Never Simple." · ★27 · Apr 21, 2025 · Updated 11 months ago
- ★33 · Nov 11, 2024 · Updated last year
- Visualization, simulation, and manipulation of intrinsically disordered proteins with Gibbs sampling · ★288 · Oct 24, 2024 · Updated last year
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression · ★79 · Mar 25, 2025 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… · ★1,115 · Oct 7, 2024 · Updated last year
- The official implementation of the ICML 2023 paper OFQ-ViT · ★39 · Oct 3, 2023 · Updated 2 years ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) · ★256 · Mar 13, 2025 · Updated last year