[ICLR 2025 🔥] SVD-LLM & [NAACL 2025 🔥] SVD-LLM V2
★ 290 · Aug 28, 2025 · Updated 8 months ago
Alternatives and similar repositories for SVD-LLM
Users interested in SVD-LLM are comparing it to the repositories listed below.
- Activation-aware Singular Value Decomposition for Compressing Large Language Models · ★ 92 · Oct 22, 2024 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection · ★ 154 · Feb 20, 2025 · Updated last year
- This repository provides the official implementation of QSVD, a method for efficient low-rank approximation that unifies Query-Key-Value … · ★ 26 · Dec 1, 2025 · Updated 5 months ago
- The official implementation of the DAC 2024 paper GQA-LUT · ★ 22 · Dec 20, 2024 · Updated last year
- Code repo for the paper "SpinQuant: LLM Quantization with Learned Rotations" · ★ 390 · Feb 14, 2025 · Updated last year
- ★ 21 · Oct 2, 2024 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications · ★ 461 · Jan 16, 2025 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ★ 81 · Jul 7, 2025 · Updated 9 months ago
- Awesome LLM compression research papers and tools. · ★ 1,824 · Feb 23, 2026 · Updated 2 months ago
- Official PyTorch implementation of CD-MoE · ★ 12 · Mar 18, 2026 · Updated last month
- PyTorch implementation of "Language Model Compression with Weighted Low-Rank Factorization" · ★ 13 · Jun 28, 2023 · Updated 2 years ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models · ★ 39 · Jan 9, 2025 · Updated last year
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models. · ★ 506 · Nov 26, 2024 · Updated last year
- [ICLR 2025] Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives · ★ 54 · Oct 19, 2025 · Updated 6 months ago
- ACL 2023 · ★ 39 · Jun 6, 2023 · Updated 2 years ago
- YiTu is an easy-to-use runtime that fully exploits the hybrid parallelism of different hardware (e.g., GPUs) to efficiently support the exec… · ★ 254 · Jan 7, 2026 · Updated 3 months ago
- [ICML 2025] Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization" · ★ 213 · Nov 25, 2025 · Updated 5 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models · ★ 1,328 · Jan 4, 2025 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ★ 387 · Nov 20, 2025 · Updated 5 months ago
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization · ★ 171 · Nov 26, 2025 · Updated 5 months ago
- Awesome list for LLM pruning. · ★ 291 · Oct 11, 2025 · Updated 6 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models · ★ 337 · Apr 10, 2026 · Updated 3 weeks ago
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding · ★ 279 · Aug 31, 2024 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface · ★ 310 · Mar 10, 2026 · Updated last month
- The official implementation of the EMNLP 2023 paper LLM-FP4 · ★ 224 · Dec 15, 2023 · Updated 2 years ago
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and finetune the quantized LLMs · ★ 15 · Jul 18, 2024 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs · ★ 16 · Jan 16, 2025 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs · ★ 99 · Nov 25, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving · ★ 834 · Mar 6, 2025 · Updated last year
- Explainable Person Re-Identification with Attribute-guided Metric Distillation · ★ 99 · Jul 18, 2022 · Updated 3 years ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition · ★ 21 · Apr 16, 2025 · Updated last year
- Official code of the ICML 2025 paper "NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Prediction" · ★ 134 · Oct 27, 2025 · Updated 6 months ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. "Compressing LLMs: The Truth Is Rarely Pure and Never Simple." · ★ 27 · Apr 21, 2025 · Updated last year
- ★ 33 · Nov 11, 2024 · Updated last year
- Visualization, simulation, and manipulation of intrinsically disordered proteins with Gibbs sampling · ★ 288 · Oct 24, 2024 · Updated last year
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression · ★ 82 · Mar 25, 2025 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan… · ★ 1,123 · Oct 7, 2024 · Updated last year
- The official implementation of the ICML 2023 paper OFQ-ViT · ★ 39 · Oct 3, 2023 · Updated 2 years ago
- A curated list for Efficient Large Language Models · ★ 1,993 · Jun 17, 2025 · Updated 10 months ago
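Many of the low-rank entries above (SVD-LLM, Palu, QSVD, Dobi-SVD, weighted low-rank factorization) share the same core idea: replace a dense weight matrix with a truncated SVD to cut parameter count. A minimal NumPy sketch of that idea, with hypothetical shapes and rank chosen for illustration (real methods such as SVD-LLM additionally weight the decomposition with activation statistics, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))  # hypothetical dense weight matrix
r = 64                               # target rank: the compression knob

# Truncated SVD: keep only the r largest singular values.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                 # shape (512, r); singular values folded into A
B = Vt[:r, :]                        # shape (r, 512)

# At inference time, W @ x is replaced by A @ (B @ x).
params_before = W.size               # 262144
params_after = A.size + B.size       # 65536, i.e. a 4x parameter reduction
print(params_before, params_after)
```

The choice of `r` trades accuracy for compression: the Frobenius-norm error of `A @ B` versus `W` equals the energy in the discarded singular values, which is why activation-aware variants reorder that energy before truncating.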