pvti / Awesome-Tensor-Decomposition
A curated list of tensor decomposition resources for model compression.
☆68 · Updated this week
Alternatives and similar repositories for Awesome-Tensor-Decomposition
Users interested in Awesome-Tensor-Decomposition are comparing it to the repositories listed below.
- Efficient tensor decomposition-based filter pruning ☆16 · Updated 11 months ago
- ☆12 · Updated 3 years ago
- Enhanced Network Compression Through Tensor Decompositions and Pruning ☆8 · Updated 2 months ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆35 · Updated 5 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆19 · Updated 6 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆31 · Updated 7 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆59 · Updated 8 months ago
- Official code implementation for the ICLR 2025 accepted paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆34 · Updated 3 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆21 · Updated 3 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆13 · Updated 11 months ago
- A collection of optimizer-related papers, data, and repositories ☆91 · Updated 7 months ago
- ☆57 · Updated last year
- TedNet: A Pytorch Toolkit for Tensor Decomposition Networks ☆97 · Updated 3 years ago
- Welcome to the 'In Context Learning Theory' Reading Group ☆28 · Updated 7 months ago
- ☆28 · Updated 11 months ago
- ☆42 · Updated last year
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆13 · Updated 2 months ago
- Official PyTorch implementation of our paper accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization ☆59 · Updated last year
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆20 · Updated 11 months ago
- ☆46 · Updated last year
- [EMNLP 24] Source code for the paper 'AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tu… ☆11 · Updated 6 months ago
- ☆208 · Updated 2 years ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" ☆105 · Updated 11 months ago
- Efficient LLM Inference Acceleration using Prompting ☆48 · Updated 8 months ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- [ICLR 2023] "Revisiting Pruning At Initialization Through The Lens of Ramanujan Graph" by Duc Hoang, Shiwei Liu, Radu Marculescu, Atlas W… ☆13 · Updated last year
- [ICLR 2025] OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… ☆61 · Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆71 · Updated 8 months ago
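As a rough illustration of the low-rank compression idea that several of the listed repositories (the SVD-based LLM compression projects, for example) build on, here is a minimal NumPy sketch of truncated-SVD weight compression. It is a generic example, not code from any repository above; the matrix size and target rank are arbitrary.

```python
# Minimal sketch: compress a dense layer weight with a truncated SVD.
# Not taken from any listed repository; sizes and rank are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))   # stand-in for a trained layer's weight

rank = 64                                # target rank, chosen arbitrarily
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * S[:rank]               # shape (1024, rank)
B = Vt[:rank, :]                         # shape (rank, 1024)

W_approx = A @ B                         # low-rank reconstruction of W
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)

print(f"parameters: {W.size} -> {A.size + B.size}")
print(f"relative reconstruction error: {rel_err:.3f}")
```

At inference time the dense product y = W x can then be replaced by two smaller products y = A (B x), which is the basic storage/accuracy trade-off that the SVD-style compression entries above explore in more refined, activation-aware forms.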