pvti / Awesome-Tensor-Decomposition
A curated list of tensor decomposition resources for model compression.
☆51 · Updated this week
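For readers new to the topic, the sketch below illustrates the core idea behind tensor-decomposition-based model compression: replace a dense convolution kernel with low-rank CP factors and compare parameter counts. It is a minimal, illustrative example, not code from any repository listed on this page; it assumes TensorLy's `parafac` API, and the layer shape and rank are arbitrary.

```python
# Minimal sketch of CP-based compression of a conv kernel (illustrative only).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Stand-in for a trained Conv2d kernel: (out_channels, in_channels, kH, kW).
kernel = np.random.randn(64, 32, 3, 3)

rank = 16  # hypothetical target rank; in practice set by an accuracy/size trade-off
cp = parafac(tl.tensor(kernel), rank=rank)

# Parameter count before vs. after: the CP factor matrices replace the dense kernel.
dense_params = kernel.size
factor_params = sum(f.shape[0] * rank for f in cp.factors)
print(f"dense: {dense_params}, CP (rank {rank}): {factor_params}")

# Reconstruction error gives a rough sense of how lossy the compression is.
approx = tl.cp_to_tensor(cp)
rel_err = np.linalg.norm(kernel - approx) / np.linalg.norm(kernel)
print(f"relative reconstruction error: {rel_err:.3f}")
```

In practice the factors are mapped back onto a sequence of smaller convolutions and the network is fine-tuned to recover accuracy; several repositories below cover those steps.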
Alternatives and similar repositories for Awesome-Tensor-Decomposition:
Users interested in Awesome-Tensor-Decomposition are comparing it to the repositories listed below.
- Efficient tensor decomposition-based filter pruning ☆16 · Updated 8 months ago
- ☆12 · Updated 3 years ago
- Enhanced Network Compression Through Tensor Decompositions and Pruning ☆8 · Updated last month
- A thorough survey of tensorial neural networks. ☆126 · Updated 2 months ago
- A collection of optimizer-related papers, data, and repositories ☆89 · Updated 4 months ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆32 · Updated 2 months ago
- TedNet: A PyTorch Toolkit for Tensor Decomposition Networks ☆94 · Updated 2 years ago
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆30 · Updated 4 months ago
- MNIST experiment from "Tensorizing Neural Networks" (Novikov et al., 2015) ☆13 · Updated 5 years ago
- ☆231 · Updated 7 months ago
- Awesome papers and resources in deep neural network pruning, with source code. ☆152 · Updated 6 months ago
- This repo contains code for studying the interplay between quantization and sparsity methods ☆15 · Updated 3 weeks ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆52 · Updated 5 months ago
- Applies CP, Tucker, TT/TR, and HT decompositions to compress neural networks, trained from scratch. ☆12 · Updated 4 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆59 · Updated last year
- ☆19 · Updated last month
- ☆22 · Updated last year
- ☆50 · Updated last year
- ☆75 · Updated 2 years ago
- Welcome to the 'In Context Learning Theory' Reading Group ☆28 · Updated 4 months ago
- ☆40 · Updated last year
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆28 · Updated 3 years ago
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs" ☆45 · Updated 11 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆16 · Updated 3 months ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆32 · Updated 2 years ago
- An all-in-one repository of LLM pruning papers, integrating useful resources and insights. ☆76 · Updated 3 months ago
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆13 · Updated 8 months ago
- [ICML 2023] Official implementation of "BiBench: Benchmarking and Analyzing Network Binarization" ☆55 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆59 · Updated 8 months ago