vantienpham / Awesome-Tensor-Decomposition
A curated list of tensor decomposition resources for model compression.
★87 · Updated last week
Alternatives and similar repositories for Awesome-Tensor-Decomposition
Users interested in Awesome-Tensor-Decomposition are comparing it to the repositories listed below.
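The common thread in the repositories below is replacing large weight tensors with low-rank factorizations. As a rough, self-contained illustration of that idea (not code from any listed project; the layer sizes and the rank are arbitrary example values), the sketch below compresses an `nn.Linear` layer with a truncated SVD, the simplest such factorization:

```python
# Minimal sketch: low-rank compression of a linear layer via truncated SVD.
# CP/Tucker/tensor-train pipelines follow the same replace-then-finetune pattern.
import torch
import torch.nn as nn

def compress_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate W (out x in) by two factors of the given rank."""
    W = layer.weight.data                                  # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                           # (out, rank)
    V_r = Vh[:rank, :]                                     # (rank, in)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)

layer = nn.Linear(1024, 1024)
compressed = compress_linear(layer, rank=64)               # ~8x fewer parameters
x = torch.randn(2, 1024)
print(torch.dist(layer(x), compressed(x)))                 # approximation error
```

In practice the rank is chosen per layer (or learned, as in several entries below) and the factored model is briefly fine-tuned to recover accuracy.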
- A thorough survey of tensorial neural networks. ★141 · Updated 10 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ★65 · Updated last year
- A collection of optimizer-related papers, data, and repositories. ★98 · Updated last year
- ★45 · Updated last year
- ★284 · Updated last year
- Neural Tangent Kernel Papers ★119 · Updated 10 months ago
- ★13 · Updated 3 years ago
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ★117 · Updated 4 months ago
- (NeurIPS 2024) QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation ★32 · Updated this week
- Second-Order Fine-Tuning without Pain for LLMs: a Hessian Informed Zeroth-Order Optimizer ★20 · Updated 9 months ago
- Official code for the ICLR 2025 paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives". ★48 · Updated last month
- Code to simulate energy-based analog systems and equilibrium propagation. ★30 · Updated 7 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ★21 · Updated last week
- MNIST experiment from "Tensorizing Neural Networks" (Novikov et al., 2015). ★13 · Updated 6 years ago
- ★37 · Updated 3 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ★13 · Updated last year
- A library for calculating the FLOPs of the forward() pass based on torch.fx (a minimal counting sketch follows this list). ★132 · Updated 7 months ago
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ★38 · Updated 10 months ago
- This repository collects low-bit quantization papers from 2020 to 2025 at top conferences. ★68 · Updated last month
- This repo contains the code for studying the interplay between quantization and sparsity methods. ★23 · Updated 8 months ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ★59 · Updated 2 years ago
- Implementation of the LPLR algorithm for matrix compression. ★31 · Updated 2 years ago
- TedNet: A Pytorch Toolkit for Tensor Decomposition Networks ★96 · Updated 3 years ago
- Summer school materials. ★46 · Updated 2 years ago
- ★220 · Updated 2 years ago
- Awesome Pruning. Curated Resources for Neural Network Pruning. ★170 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ★36 · Updated last year
- Torch2Chip (MLSys, 2024) ★54 · Updated 7 months ago
- ★61 · Updated 2 years ago
- Efficient tensor decomposition-based filter pruning. ★18 · Updated 4 months ago
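One entry above, the torch.fx-based FLOP counter, lends itself to a short illustration. The sketch below is not that library's implementation; it is a minimal version under simplifying assumptions that walks a traced graph and counts forward-pass FLOPs for nn.Linear modules only (convolutions, attention, and shape propagation are omitted).

```python
# Hedged sketch: count forward-pass FLOPs for nn.Linear modules by walking
# a torch.fx graph. Per sample, a linear layer costs ~2 * in * out FLOPs
# (one multiply and one add per weight).
import torch.nn as nn
from torch.fx import symbolic_trace

def linear_flops(model: nn.Module) -> int:
    traced = symbolic_trace(model)              # build an FX graph of forward()
    modules = dict(traced.named_modules())
    total = 0
    for node in traced.graph.nodes:
        if node.op == "call_module" and isinstance(modules[node.target], nn.Linear):
            lin = modules[node.target]
            total += 2 * lin.in_features * lin.out_features
    return total

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
print(linear_flops(model))  # 2 * (784*256 + 256*10) = 406,528 FLOPs per sample
```

A production counter such as the repository listed above additionally handles call_function nodes, propagates input shapes to cover batched and convolutional ops, and reports per-module breakdowns.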