Yanjun-Zhao / HiZOO
Second-Order Fine-Tuning without Pain for LLMs: A Hessian-Informed Zeroth-Order Optimizer
☆20 · Updated 9 months ago
Alternatives and similar repositories for HiZOO
Users interested in HiZOO are comparing it to the repositories listed below.
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆117 · Updated 4 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆65 · Updated last year
- ☆25 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs. ☆21 · Updated 2 weeks ago
- ☆62 · Updated 2 years ago
- ☆61 · Updated 11 months ago
- A curated list of early exiting (LLM, CV, NLP, etc.). ☆68 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning. ☆36 · Updated last year
- ☆52 · Updated last year
- ☆14 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024). ☆36 · Updated last year
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs Can Be Trained in Tiny Subspaces. ☆42 · Updated 3 years ago
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation. ☆12 · Updated 2 years ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression. ☆14 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation. ☆123 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models. ☆80 · Updated last year
- 😎 A curated list of tensor decomposition resources for model compression. ☆87 · Updated 2 weeks ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- ☆221 · Updated 2 years ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective". ☆63 · Updated 8 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral). ☆13 · Updated last year
- Official PyTorch implementation of the paper accepted at ICLR 2024: "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…". ☆50 · Updated last year
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low-Rank Decomposition. ☆15 · Updated 7 months ago
- An implementation of the penalty-based bilevel gradient descent (PBGD) algorithm and the iterative differentiation (ITD/RHG) methods. ☆19 · Updated 2 years ago
- Welcome to the "In-Context Learning Theory" reading group. ☆30 · Updated last year
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆176 · Updated last year
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024). ☆67 · Updated 8 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference. ☆46 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity". ☆74 · Updated 4 months ago
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models". ☆23 · Updated 3 months ago