Yanjun-Zhao / HiZOO
Second-Order Fine-Tuning without Pain for LLMs: a Hessian Informed Zeroth-Order Optimizer
☆ 20 · Updated 8 months ago
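For context on the method the description names, here is a minimal, illustrative sketch of a Hessian-informed zeroth-order update in the spirit of the repository's title. This is not HiZOO's actual implementation: the two-point SPSA-style gradient estimate is standard, while the diagonal curvature estimate `h_diag`, the EMA decay `beta`, and the function name `zo_step` are hypothetical choices made for this example.

```python
# Illustrative sketch of a Hessian-informed zeroth-order (SPSA-style) step.
# NOT the exact HiZOO algorithm -- a generic two-point estimator plus a
# hypothetical diagonal curvature preconditioner, for intuition only.
import numpy as np

def zo_step(loss_fn, theta, h_diag, mu=1e-3, lr=1e-2, beta=0.99):
    """One zeroth-order update on parameters `theta` (1-D array).

    loss_fn : callable mapping a parameter vector to a scalar loss
    h_diag  : running diagonal curvature estimate (same shape as theta)
    mu      : perturbation scale; lr : step size; beta : curvature EMA decay
    """
    z = np.random.randn(*theta.shape)          # random probe direction
    l_plus = loss_fn(theta + mu * z)           # forward evaluation
    l_minus = loss_fn(theta - mu * z)          # backward evaluation
    g_hat = (l_plus - l_minus) / (2 * mu) * z  # two-point gradient estimate

    # Crude diagonal-curvature proxy from a central second difference along z
    # (one extra forward pass); purely illustrative, not HiZOO's estimator.
    l_zero = loss_fn(theta)
    curv = (l_plus - 2 * l_zero + l_minus) / (mu ** 2)
    h_diag = beta * h_diag + (1 - beta) * np.abs(curv) * z ** 2

    # Precondition the step by the curvature estimate, Adam-style.
    theta = theta - lr * g_hat / (np.sqrt(h_diag) + 1e-8)
    return theta, h_diag
```

For instance, a few calls of `zo_step(lambda t: float(t @ t), np.ones(4), np.ones(4))` walk a quadratic toward its minimum using only forward passes, which is the memory advantage zeroth-order fine-tuning trades on.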
Alternatives and similar repositories for HiZOO
Users interested in HiZOO are comparing it to the repositories listed below:
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆ 111 · Updated 3 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆ 66 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆ 45 · Updated last year
- ☆ 61 · Updated 2 years ago
- ☆ 51 · Updated last year
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆ 72 · Updated 3 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆ 21 · Updated 10 months ago
- ☆ 59 · Updated 10 months ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs Can Be Trained in Tiny Subspaces ☆ 42 · Updated 3 years ago
- An implementation of the DISP-LLM method from the NeurIPS 2024 paper "Dimension-Independent Structural Pruning for Large Language Models". ☆ 22 · Updated 2 months ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆ 14 · Updated last year
- Official PyTorch implementation of our paper accepted at ICLR 2024, "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" ☆ 50 · Updated last year
- Awesome-Low-Rank-Adaptation ☆ 116 · Updated last year
- ☆ 14 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆ 80 · Updated 11 months ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆ 31 · Updated 2 years ago
- A curated list of early-exiting resources (LLM, CV, NLP, etc.) ☆ 65 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆ 34 · Updated 11 months ago
- [NeurIPS 2021] Code for "Taxonomizing local versus global structure in neural network loss landscapes" (https://arxiv.org/abs/2107.11228) ☆ 19 · Updated 3 years ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆ 35 · Updated last year
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆ 13 · Updated last year
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆ 59 · Updated 2 years ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆ 15 · Updated 6 months ago
- ☆ 218 · Updated 2 years ago
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆ 23 · Updated 2 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆ 29 · Updated 3 years ago
- [AAAI 2022] Up to 100x Faster Data-free Knowledge Distillation ☆ 74 · Updated 2 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆ 46 · Updated 3 years ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆ 45 · Updated last year
- 😎 A curated list of tensor decomposition resources for model compression ☆ 83 · Updated last month