andyjm3 / SLTrain
SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024)
☆39 · Updated last year
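For context on the method itself: SLTrain models each weight matrix as the sum of a trainable low-rank factorization and a sparse matrix whose support is fixed at random initialization, roughly W ≈ BA + S, and trains B, A, and the nonzero values of S. The PyTorch sketch below is a minimal, unofficial illustration of that parameterization; the class name `SparsePlusLowRankLinear`, the rank, the density, and the initialization scales are illustrative assumptions, not the repository's actual API or defaults.

```python
import torch
import torch.nn as nn


class SparsePlusLowRankLinear(nn.Module):
    """Unofficial sketch of a sparse + low-rank parameterized linear layer.

    The weight is modeled as W = B @ A + S, where B (out x r) and A (r x in)
    are dense low-rank factors and S is a sparse matrix whose support is fixed
    at initialization and whose nonzero values are trainable. The rank and
    density below are illustrative hyperparameters, not SLTrain defaults.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 8, density: float = 0.03):
        super().__init__()
        self.B = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.02)

        # Fixed random support for the sparse component; only its values are learned.
        nnz = max(1, int(density * in_features * out_features))
        idx = torch.randperm(in_features * out_features)[:nnz]
        self.register_buffer("rows", idx // in_features)
        self.register_buffer("cols", idx % in_features)
        self.values = nn.Parameter(torch.zeros(nnz))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight(self) -> torch.Tensor:
        # Materialize W = B A + S (fine for a demo; a real implementation would
        # keep the two components separate to preserve the memory savings).
        w = self.B @ self.A
        return w.index_put((self.rows, self.cols), self.values, accumulate=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight(), self.bias)


if __name__ == "__main__":
    layer = SparsePlusLowRankLinear(512, 512)
    out = layer(torch.randn(4, 512))
    print(out.shape)  # torch.Size([4, 512])
```

In practice the low-rank and sparse components would be applied to the input separately rather than summed into a dense weight, which is what makes the parameterization memory efficient during pretraining.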
Alternatives and similar repositories for SLTrain
Users interested in SLTrain are comparing it to the repositories listed below.
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆13 · Updated last year
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆70 · Updated last year
- A curated list of Model Merging methods. ☆96 · Updated 2 months ago
- Unofficial Implementation of Selective Attention Transformer ☆20 · Updated last year
- ☆35 · Updated 11 months ago
- ☆63 · Updated 2 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 9 months ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆24 · Updated last year
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆26 · Updated last month
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆36 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- Official PyTorch implementation of our paper accepted at ICLR 2024: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆50 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆32 · Updated 4 months ago
- ☆63 · Updated last year
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆25 · Updated 11 months ago
- [ICML'24] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆123 · Updated 7 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆195 · Updated last year
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective ☆63 · Updated 10 months ago
- ☆13 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆88 · Updated last year
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated last year
- ☆36 · Updated 10 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆47 · Updated last year
- ☆53 · Updated last month
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆33 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆18 · Updated last year
- ☆23 · Updated last year