UbiquitousLearning / Backpropagation_Free_Training_Survey
☆25 · Updated last year
Alternatives and similar repositories for Backpropagation_Free_Training_Survey
Users interested in Backpropagation_Free_Training_Survey are comparing it to the repositories listed below.
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" (see the zeroth-order sketch after this list) ☆110 · Updated last month
- A curated list of early exiting (LLM, CV, NLP, etc.) ☆58 · Updated last year
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆66 · Updated 10 months ago
- Second-Order Fine-Tuning without Pain for LLMs: a Hessian Informed Zeroth-Order Optimizer ☆19 · Updated 6 months ago
- Survey Paper List - Efficient LLM and Foundation Models ☆255 · Updated 11 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆36 · Updated last year
- "Efficient Federated Learning for Modern NLP", to appear at MobiCom 2023. ☆34 · Updated 2 years ago
- ☆35 · Updated last year
- This is an official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023). ☆51 · Updated last year
- The official implementation of TinyTrain [ICML '24] ☆22 · Updated last year
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ☆412 · Updated last month
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆33 · Updated 10 months ago
- An efficient and general framework for layerwise-adaptive gradient compression ☆14 · Updated last year
- ☆100 · Updated last year
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆134 · Updated 5 months ago
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- ☆18 · Updated last year
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆13 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆67 · Updated 6 months ago
- Efficient LLM Inference Acceleration using Prompting ☆50 · Updated 10 months ago
- ☆59 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆88 · Updated 10 months ago
- EE-LLM is a framework for large-scale training and inference of early-exit (EE) large language models (LLMs). ☆67 · Updated last year
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees". ☆28 · Updated 2 years ago
- Split Learning Simulation Framework for LLMs ☆28 · Updated 11 months ago
- ☆90 · Updated 8 months ago
- A curated list of Model Merging methods. ☆92 · Updated 11 months ago
- This repository contains the implementation of the paper "MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models". ☆20 · Updated 3 months ago
- How much energy do GenAI models consume? ☆47 · Updated 3 months ago
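
Several of the entries above (the ICML 2024 zeroth-order benchmark, DeepZero, the Hessian-informed zeroth-order optimizer, and the backpropagation-free federated learning paper) share the same building block: estimating gradients from forward passes only. The snippet below is a minimal, illustrative sketch of a two-point zeroth-order estimator driving plain SGD, assuming a toy NumPy setup; it is not taken from any listed repository, and all names (`zo_gradient`, `zo_sgd`, `mu`, `lr`) are placeholders.

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate: two forward passes, no backprop."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(theta.shape)            # random perturbation direction
    delta = loss_fn(theta + mu * z) - loss_fn(theta - mu * z)
    return (delta / (2.0 * mu)) * z                 # directional (noisy) gradient estimate

def zo_sgd(loss_fn, theta, lr=1e-2, num_steps=500, mu=1e-3, seed=0):
    """Plain SGD driven by the zeroth-order estimate instead of backpropagated gradients."""
    rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        theta = theta - lr * zo_gradient(loss_fn, theta, mu, rng)
    return theta

if __name__ == "__main__":
    # Toy quadratic with minimum at [1, -2]; the iterate approaches it approximately,
    # since the two-point estimate is noisy.
    target = np.array([1.0, -2.0])
    loss = lambda t: float(np.sum((t - target) ** 2))
    print(zo_sgd(loss, np.zeros(2)))
```

The memory appeal behind the listed methods is visible even in this sketch: only the parameters and two scalar losses are needed per update, with no activation storage for a backward pass.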