JingXuTHU / Random-Masking-Finds-Winning-Tickets-for-Parameter-Efficient-Fine-tuning
☆13 · Updated 10 months ago
Alternatives and similar repositories for Random-Masking-Finds-Winning-Tickets-for-Parameter-Efficient-Fine-tuning:
Users interested in Random-Masking-Finds-Winning-Tickets-for-Parameter-Efficient-Fine-tuning are comparing it to the repositories listed below.
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆72 · Updated 4 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆42 · Updated 5 months ago
- A curated list of Model Merging methods. ☆91 · Updated 6 months ago
- ☆50 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆43 · Updated 2 years ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆91 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆52 · Updated 3 weeks ago
- Awesome-Low-Rank-Adaptation ☆83 · Updated 5 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆97 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆42 · Updated 5 months ago
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆19 · Updated 6 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆72 · Updated last year
- ☆65 · Updated 3 years ago
- ☆48 · Updated 3 months ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆15 · Updated last month
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆52 · Updated 5 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆23 · Updated 2 months ago
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆17 · Updated 6 months ago
- Welcome to the 'In Context Learning Theory' Reading Group ☆28 · Updated 4 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆30 · Updated 4 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆31 · Updated 2 weeks ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 9 months ago
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted to NAACL 2024 Findings). ☆17 · Updated last month
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆14 · Updated 8 months ago
- Official PyTorch implementation of our ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" ☆45 · Updated 11 months ago
- ☆10 · Updated last month
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated 2 years ago
- Codebase for decoding compressed trust. ☆23 · Updated 10 months ago