JingXuTHU / Random-Masking-Finds-Winning-Tickets-for-Parameter-Efficient-Fine-tuning
☆13 · Updated 11 months ago
Alternatives and similar repositories for Random-Masking-Finds-Winning-Tickets-for-Parameter-Efficient-Fine-tuning:
Users interested in Random-Masking-Finds-Winning-Tickets-for-Parameter-Efficient-Fine-tuning are comparing it to the repositories listed below
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆44 · Updated 6 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆42 · Updated 5 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · Updated 7 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆77 · Updated 5 months ago
- ☆50 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆56 · Updated last month
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆23 · Updated 3 months ago
- Official Pytorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆97 · Updated 9 months ago
- ☆48 · Updated 4 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆101 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated 9 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆30 · Updated 5 months ago
- This pytorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆44 · Updated 2 years ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆74 · Updated last year
- ☆66 · Updated 3 years ago
- Awesome-Low-Rank-Adaptation ☆93 · Updated 6 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- ☆40 · Updated 10 months ago
- ☆18 · Updated 4 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆36 · Updated 9 months ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆17 · Updated 2 months ago
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆60 · Updated last year
- Source code of EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters" ☆18 · Updated last year
- A curated list of Model Merging methods. ☆91 · Updated 7 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆38 · Updated 3 weeks ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆34 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- This repository contains the implementation of the paper "MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models". ☆16 · Updated 5 months ago