osehmathias / lisa
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
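The idea behind LISA, in brief: instead of updating every transformer layer (full fine-tuning) or attaching low-rank adapters (LoRA), LISA freezes most of the model and, at each sampling period, unfreezes a small random subset of layers to train, so gradients and optimizer state are only kept for the active layers. Below is a minimal PyTorch sketch of that sampling step, assuming the decoder blocks are exposed as an `nn.ModuleList`; the helper name `lisa_sample_layers` and the `n_active` parameter are illustrative, not this repo's API.

```python
import random

from torch import nn


def lisa_sample_layers(blocks: nn.ModuleList, n_active: int = 2) -> None:
    """Freeze every decoder block, then unfreeze a random subset.

    Illustrative sketch of LISA-style layerwise sampling: only the sampled
    blocks accumulate gradients (and optimizer state) until the next
    sampling period. Embeddings and the LM head, which the paper keeps
    permanently trainable, are assumed to live outside `blocks`.
    """
    for block in blocks:
        block.requires_grad_(False)       # freeze all intermediate layers
    for idx in random.sample(range(len(blocks)), n_active):
        blocks[idx].requires_grad_(True)  # activate the sampled layers
```

In a training loop this would be called every K optimizer steps, rebuilding the optimizer (or filtering its parameter groups) so it only tracks the currently trainable parameters; K, `n_active`, and the sampling distribution over layers are hyperparameters of the method.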
Related projects
Alternatives and complementary repositories for lisa
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy"
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623
- [ICLR 2024] AdaMerging: Adaptive Model Merging for Multi-Task Learning
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models
- Official PyTorch implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning"
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?"
- A Closer Look into Mixture-of-Experts in Large Language Models
- A curated list of Model Merging methods (a minimal weight-averaging baseline is sketched after this list)
- [ATTRIB @ NeurIPS 2024 Oral] When Attention Sink Emerges in Language Models: An Empirical View
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal…
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long)
- [ICML 2024 Oral] The official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention"
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning"
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation"
- AnchorAttention: Improved attention for long-context LLM training
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
- [ICML 2024] Representation Surgery for Multi-Task Model Merging
- [ICLR 2024] RAIN: Your Language Models Can Align Themselves without Finetuning
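Several of the entries above (EMR-Merging, AdaMerging, Twin-Merging, Representation Surgery) refine the same baseline operation: combining the weights of models fine-tuned from a common base checkpoint. As a point of reference, here is the simplest form of that operation, uniform parameter averaging; `average_merge` is an illustrative name, not the API of any listed repo.

```python
import torch


def average_merge(state_dicts: list[dict]) -> dict:
    """Key-wise uniform average of model state_dicts.

    Assumes all models were fine-tuned from the same base checkpoint, so
    their state_dicts share keys and shapes. The papers listed above
    replace this uniform average with learned or task-adaptive merging
    coefficients.
    """
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]
        ).mean(dim=0)
    return merged
```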