GCYZSL / MoLA
☆134 · Updated 9 months ago
Alternatives and similar repositories for MoLA
Users interested in MoLA are comparing it to the libraries listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆162 · Updated 9 months ago
- ☆174 · Updated 10 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆160 · Updated 8 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆119 · Updated 6 months ago
- ☆194 · Updated 6 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆97 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆138 · Updated 2 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆331 · Updated last year
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆34 · Updated 4 months ago
- ☆101 · Updated 10 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆162 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆199 · Updated 5 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆208 · Updated 2 weeks ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆37 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆46 · Updated last month
- This repository collects awesome surveys, resources, and papers on Lifelong Learning for Large Language Models. (Updated regularly) ☆47 · Updated 3 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆19 · Updated 2 months ago
- ☆97 · Updated 2 months ago
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆40 · Updated this week
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆141 · Updated last year
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆141 · Updated 3 months ago
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆116 · Updated last month
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆89 · Updated 3 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆133 · Updated 2 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆67 · Updated last year
- ☆81 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆83 · Updated 5 months ago
- ☆29 · Updated 3 months ago
- ☆33 · Updated 5 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆76 · Updated last year