GCYZSL / MoLA
☆ 171 · Updated last year
Alternatives and similar repositories for MoLA
Users interested in MoLA are comparing it to the libraries listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆ 186 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆ 200 · Updated last year
- ☆ 192 · Updated last year
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆ 197 · Updated 3 weeks ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆ 391 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆ 131 · Updated 9 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆ 138 · Updated last year
- ☆ 216 · Updated last month
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆ 235 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆ 88 · Updated 10 months ago
- ☆ 175 · Updated 2 weeks ago
- ☆ 124 · Updated last year
- ☆ 136 · Updated 9 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆ 38 · Updated 11 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆ 94 · Updated 2 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆ 163 · Updated 6 months ago
- The official GitHub repository for the survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …" ☆ 157 · Updated 7 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆ 33 · Updated 10 months ago
- ☆ 61 · Updated last year
- ☆ 294 · Updated 5 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆ 91 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆ 363 · Updated 2 years ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆ 85 · Updated last year
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆ 138 · Updated 8 months ago
- A regularly updated paper list for LLMs reasoning in latent space. ☆ 242 · Updated this week
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆ 44 · Updated 5 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆ 73 · Updated 5 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆ 150 · Updated 5 months ago
- An implementation of the paper "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" from Google De… ☆ 44 · Updated 5 months ago
- Code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆ 118 · Updated 6 months ago