Ablustrund / LoRAMoE
LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
☆382 · Updated last year
Alternatives and similar repositories for LoRAMoE
Users interested in LoRAMoE are comparing it to the repositories listed below.
- ☆166 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆184 · Updated last year
- ☆190 · Updated last year
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs". ☆386 · Updated 9 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆360 · Updated 2 years ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning. ☆500 · Updated last year
- ☆213 · Updated last year
- ☆47 · Updated 9 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art long2short methods for large reasoning models. It contains… ☆256 · Updated 3 months ago
- ☆130 · Updated 5 months ago
- [NAACL'24] Self-data filtering of LLM instruction-tuning data using a novel perplexity-based difficulty score, without using any other mo… ☆400 · Updated 4 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs. ☆188 · Updated 4 months ago
- ☆282 · Updated 4 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification. ☆495 · Updated last week
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning. ☆229 · Updated 11 months ago
- ☆57 · Updated 11 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning". ☆165 · Updated last year
- State-of-the-art parameter-efficient MoE fine-tuning method. ☆195 · Updated last year
- ☆548 · Updated 10 months ago
- [TMLR 2025] Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models. ☆685 · Updated 3 weeks ago
- Paper list for in-context learning 🌷. ☆188 · Updated last year
- A series of technical reports on Slow Thinking with LLMs. ☆743 · Updated 3 months ago
- Related work and background techniques for OpenAI o1. ☆221 · Updated 10 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond. ☆313 · Updated 3 weeks ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆136 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight). ☆394 · Updated 4 months ago
- This repository collects awesome surveys, resources, and papers on Lifelong Learning for Large Language Models. (Updated regularly) ☆67 · Updated 5 months ago
- Trinity-RFT is a general-purpose, flexible, and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆398 · Updated this week
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models". ☆266 · Updated last year
- ☆163 · Updated last month
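Several of the repositories above (LoRAMoE, MOELoRA, HydraLoRA) combine LoRA adapters with a mixture-of-experts router. As a rough illustration of that family of methods — not the official LoRAMoE code — the sketch below shows a linear layer whose low-rank update is a soft-gated sum of LoRA experts; all shapes, names, and initializations here are illustrative assumptions.

```python
import numpy as np

# Hypothetical mixture-of-LoRA-experts layer (illustrative, not the LoRAMoE release).
rng = np.random.default_rng(0)
d_in, d_out, rank, n_experts = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in)) * 0.1           # frozen base weight
A = rng.standard_normal((n_experts, rank, d_in)) * 0.1  # per-expert LoRA "down" projections
B = np.zeros((n_experts, d_out, rank))                  # per-expert LoRA "up" projections (zero-init)
G = rng.standard_normal((n_experts, d_in)) * 0.1        # router weights

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    """x: (batch, d_in) -> (batch, d_out)."""
    gates = softmax(x @ G.T)                 # (batch, n_experts) routing weights
    base = x @ W.T                           # frozen pretrained path
    low = np.einsum("erd,bd->ber", A, x)     # (batch, n_experts, rank)
    up = np.einsum("eor,ber->beo", B, low)   # (batch, n_experts, d_out)
    delta = (gates[..., None] * up).sum(axis=1)  # gate-weighted sum of expert updates
    return base + delta

x = rng.standard_normal((3, d_in))
y = forward(x)
print(y.shape)  # (3, 8)
```

Because the "up" matrices are zero-initialized (standard LoRA practice), the layer starts out identical to the frozen base layer; training only the `A`, `B`, and `G` parameters then lets each expert specialize while the router balances them per token.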