ambisinister / lossfreebalance
A toy reproduction of the Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts
☆26Updated last year
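The strategy this toy repo reproduces avoids an auxiliary balancing loss entirely: each expert gets a bias term that is added to the routing scores only for top-k selection (the gate weights still use the raw scores), and after each batch the bias is nudged down for overloaded experts and up for underloaded ones. A minimal NumPy sketch of that idea, with illustrative names and a made-up skewed router, not the repo's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_experts, top_k, gamma = 2048, 8, 2, 0.01
skew = np.linspace(0.0, 0.4, n_experts)  # toy router that systematically favors later experts
bias = np.zeros(n_experts)               # per-expert balancing bias (not learned by gradients)

def route(scores, bias, k):
    """Select top-k experts by biased scores; gate weights would still use raw `scores`."""
    return np.argsort(scores + bias, axis=-1)[:, -k:]

def expert_load(assignments, n_experts):
    """Count how many token slots each expert received this batch."""
    return np.bincount(assignments.ravel(), minlength=n_experts)

imbalance = []
for step in range(200):
    scores = rng.random((n_tokens, n_experts)) + skew
    counts = expert_load(route(scores, bias, top_k), n_experts)
    imbalance.append(counts.max() / counts.mean())
    # bias update: a fixed-size nudge per expert, no auxiliary loss, no gradient
    bias -= gamma * np.sign(counts - counts.mean())

print(f"max/mean load: {imbalance[0]:.2f} -> {imbalance[-1]:.2f}")
```

Because only the selection is biased while the gate values stay untouched, load balancing does not interfere with the language-modeling gradient, which is the point of the auxiliary-loss-free formulation.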
Alternatives and similar repositories for lossfreebalance
Users interested in lossfreebalance are comparing it to the repositories listed below:
- ☆124 · Updated last year
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆162 · Updated 5 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆85 · Updated last year
- ☆151 · Updated last year
- Official implementation of "Unifying Multimodal Large Language Model Capabilities and Modalities via Model Merging" ☆39 · Updated last month
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆112 · Updated 5 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆86 · Updated 2 months ago
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆138 · Updated 8 months ago
- Code for "Learning the Unlearned: Mitigating Feature Suppression in Contrastive Learning" ☆18 · Updated last year
- ☆28 · Updated last year
- dParallel: Learnable Parallel Decoding for dLLMs ☆49 · Updated 2 months ago
- ☆65 · Updated 6 months ago
- Recent Advances on MLLM's Reasoning Ability ☆26 · Updated 8 months ago
- Data distillation benchmark ☆71 · Updated 6 months ago
- One-shot Entropy Minimization ☆187 · Updated 6 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di…" ☆61 · Updated last year
- (ICLR 2025 Spotlight) DEEM: Official implementation of "Diffusion models serve as the eyes of large language models for image perception" ☆44 · Updated 5 months ago
- [MM2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆59 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆63 · Updated 2 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆235 · Updated last year
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆53 · Updated 11 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆149 · Updated 5 months ago
- [NeurIPS 2025] Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO ☆70 · Updated last month
- ☆41 · Updated last year
- A Collection of Papers on Diffusion Language Models ☆149 · Updated 2 months ago
- [TMLR 25] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆144 · Updated 2 months ago
- LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently (ICML 2025 Oral) ☆27 · Updated last month