THUDM / MoELoRA_Riemannian
Source code of the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025).
☆24 · Updated 2 months ago
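For context, the paper's method combines multiple low-rank (LoRA-style) experts under a router on top of a frozen base layer. Below is a minimal, illustrative sketch of such a mixture-of-LoRA-experts layer in PyTorch; the class name, expert count, rank, and softmax router are assumptions for illustration, not code from this repository (which, per its name, also involves Riemannian optimization not shown here).

```python
# Illustrative sketch only (not the repo's implementation): a mixture of low-rank
# experts applied as an additive update to a frozen base linear layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):  # hypothetical name for illustration
    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # base weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(num_experts, in_f, rank) * 0.01)  # per-expert down-projections
        self.B = nn.Parameter(torch.zeros(num_experts, rank, out_f))        # per-expert up-projections
        self.router = nn.Linear(in_f, num_experts)       # token-wise gating over experts
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = F.softmax(self.router(x), dim=-1)                              # (..., num_experts)
        delta = torch.einsum("...i,eir,ero->...eo", x, self.A, self.B)        # low-rank update per expert
        mixed = torch.einsum("...e,...eo->...o", gate, delta)                 # router-weighted mixture
        return self.base(x) + self.scaling * mixed

# Example: wrap a 768-dim projection and run a batch through it.
layer = MoELoRALinear(nn.Linear(768, 768))
y = layer(torch.randn(2, 16, 768))
```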
Alternatives and similar repositories for MoELoRA_Riemannian
Users interested in MoELoRA_Riemannian are comparing it to the libraries listed below.
- ☆74 · Updated 11 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 \ Visual R1) 🍓 ☆34 · Updated 2 months ago
- [ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality ☆30 · Updated last month
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ☆138 · Updated 10 months ago
- ☆77 · Updated 4 months ago
- Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? ☆23 · Updated 2 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆50 · Updated 7 months ago
- Code for Efficient Test-Time Scaling via Self-Calibration ☆14 · Updated 3 months ago
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆120 · Updated last week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆69 · Updated 3 months ago
- ☆105 · Updated 2 months ago
- This repository will continuously update the latest papers, technical reports, benchmarks about multimodal reasoning! ☆41 · Updated 2 months ago
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆82 · Updated 5 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆62 · Updated 2 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆86 · Updated 6 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆64 · Updated 2 weeks ago
- Github repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆51 · Updated last week
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆112 · Updated last month
- ICLR 2025 ☆26 · Updated 2 weeks ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆22 · Updated 3 months ago
- ☆42 · Updated 3 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 6 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆97 · Updated 3 months ago
- A comprehensive collection of process reward models. ☆85 · Updated 2 weeks ago
- ☆100 · Updated last month
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆36 · Updated last month
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆88 · Updated last year