shawnricecake / Heima
Code for Heima
☆58 · Updated 7 months ago
Alternatives and similar repositories for Heima
Users interested in Heima are comparing it to the libraries listed below
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆81 · Updated last month
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆97 · Updated 2 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 4 months ago
- Official Repository of LatentSeek ☆68 · Updated 5 months ago
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆48 · Updated last year
- [ACL 2025] SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs, and preprint SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆61 · Updated 5 months ago
- ☆45 · Updated last month
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆83 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆46 · Updated last year
- ☆46 · Updated 7 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆123 · Updated 7 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆51 · Updated last month
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆166 · Updated 5 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- ☆30 · Updated 2 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆86 · Updated 9 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated 11 months ago
- ☆20 · Updated 11 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆80 · Updated 5 months ago
- ☆136 · Updated 2 months ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 · Updated 4 months ago
- ☆22 · Updated 6 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated 3 weeks ago
- V1: Toward Multimodal Reasoning by Designing Auxiliary Task ☆36 · Updated 7 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆47 · Updated 4 months ago
- Official repository for the paper "DeepCritic: Deliberate Critique with Large Language Models" ☆41 · Updated 4 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆88 · Updated 11 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆59 · Updated 6 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆150 · Updated 4 months ago