zitian-gao / one-shot-em
One-shot Entropy Minimization
☆186 · Updated 4 months ago
Alternatives and similar repositories for one-shot-em
Users interested in one-shot-em are comparing it to the repositories listed below.
- ☆174 · Updated 5 months ago
- Repo for the paper https://arxiv.org/abs/2504.13837 ☆200 · Updated 3 months ago
- ☆334 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 8 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆360 · Updated 3 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆155 · Updated last week
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆137 · Updated 3 months ago
- ☆129 · Updated 7 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆258 · Updated 5 months ago
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆272 · Updated last week
- 📖 A repository for organizing papers, code, and other resources related to latent reasoning ☆247 · Updated 3 weeks ago
- ☆108 · Updated 4 months ago
- ☆275 · Updated 3 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆308 · Updated last month
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆157 · Updated 4 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆79 · Updated 4 months ago
- ☆46 · Updated 6 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models ☆147 · Updated 2 weeks ago
- Repository for "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" ☆73 · Updated last week
- Extrapolating RLVR to General Domains without Verifiers ☆174 · Updated 2 months ago
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆348 · Updated 3 weeks ago
- ☆211 · Updated 8 months ago
- State-of-the-art parameter-efficient MoE fine-tuning method ☆192 · Updated last year
- Paper list on inference/test-time scaling and computing ☆317 · Updated last month
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆91 · Updated 6 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆131 · Updated 3 months ago
- ☆161 · Updated last year
- ☆133 · Updated last month
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from the Perspective of Mixture-of-Experts with Post-Training ☆88 · Updated 10 months ago
- ☆104 · Updated last month