dinobby / Symbolic-MoE
The code implementation of Symbolic-MoE
☆45 · Updated 3 months ago
Alternatives and similar repositories for Symbolic-MoE
Users interested in Symbolic-MoE are comparing it to the libraries listed below.
- JudgeLRM: Large Reasoning Models as a Judge ☆40 · Updated 3 weeks ago
- ☆50 · Updated 10 months ago
- ☆54 · Updated 2 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 4 months ago
- Official implementation of the NeurIPS 2025 paper "Soft Thinking: Unlocking the Reasoning Potential of LLMs in Continuous Concept Space" ☆291 · Updated 2 weeks ago
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 · Updated 6 months ago
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆83 · Updated last year
- SIFT: Grounding LLM Reasoning in Contexts via Stickers ☆57 · Updated 9 months ago
- SSRL: Self-Search Reinforcement Learning ☆195 · Updated 4 months ago
- [NeurIPS 2025] Thinkless: LLM Learns When to Think ☆246 · Updated 3 months ago
- ☆35 · Updated 7 months ago
- [AAAI26] LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs ☆49 · Updated 3 weeks ago
- ☆68 · Updated 3 months ago
- ☆140 · Updated 3 months ago
- Official code repository for Sketch-of-Thought (SoT) ☆129 · Updated 7 months ago
- Diffusion Language Models For Code Infilling Beyond Fixed-size Canvas ☆95 · Updated 3 months ago
- Process Reward Models That Think ☆67 · Updated last month
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆97 · Updated 10 months ago
- [NeurIPS'25] Router-R1: Teaching LLMs Multi-Round Routing and Aggregation via Reinforcement Learning ☆101 · Updated 3 months ago
- Code for Heima ☆58 · Updated 8 months ago
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models ☆49 · Updated last month
- ☆71 · Updated 2 months ago
- ☆363 · Updated last month
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆86 · Updated 9 months ago
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆36 · Updated 11 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆55 · Updated 3 months ago
- Official repo of the paper LM2 ☆46 · Updated 10 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆116 · Updated 6 months ago
- ☆226 · Updated 10 months ago
- Official implementation of the paper "Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models" ☆55 · Updated last week