StarDewXXX / AdaR1
The official repository of the paper "AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization"
☆16 · Updated 2 months ago
Alternatives and similar repositories for AdaR1
Users interested in AdaR1 are comparing it to the repositories listed below.
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025.☆22Updated 4 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs.☆63Updated 8 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models https://arxiv.org/pdf/2411.02433 ☆27 · Updated 7 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆85 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆102 · Updated 2 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆74 · Updated 3 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆38 · Updated 2 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆137 · Updated this week
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆49 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆75 · Updated 3 weeks ago
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 4 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆108 · Updated 5 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆91 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆76 · Updated 5 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆71 · Updated 7 months ago
- The official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆53 · Updated 11 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆22 · Updated 6 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆113 · Updated this week
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆99 · Updated last month
- The official repository of the Omni-MATH benchmark. ☆85 · Updated 6 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆86 · Updated 9 months ago