Essential-AI / reflection
☆42 · Updated 3 months ago
Alternatives and similar repositories for reflection
Users interested in reflection are comparing it to the repositories listed below.
- [NeurIPS 2024] Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆86 · Updated 9 months ago
- [ACL 2025] ScaleQuest: a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs ☆63 · Updated 8 months ago
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … ☆84 · Updated 6 months ago
- Resources for the Enigmata Project ☆52 · Updated last month
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆67 · Updated 2 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆137 · Updated last week
- [COLM 2024] Official GitHub repo for the paper "Compression Represents Intelligence Linearly" ☆138 · Updated 9 months ago
- ☆52 · Updated 5 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆102 · Updated 2 months ago
- [ICML 2025] RL Scaling and Test-Time Scaling ☆108 · Updated 5 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆119 · Updated 2 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆69 · Updated 3 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 7 months ago
- The official repository of the Omni-MATH benchmark ☆85 · Updated 6 months ago
- [ICLR 2024 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated last month
- Implementation of the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆74 · Updated last week
- Code and data for the paper JiuZhang3.0 ☆47 · Updated last year
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆113 · Updated 3 weeks ago
- [EMNLP 2024] Code and models for the paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated 9 months ago
- Code implementation of synthetic continued pretraining ☆117 · Updated 6 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago
- Repo for the paper "Free Process Rewards without Process Labels" ☆154 · Updated 4 months ago
- [ACL 2025] Official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" ☆88 · Updated 3 months ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆71 · Updated 7 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆63 · Updated last month
- Exploration of automated dataset selection approaches at large scales ☆47 · Updated 4 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆149 · Updated last month
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆178 · Updated last year
- A comprehensive collection of work on learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆50 · Updated last month
- Code for "Reasoning to Learn from Latent Thoughts" ☆112 · Updated 3 months ago