huggingface / Math-Verify
☆773 Updated last month
Alternatives and similar repositories for Math-Verify
Users that are interested in Math-Verify are comparing it to the libraries listed below
- A series of technical reports on Slow Thinking with LLMs ☆695 Updated last week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆713 Updated 3 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆639 Updated 5 months ago
- Large Reasoning Models ☆804 Updated 6 months ago
- RewardBench: the first evaluation tool for reward models ☆604 Updated last week
- ☆938 Updated 4 months ago
- ☆569 Updated 2 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆226 Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆897 Updated 4 months ago
- ☆297 Updated 3 weeks ago
- ☆331 Updated 2 weeks ago
- ☆540 Updated 5 months ago
- LIMO: Less is More for Reasoning ☆960 Updated 2 months ago
- Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models ☆451 Updated this week
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆410 Updated last week
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆455 Updated 8 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆556 Updated 6 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆379 Updated last week
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆236 Updated last month
- ReCall: Learning to Reason with Tool Call for LLMs via Reinforcement Learning ☆936 Updated last month
- R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning ☆561 Updated 3 weeks ago
- Agent-R1: Training Powerful LLM Agents with End-to-End Reinforcement Learning ☆561 Updated 3 weeks ago
- Understanding R1-Zero-Like Training: A Critical Perspective ☆988 Updated 3 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆731 Updated 8 months ago
- ☆286 Updated 10 months ago
- A project to improve the skills of large language models ☆423 Updated this week
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆369 Updated 5 months ago
- Recipes to train reward models for RLHF ☆1,380 Updated last month
- An open-source RL system from ByteDance Seed and Tsinghua AIR ☆1,349 Updated last month
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆857 Updated last week