zhentingqi / rStar
☆868 · Updated this week
Alternatives and similar repositories for rStar:
Users interested in rStar are comparing it to the repositories listed below.
- Large Reasoning Models ☆801 · Updated last month
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆548 · Updated last week
- Scalable RL solution for advanced reasoning of language models ☆981 · Updated this week
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,497 · Updated last week
- Official repository for ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ☆587 · Updated last week
- Recipes to scale inference-time compute of open models ☆975 · Updated last week
- A series of technical reports on Slow Thinking with LLMs ☆359 · Updated this week
- O1 Replication Journey ☆1,910 · Updated 2 weeks ago
- ☆1,150 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆536 · Updated last month
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆803 · Updated 2 months ago
- An O1 Replication for Coding ☆311 · Updated last month
- Code for Quiet-STaR ☆706 · Updated 5 months ago
- veRL: Volcano Engine Reinforcement Learning for LLM ☆1,135 · Updated this week
- ☆301 · Updated last week
- ☆997 · Updated last month
- ☆450 · Updated 3 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆694 · Updated 4 months ago
- An Open Large Reasoning Model for Real-World Solutions ☆1,411 · Updated 2 months ago
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ☆928 · Updated 3 weeks ago
- ☆251 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models ☆493 · Updated this week
- Search-o1: Agentic Search-Enhanced Large Reasoning Models ☆515 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆450 · Updated 10 months ago
- Scalable toolkit for efficient model alignment ☆693 · Updated this week
- Source code for Self-Evaluation Guided MCTS for online DPO ☆283 · Updated 5 months ago
- ☆489 · Updated 2 months ago
- Recipes to train reward models for RLHF ☆1,119 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention, which r… ☆890 · Updated last week