ezelikman / STaR
Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022)
☆220 · Updated 2 years ago
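For orientation, the bootstrapping procedure the paper describes can be summarized in pseudocode. The sketch below is an illustration of the method under stated assumptions, not this repository's API: `generate_rationale`, `answer_of`, and `finetune` are hypothetical caller-supplied helpers.

```python
# A minimal sketch of the STaR outer loop: sample rationales, keep those that
# reach the correct answer, rationalize failures with an answer hint, and
# fine-tune. Helper names are hypothetical, not taken from this repository.

def star_loop(base_model, dataset, generate_rationale, answer_of, finetune,
              n_iterations=5):
    """Bootstrap a reasoner from (question, answer) pairs.

    generate_rationale(model, question, hint=None) -> str   # samples a rationale
    answer_of(rationale) -> str                              # parses the final answer
    finetune(base_model, examples) -> model                  # supervised fine-tuning
    """
    model = base_model
    for _ in range(n_iterations):
        collected = []
        for question, answer in dataset:
            # Attempt: sample a rationale from the current model.
            rationale = generate_rationale(model, question)
            if answer_of(rationale) == answer:
                collected.append((question, rationale))
                continue
            # Rationalization: re-prompt with the correct answer as a hint and
            # keep the justification only if it actually reaches that answer.
            hinted = generate_rationale(model, question, hint=answer)
            if answer_of(hinted) == answer:
                collected.append((question, hinted))
        # Fine-tune on all kept rationales, restarting from the original base
        # checkpoint each outer iteration rather than from the latest model.
        model = finetune(base_model, collected)
    return model
```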
Alternatives and similar repositories for STaR
Users that are interested in STaR are comparing it to the libraries listed below
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆329 · Updated this week
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆285 · Updated last year
- ☆224 · Updated 10 months ago
- Reasoning with Language Model is Planning with World Model ☆185 · Updated 2 years ago
- ☆282 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆114Updated this week
- ☆341 · Updated 7 months ago
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆132 · Updated last year
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆202 · Updated 9 months ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆303 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆268 · Updated last year
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆269 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- RewardBench: the first evaluation tool for reward models. ☆683 · Updated 2 weeks ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- Critique-out-Loud Reward Models ☆73 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Repo of paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 10 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆389 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆115 · Updated last year
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆150 · Updated last year
- ☆117 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆304 · Updated 11 months ago
- ☆273 · Updated 2 years ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆361 · Updated 2 years ago
- ☆328 · Updated 8 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆326 · Updated last year
- An extensible benchmark for evaluating large language models on planning ☆445 · Updated 4 months ago