1989Ryan / llm-mcts
[NeurIPS 2023] We use large language models as a commonsense world model and heuristic policy within Monte-Carlo Tree Search, enabling better-reasoned decision-making for daily task planning problems.
☆295 · Nov 16, 2024 · Updated last year
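The description above outlines the core idea: inside an MCTS loop, an LLM serves both as a heuristic policy (proposing actions with prior probabilities) and as a commonsense world model (predicting the outcome of each action). The sketch below is a minimal illustration of that loop, not the repository's actual code; `llm_policy` and `llm_world_model` are hypothetical stand-ins for LLM calls.

```python
# Minimal sketch of MCTS with an LLM as heuristic policy and world model.
# This is NOT the llm-mcts repo's API; the two llm_* stubs are placeholders.
import math

def llm_policy(state):
    """Hypothetical LLM call returning {action: prior_probability}."""
    return {"pick(apple)": 0.6, "open(fridge)": 0.3, "wait": 0.1}

def llm_world_model(state, action):
    """Hypothetical LLM call predicting (next_state, reward) for an action."""
    return state + [action], float(action == "pick(apple)")

class Node:
    def __init__(self, state, prior=1.0):
        self.state, self.prior = state, prior
        self.children, self.visits, self.value = {}, 0, 0.0

def select(node, c_puct=1.0):
    """PUCT selection: trade off mean value against prior-weighted exploration."""
    total = sum(ch.visits for ch in node.children.values()) + 1
    return max(
        node.children.items(),
        key=lambda kv: kv[1].value / (kv[1].visits + 1e-8)
        + c_puct * kv[1].prior * math.sqrt(total) / (1 + kv[1].visits),
    )

def mcts(root_state, n_simulations=50):
    root = Node(root_state)
    for _ in range(n_simulations):
        node, path = root, [root]
        # Selection: descend until a leaf is reached.
        while node.children:
            _, node = select(node)
            path.append(node)
        # Expansion: LLM policy proposes actions with priors;
        # LLM world model predicts the resulting states.
        reward = 0.0
        for action, prior in llm_policy(node.state).items():
            next_state, r = llm_world_model(node.state, action)
            node.children[action] = Node(next_state, prior)
            reward = max(reward, r)  # crude leaf evaluation for the sketch
        # Backpropagation: update visit counts and value sums along the path.
        for n in path:
            n.visits += 1
            n.value += reward
    # Act with the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts(["start"]))
```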
Alternatives and similar repositories for llm-mcts
Users interested in llm-mcts are comparing it to the repositories listed below.
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) · ☆690 · Jan 20, 2025 · Updated last year
- (no description) · ☆130 · Jun 18, 2024 · Updated last year
- Source code for Self-Evaluation Guided MCTS for online DPO · ☆329 · Jan 29, 2026 · Updated 2 weeks ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training · ☆285 · May 26, 2024 · Updated last year
- Reasoning with Language Model is Planning with World Model · ☆185 · Aug 25, 2023 · Updated 2 years ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning, Acting and Planning in Language Models" · ☆817 · Jul 30, 2024 · Updated last year
- Official code for "Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents" · ☆278 · May 16, 2022 · Updated 3 years ago
- ICLR 2021: "Monte-Carlo Planning and Learning with Language Action Value Estimates" · ☆33 · Nov 30, 2023 · Updated 2 years ago