rxlqn / awesome-llm-self-reflection
Augmented LLM with self-reflection
☆130 · Updated last year
Alternatives and similar repositories for awesome-llm-self-reflection
Users interested in awesome-llm-self-reflection are comparing it to the repositories listed below.
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆104 · Updated last month
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆113 · Updated 2 weeks ago
- ☆238 · Updated last year
- Code for Paper: Autonomous Evaluation and Refinement of Digital Agents [COLM 2024] ☆141 · Updated 9 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆147 · Updated 9 months ago
- Official Implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆165 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆340 · Updated last year
- ☆115 · Updated 7 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities ☆159 · Updated last year
- Reasoning with Language Model is Planning with World Model ☆169 · Updated 2 years ago
- A benchmark list for evaluation of large language models ☆137 · Updated last week
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆148 · Updated 8 months ago
- Code for ICLR 2024 paper "CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets" ☆58 · Updated last year
- [ICLR 2024] MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use ☆92 · Updated last year
- ☆103 · Updated 8 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆209 · Updated 2 years ago
- ☆126 · Updated 10 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in large language models ☆107 · Updated last month
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆81 · Updated last year
- Critique-out-Loud Reward Models ☆70 · Updated 10 months ago
- Implementation of the paper: "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- ☆183 · Updated 7 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆238 · Updated 2 weeks ago
- Self-Reflection in LLM Agents: Effects on Problem-Solving Performance ☆81 · Updated 9 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆129 · Updated last year
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆118 · Updated 4 months ago
- This repository contains an LLM benchmark for the social deduction game 'Resistance Avalon' ☆125 · Updated 2 months ago
- ☆123 · Updated last year