mengdi-li / awesome-RLAIF
A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF)
☆171 · Updated 5 months ago
Alternatives and similar repositories for awesome-RLAIF
Users interested in awesome-RLAIF are comparing it to the repositories listed below.
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆144 · Updated 7 months ago
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆108 · Updated 2 months ago
- ☆114 · Updated 5 months ago
- ☆190 · Updated 2 months ago
- Reasoning with Language Model is Planning with World Model ☆169 · Updated last year
- ☆276 · Updated 5 months ago
- ☆136 · Updated 6 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆318 · Updated 10 months ago
- Direct Preference Optimization from scratch in PyTorch ☆98 · Updated 2 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆177 · Updated 5 months ago
- Augmented LLM with self-reflection ☆126 · Updated last year
- ☆97 · Updated 11 months ago
- ☆130 · Updated 11 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- Paper collection on the continuing line of work starting from World Models. ☆173 · Updated 11 months ago
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated 2 weeks ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆326 · Updated last year
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆179 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- A brief and partial summary of RLHF algorithms. ☆129 · Updated 3 months ago
- [NeurIPS 2024] Agent Planning with World Knowledge Model ☆141 · Updated 6 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆131 · Updated 11 months ago
- ☆142 · Updated 7 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆185 · Updated last year
- ☆95 · Updated 11 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆206 · Updated 2 years ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated last year
- An extensible benchmark for evaluating large language models on planning ☆382 · Updated 2 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆141 · Updated 4 months ago
- Awesome LLM papers, news and projects about learning to reason with LLMs, OpenAI o1, reasoning techniques, chain-of-thought (CoT), Large … ☆27 · Updated 8 months ago