mengdi-li / awesome-RLAIF
A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF).
☆170 · Updated 4 months ago
Alternatives and similar repositories for awesome-RLAIF
Users interested in awesome-RLAIF are comparing it to the repositories listed below.
- ☆114 · Updated 4 months ago
- A paper collection on the continuing effort that started from World Models. ☆172 · Updated 10 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆177 · Updated 4 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆184 · Updated last year
- ☆97 · Updated 11 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆141 · Updated 7 months ago
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆108 · Updated 2 months ago
- ☆141 · Updated 6 months ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆177 · Updated last month
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆102 · Updated last year
- ☆93 · Updated 11 months ago
- ☆276 · Updated 4 months ago
- A brief and partial summary of RLHF algorithms. ☆128 · Updated 3 months ago
- Reasoning with Language Model is Planning with World Model ☆167 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆140 · Updated 3 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆314 · Updated 9 months ago
- RewardBench: the first evaluation tool for reward models. ☆590 · Updated this week
- ☆129 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- ☆173 · Updated 2 months ago
- Direct Preference Optimization from scratch in PyTorch (a minimal loss sketch follows this list) ☆94 · Updated last month
- ☆102 · Updated last month
- Code for the paper "Autonomous Evaluation and Refinement of Digital Agents" [COLM 2024] ☆136 · Updated 6 months ago
- Augmented LLM with self-reflection ☆124 · Updated last year
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆149 · Updated 2 months ago
- An extensible benchmark for evaluating large language models on planning ☆375 · Updated last month
- RLHF implementation details of OAI's 2019 codebase ☆187 · Updated last year
- AI Alignment: A Comprehensive Survey ☆134 · Updated last year
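
Several of the entries above center on Direct Preference Optimization (DPO), including the from-scratch PyTorch implementation and the TDPO reference implementation. For orientation, here is a minimal sketch of the standard sequence-level DPO loss in PyTorch; the function name and the assumption that inputs arrive as per-sequence summed log-probabilities are illustrative, not taken from any of the listed repositories.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Minimal DPO loss sketch; all inputs are summed log-probs of shape (batch,)."""
    # Log-ratios of the trainable policy against the frozen reference model.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # Reward margin between chosen and rejected responses; beta scales the
    # implicit KL penalty that keeps the policy close to the reference model.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

Token-level variants such as TDPO decompose these sequence-level log-ratios into per-token terms; see the TDPO reference implementation listed above for the full method.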