holarissun / Prompt-OIRL
Code for the paper "Query-Dependent Prompt Evaluation and Optimization with Offline Inverse Reinforcement Learning"
☆41 · Updated last year
Alternatives and similar repositories for Prompt-OIRL
Users interested in Prompt-OIRL are comparing it to the repositories listed below.
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆147 · Updated 9 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆107 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆146 · Updated 5 months ago
- This is my attempt to create a Self-Correcting-LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated last month
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆85 · Updated 11 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO) ☆56 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆183 · Updated 6 months ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆185 · Updated 3 months ago
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆66 · Updated 4 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF) ☆177 · Updated last week
- ☆48 · Updated 9 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆189 · Updated last year
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆38 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆127 · Updated 4 months ago
- Natural Language Reinforcement Learning ☆92 · Updated last week
- ☆32 · Updated 9 months ago
- Reasoning with Language Model is Planning with World Model ☆168 · Updated last year
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆112 · Updated 4 months ago
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆48 · Updated 2 years ago
- ☆114 · Updated 6 months ago
- Reinforced Multi-LLM Agents training ☆35 · Updated 2 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆61 · Updated 9 months ago
- ☆152 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆63 · Updated 9 months ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M…" ☆26 · Updated last year
- ICML 2024 - Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆58 · Updated last year
- AI Alignment: A Comprehensive Survey ☆135 · Updated last year