zjunlp / KnowRL
KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality
☆33 · Updated 3 weeks ago
Alternatives and similar repositories for KnowRL
Users interested in KnowRL are comparing it to the repositories listed below.
- Source code of “Reinforcement Learning with Token-level Feedback for Controllable Text Generation” (NAACL 2024) ☆14 · Updated 10 months ago
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning" ☆19 · Updated 8 months ago
- ☆17 · Updated 2 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆25 · Updated 6 months ago
- ☆69 · Updated last month
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆38 · Updated last month
- ☆30 · Updated last year
- [EMNLP 2025] WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning ☆49 · Updated last week
- [ICML 2025] Official code of "AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization" ☆22 · Updated last year
- ☆31 · Updated 5 months ago
- ☆50 · Updated last year
- ☆50 · Updated 8 months ago
- ☆23 · Updated last year
- ☆45 · Updated 3 weeks ago
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆26 · Updated last year
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… ☆70 · Updated last week
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆66 · Updated 6 months ago
- The official repo for "Towards Uncertainty-Aware Language Agent" ☆29 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated 6 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 11 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆40 · Updated 5 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆27 · Updated last year
- A Sober Look at Language Model Reasoning ☆86 · Updated 3 weeks ago
- [ACL 2024] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆91 · Updated last year
- Official repository for ALT (ALignment with Textual feedback) ☆10 · Updated last year
- Reinforced Multi-LLM Agents training ☆56 · Updated 4 months ago
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆29 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆112 · Updated last year