EIT-NLP / AccuracyParadox-RLHF
[EMNLP 2024 Main] Official implementation of the paper "The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models". (by Yanjun Chen)
☆12 · Updated last week
Related projects
Alternatives and complementary repositories for AccuracyParadox-RLHF
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆36 · Updated 8 months ago
- [EMNLP 2024 Main] Official implementation of the paper "Unveiling In-Context Learning: A Coordinate System to Understand Its Working Mech…" ☆11 · Updated last month
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆24 · Updated 8 months ago
- Self-Supervised Alignment with Mutual Information ☆14 · Updated 5 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆14 · Updated 3 weeks ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆30 · Updated 3 months ago
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models ☆42 · Updated 11 months ago
- Unofficial implementation of Chain of Hindsight (https://arxiv.org/abs/2302.02676) using PyTorch and Hugging Face Trainers ☆11 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆47 · Updated 4 months ago
- Official implementation for the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" ☆17 · Updated last month
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆33 · Updated 10 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated 7 months ago
- Code and data for "Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue" (ACL 2024) ☆21 · Updated 3 months ago
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆14 · Updated 4 months ago
- PyTorch implementation of experiments in the paper "Aligning Language Models with Human Preferences via a Bayesian Approach" ☆30 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆49 · Updated 5 months ago
- Code and data for "Target-constrained Bidirectional Planning for Generation of Target-oriented Proactive Dialogue" (ACM TOIS) ☆9 · Updated last month
- Evaluate the Quality of Critique ☆35 · Updated 5 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆24 · Updated 7 months ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆20 · Updated 8 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆39 · Updated 3 months ago
- The code and data for the paper "JiuZhang3.0" ☆35 · Updated 5 months ago
- Repository for Skill Set Optimization ☆12 · Updated 3 months ago
- Resources for our ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" ☆35 · Updated last year