EIT-NLP / AccuracyParadox-RLHFLinks
[EMNLP 2024 Main] Official implementation of the paper "The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models". (by Yanjun Chen)
☆13 · Updated last year
Alternatives and similar repositories for AccuracyParadox-RLHF
Users interested in AccuracyParadox-RLHF are comparing it to the repositories listed below.
- ☆23 · Updated last year
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆27 · Updated last year
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" ☆11 · Updated last year
- Repository for Skill Set Optimization ☆14 · Updated last year
- Introducing Filtered Direct Preference Optimization (fDPO) that enhances language model alignment with human preferences by discarding lo… ☆16 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence ☆46 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆52 · Updated 7 months ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆24 · Updated 2 months ago
- RewardAnything: Generalizable Principle-Following Reward Models ☆45 · Updated 7 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆46 · Updated 8 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆27 · Updated 6 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- PreAct: Prediction Enhances Agent's Planning Ability (COLING 2025) ☆30 · Updated last year
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- ☆15 · Updated last year
- Evaluate the Quality of Critique ☆36 · Updated last year
- Echos is a headless, API-driven DAW engine. It’s the backend for building AI tools that automate the entire music production lifecycle. ☆53 · Updated 2 months ago
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆48 · Updated 6 months ago
- Official Repo for SvS: A Self-play with Variational Problem Synthesis strategy for RLVR training ☆50 · Updated 3 weeks ago
- ☆20 · Updated 10 months ago
- PyTorch implementation of experiments in the paper Aligning Language Models with Human Preferences via a Bayesian Approach ☆32 · Updated 2 years ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆65 · Updated 11 months ago
- ☆69 · Updated 7 months ago
- Official repository for ALT (ALignment with Textual feedback). ☆10 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- Source code of "Reinforcement Learning with Token-level Feedback for Controllable Text Generation" (NAACL 2024) ☆17 · Updated last year
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year