EIT-NLP / AccuracyParadox-RLHF
[EMNLP 2024 Main] Official implementation of the paper "The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models". (by Yanjun Chen)
☆13 · Updated last year
Alternatives and similar repositories for AccuracyParadox-RLHF
Users interested in AccuracyParadox-RLHF are comparing it to the repositories listed below.
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" ☆11 · Updated 11 months ago
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆27 · Updated last year
- Repository for Skill Set Optimization ☆14 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated last year
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- ☆23 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 6 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆46 · Updated 8 months ago
- Source code of “Reinforcement Learning with Token-level Feedback for Controllable Text Generation” (NAACL 2024) ☆17 · Updated last year
- Introducing Filtered Direct Preference Optimization (fDPO) that enhances language model alignment with human preferences by discarding lo… ☆16 · Updated last year
- ☆15 · Updated last year
- PreAct: Prediction Enhances Agent's Planning Ability (COLING 2025) ☆30 · Updated last year
- Code for ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆48 · Updated 5 months ago
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- QRHead: Query-Focused Retrieval Heads Improve Long-Context Reasoning and Re-ranking ☆32 · Updated last week
- ☆35 · Updated last year
- A framework for evolving and testing question-answering datasets with various models. ☆21 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated last year
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆24 · Updated 2 months ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- PyTorch implementation of experiments in the paper Aligning Language Models with Human Preferences via a Bayesian Approach ☆32 · Updated 2 years ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o… ☆27 · Updated 5 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- Code for "[COLM'25] RepoST: Scalable Repository-Level Coding Environment Construction with Sandbox Testing" ☆22 · Updated 9 months ago
- Sotopia-RL: Reward Design for Social Intelligence ☆46 · Updated 4 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Updated 2 years ago