EIT-NLP / AccuracyParadox-RLHF
[EMNLP 2024 Main] Official implementation of the paper "The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models". (by Yanjun Chen)
☆13 · Updated 7 months ago
Alternatives and similar repositories for AccuracyParadox-RLHF
Users interested in AccuracyParadox-RLHF are comparing it to the libraries listed below.
- A framework for evolving and testing question-answering datasets with various models. ☆16 · Updated last year
- [EMNLP 2024 Main] Official implementation of the paper "Unveiling In-Context Learning: A Coordinate System to Understand Its Working Mech… ☆17 · Updated 8 months ago
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning". (By Xingh… ☆15 · Updated 4 months ago
- [ACL 2025 (Findings)] DEMO: Reframing Dialogue Interaction with Fine-grained Element Modeling ☆14 · Updated 6 months ago
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆24 · Updated 7 months ago
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆50 · Updated 3 weeks ago
- Official implementation for the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" ☆27 · Updated 2 months ago
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" ☆10 · Updated 5 months ago
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆29 · Updated 3 weeks ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 6 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆26 · Updated 6 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 2 months ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆23 · Updated 10 months ago
- ☆46 · Updated 8 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- ☆40 · Updated 2 weeks ago
- ☆19 · Updated 3 weeks ago
- ☆22 · Updated 11 months ago
- ☆59 · Updated 9 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆27 · Updated last month
- A unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs). ☆19 · Updated 5 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- ☆14 · Updated last year
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models" (ICLR 2025) ☆22 · Updated 4 months ago
- ☆19 · Updated last month
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆61 · Updated 6 months ago
- ☆43 · Updated 3 months ago
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆21 · Updated 11 months ago
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- ☆18 · Updated 3 months ago