shenao-zhang / reward-augmented-preference
The official implementation of Preference Data Reward-Augmentation.
☆17 · Updated last month
Alternatives and similar repositories for reward-augmented-preference
Users interested in reward-augmented-preference are comparing it to the libraries listed below.
- Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models ☆35 · Updated 8 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆34 · Updated last month
- The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks" ☆53 · Updated 2 weeks ago
- ☆24 · Updated 8 months ago
- ☆45 · Updated 3 months ago
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆91 · Updated 3 months ago
- [ACL 2025 Findings] Implicit Reasoning in Transformers is Reasoning through Shortcuts ☆14 · Updated 2 months ago
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆45 · Updated 4 months ago
- AgentRewardBench: Evaluating Automatic Evaluations of Web Agent Trajectories ☆15 · Updated 3 weeks ago
- The official implementation of Cross-Task Experience Sharing (COPS) ☆22 · Updated 7 months ago
- ☆29 · Updated last year
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆45 · Updated 6 months ago
- The repository for the NAACL'25 paper "TART: An Open-Source Tool-Augmented Framework for Explainable Table-based Reasoning" ☆53 · Updated last month
- Verifiers for LLM Reinforcement Learning ☆56 · Updated last month
- Code and data releases for the paper "DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory" ☆44 · Updated 3 months ago
- [EMNLP 2024 Findings] ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs ☆27 · Updated 2 weeks ago
- ☆116 · Updated last month
- ☆34 · Updated last week
- The official implementation of Self-Exploring Language Models (SELM) ☆64 · Updated last year
- ☆27 · Updated last month
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆64 · Updated 3 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆59 · Updated 4 months ago
- [NAACL'25] "Revealing the Barriers of Language Agents in Planning" ☆12 · Updated 7 months ago
- Code repository for the paper "The Inherent Limits of Pretrained LLMs: The Unexpected Convergence of Instruction Tuning and In-Context Learning" ☆13 · Updated 4 months ago
- Code for "RATIONALYST: Pre-training Process-Supervision for Improving Reasoning" (https://arxiv.org/pdf/2410.01044) ☆33 · Updated 8 months ago
- ☆38 · Updated 5 months ago
- Source code for the paper "Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena" ☆44 · Updated last year
- LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models ☆18 · Updated 2 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated 2 months ago
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging (ICML 2025) ☆21 · Updated 3 months ago