Linear95 / DSP
Domain-specific preference (DSP) data and customized RM fine-tuning.
☆24 · Updated 11 months ago
Alternatives and similar repositories for DSP:
Users interested in DSP are comparing it to the repositories listed below.
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆54 · Updated 7 months ago
- ☆30 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 2 months ago
- Directional Preference Alignment ☆56 · Updated 5 months ago
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆23 · Updated 5 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆15 · Updated last month
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)". ☆51 · Updated 9 months ago
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated last year
- ☆17 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆48 · Updated 2 months ago
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆34 · Updated last year
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆26 · Updated last year
- Code and data for the paper JiuZhang3.0 ☆40 · Updated 9 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆70 · Updated last year
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆48 · Updated last year
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 · Updated 8 months ago
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆50 · Updated 8 months ago
- Analyzing LLM alignment via token distribution shift ☆15 · Updated last year
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆42 · Updated 7 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 6 months ago
- ☆65 · Updated 11 months ago
- GenRM-CoT: Data release for verification rationales ☆49 · Updated 4 months ago
- Code and models for the EMNLP 2024 paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆38 · Updated 5 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆27 · Updated 7 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆41 · Updated 4 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆20 · Updated 2 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆53 · Updated 10 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 · Updated 8 months ago
- ☆93 · Updated last year