mathllm / Step-Controlled_DPO
☆21 · Updated 8 months ago
Alternatives and similar repositories for Step-Controlled_DPO:
Users interested in Step-Controlled_DPO are comparing it to the repositories listed below.
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 3 months ago
- ☆43 · Updated 4 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆24 · Updated 3 months ago
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆17 · Updated 8 months ago
- ☆29 · Updated 2 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆55 · Updated 3 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆37 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆26 · Updated 6 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆56 · Updated 5 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆17 · Updated 3 months ago
- ☆16 · Updated last month
- ☆13 · Updated 8 months ago
- ☆59 · Updated 6 months ago
- Code for Paper: Teaching Language Models to Critique via Reinforcement Learning ☆84 · Updated last month
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆101 · Updated this week
- Public code repo for paper "Aligning LLMs with Individual Preferences via Interaction" ☆24 · Updated 5 months ago
- Evaluate the Quality of Critique ☆35 · Updated 9 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆60 · Updated 4 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated last month
- The code and data for the paper JiuZhang3.0 ☆42 · Updated 9 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆58 · Updated 3 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆55 · Updated last month
- The official repository of the Omni-MATH benchmark. ☆77 · Updated 3 months ago
- ☆15 · Updated 8 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆51 · Updated 3 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆50 · Updated 4 months ago
- Official implementation of ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆35 · Updated last week
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆45 · Updated 2 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆27 · Updated this week
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation, ICML 2024 ☆22 · Updated 8 months ago