architsharma97 / dpo-rlaif
☆95 · Updated 8 months ago
Alternatives and similar repositories for dpo-rlaif:
Users interested in dpo-rlaif are comparing it to the repositories listed below.
- ☆80 · Updated 8 months ago
- ☆64 · Updated 4 months ago
- The official implementation of Self-Exploring Language Models (SELM) · ☆62 · Updated 9 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆101 · Updated 11 months ago
- ☆153 · Updated last week
- ☆101 · Updated last month
- Replicating O1 inference-time scaling laws · ☆83 · Updated 3 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆96 · Updated 11 months ago
- Self-Alignment with Principle-Following Reward Models · ☆156 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆53 · Updated 5 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision · ☆118 · Updated 6 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" · ☆72 · Updated 9 months ago
- ☆110 · Updated 3 weeks ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… · ☆48 · Updated 10 months ago
- Critique-out-Loud Reward Models · ☆53 · Updated 4 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" · ☆49 · Updated last month
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆107Updated last week
- Long Context Extension and Generalization in LLMs · ☆50 · Updated 5 months ago
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples · ☆76 · Updated last week
- ☆73 · Updated 6 months ago
- ☆120 · Updated 4 months ago
- Directional Preference Alignment · ☆56 · Updated 5 months ago
- ☆135 · Updated 3 months ago
- ☆85 · Updated 5 months ago
- ☆59 · Updated 10 months ago
- ☆46 · Updated 7 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… · ☆26 · Updated 5 months ago