architsharma97 / dpo-rlaif
☆94 · Updated 7 months ago
Alternatives and similar repositories for dpo-rlaif:
Users interested in dpo-rlaif are comparing it to the repositories listed below.
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. · ☆98 · Updated this week
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples · ☆58 · Updated last week
- Self-Alignment with Principle-Following Reward Models · ☆152 · Updated 11 months ago
- The official implementation of Self-Exploring Language Models (SELM) · ☆61 · Updated 7 months ago
- Critique-out-Loud Reward Models · ☆48 · Updated 3 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" · ☆95 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision · ☆112 · Updated 4 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OPENAI☆103Updated this week
- Replicating O1 inference-time scaling laws☆73Updated last month
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment"☆71Updated 7 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl…☆64Updated 5 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆49 · Updated 3 months ago
- A brief and partial summary of RLHF algorithms · ☆89 · Updated 2 months ago
- Function Vectors in Large Language Models (ICLR 2024) · ☆135 · Updated 3 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs · ☆51 · Updated 10 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" · ☆114 · Updated 2 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆42 · Updated 11 months ago
- Repository maintained to release datasets and models for multimodal puzzle reasoning · ☆55 · Updated 2 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards · ☆49 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging · ☆98 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models · ☆107 · Updated 8 months ago