facebookresearch / RLCD
Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment"
☆66 · Updated last year
Alternatives and similar repositories for RLCD:
Users interested in RLCD are comparing it to the libraries listed below.
- Directional Preference Alignment ☆56 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 5 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆43 · Updated 7 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆70 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆118 · Updated 6 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆80 · Updated 5 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆56 · Updated 3 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆51 · Updated 9 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆130 · Updated 3 weeks ago
- GenRM-CoT: Data release for verification rationales ☆49 · Updated 4 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆62 · Updated 7 months ago
- ☆78 · Updated this week
- ☆38 · Updated 4 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆53 · Updated 2 months ago
- ☆43 · Updated last year
- Self-Alignment with Principle-Following Reward Models