sail-sg / dice
Official implementation of "Bootstrapping Language Models with DPO Implicit Rewards"
☆44 · Updated 6 months ago
Alternatives and similar repositories for dice
Users interested in dice are comparing it to the repositories listed below.
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- Directional Preference Alignment ☆57 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 5 months ago
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆47 · Updated 3 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆102 · Updated 2 weeks ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆65 · Updated 8 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆61 · Updated 2 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆26 · Updated last year
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆66 · Updated 6 months ago
- ☆103 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆121 · Updated 6 months ago
- Reinforcing General Reasoning without Verifiers ☆91 · Updated 4 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆37 · Updated last year
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning. ☆23 · Updated 2 weeks ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- RENT (Reinforcement Learning via Entropy Minimization) is an unsupervised method for training reasoning LLMs. ☆40 · Updated 3 months ago
- Sotopia-RL: Reward Design for Social Intelligence ☆43 · Updated 2 months ago
- Official repo for "Towards Uncertainty-Aware Language Agent" ☆29 · Updated last year
- ☆63 · Updated 4 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 5 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆40 · Updated 5 months ago
- ☆19 · Updated 6 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆73 · Updated 2 weeks ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆114 · Updated 5 months ago