schauppi / Self-Rewarding-Language-Models
☆47 · Updated last year
Alternatives and similar repositories for Self-Rewarding-Language-Models
Users interested in Self-Rewarding-Language-Models are comparing it to the repositories listed below.
- Learning to Retrieve by Trying - source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆51 · Updated 11 months ago
- Verifiers for LLM Reinforcement Learning ☆77 · Updated 6 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv 2401.01335) ☆29 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆71 · Updated last year
- ☆23 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆108 · Updated 4 months ago
- ☆122 · Updated 8 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆108 · Updated 3 months ago
- ☆128 · Updated last year
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆111 · Updated 9 months ago
- Work by the Oxen.ai community to reproduce the Self-Rewarding Language Model paper from Meta AI. ☆130 · Updated 11 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- ReBase: Training Task Experts through Retrieval-Based Distillation ☆29 · Updated 8 months ago
- The official repository for Inheritune. ☆115 · Updated 8 months ago
- ☆100 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 8 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆93 · Updated 5 months ago
- ☆55 · Updated 11 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Exploration of automated dataset-selection approaches at large scale. ☆48 · Updated 7 months ago
- ☆83 · Updated last week
- ☆35 · Updated 5 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆82 · Updated last year
- ☆86 · Updated last year
- Data preparation code for the CrystalCoder 7B LLM ☆45 · Updated last year
- ☆46 · Updated 4 months ago