vicgalle / zero-shot-reward-models
ZYN: Zero-Shot Reward Models with Yes-No Questions
☆35 · Updated 2 years ago
Alternatives and similar repositories for zero-shot-reward-models
Users interested in zero-shot-reward-models are comparing it to the libraries listed below.
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆49 · Updated 2 years ago
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs (ACL 2023) ☆64 · Updated last year
- The instructions and demonstrations for building a formal logical reasoning capable GLM ☆55 · Updated last year
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024) ☆72 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 6 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆75 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆35 · Updated last year
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) ☆79 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- Syntax Error-Free and Generalizable Tool Use for LLMs via Finite-State Decoding ☆28 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆65 · Updated 2 years ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated 11 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 2 months ago
- ☆102 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆100 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models