vicgalle / zero-shot-reward-models
ZYN: Zero-Shot Reward Models with Yes-No Questions
☆35 · Updated 2 years ago
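The repository's title describes the core idea: use a language model itself as a zero-shot reward model by asking it a yes/no question about a candidate text and reading off the probability of "Yes". Below is a minimal sketch of that idea, assuming a Hugging Face causal LM; the prompt format, model choice, and `yes_no_reward` helper are illustrative and not necessarily the paper's exact setup.

```python
# Minimal sketch of a zero-shot yes/no reward model.
# Hypothetical prompt format and model choice; ZYN's exact setup may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def yes_no_reward(text: str, question: str) -> float:
    """Score `text` by the probability the LM answers 'Yes' to `question`."""
    prompt = f"{text}\n\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token distribution
    probs = torch.softmax(logits, dim=-1)
    # Assumes " Yes" / " No" each map to a single token (true for GPT-2's BPE).
    yes_id = tokenizer.encode(" Yes")[0]
    no_id = tokenizer.encode(" No")[0]
    # Normalize over the two answer tokens so the reward lies in [0, 1].
    return (probs[yes_id] / (probs[yes_id] + probs[no_id])).item()

print(yes_no_reward("The movie was wonderful.", "Is this review positive?"))
```

A score near 1 means the model judges the answer to be "Yes"; such a scalar can then serve as a reward signal, e.g. for RLHF-style fine-tuning, without training a separate reward model.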
Alternatives and similar repositories for zero-shot-reward-models
Users interested in zero-shot-reward-models are comparing it to the repositories listed below.
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs (ACL 2023) ☆64 · Updated last year
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆55 · Updated last year
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆50 · Updated 2 years ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024). Do not hesitate t… ☆74 · Updated last year
- ☆102 · Updated 2 years ago
- PreAct: Prediction Enhances Agent's Planning Ability (COLING 2025) ☆30 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models ☆86 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆90 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆53 · Updated 7 months ago
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) ☆81 · Updated last year
- Code for the ACL 2024 paper Adversarial Preference Optimization (APO) ☆56 · Updated last year
- [ACL'24] Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements ☆23 · Updated last year
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023) ☆64 · Updated 2 years ago
- RL algorithm: Advantage-Induced Policy Alignment ☆66 · Updated 2 years ago
- Repository for Skill Set Optimization ☆14 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location ☆85 · Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆25 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Directional Preference Alignment ☆58 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 8 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- DialOp: Decision-oriented dialogue environments for collaborative language agents ☆111 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆116 · Updated 2 years ago
- [NeurIPS 2023 Main Track] Repository for the paper "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…" ☆76 · Updated last year
- Improving Language Understanding from Screenshots. Paper: https://arxiv.org/abs/2402.14073 ☆31 · Updated last year