IBM / SALMON
Self-Alignment with Principle-Following Reward Models
☆166 · Updated 2 weeks ago
Alternatives and similar repositories for SALMON
Users interested in SALMON are comparing it to the repositories listed below.
- ☆102 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated 11 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆123 · Updated 10 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 4 months ago
- ☆100 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆108 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- ☆159 · Updated 2 years ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated this week
- ☆136 · Updated 10 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- [EMNLP '23] Discriminator-Guided Chain-of-Thought Reasoning ☆49 · Updated 11 months ago
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆107 · Updated last year
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 · Updated last year
- ☆52 · Updated 4 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆82 · Updated last year
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆52 · Updated last year
- ☆69 · Updated last year
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆163 · Updated last year
- Directional Preference Alignment ☆59 · Updated last year
- ☆67 · Updated 3 years ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 11 months ago
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs. ACL 2023. ☆64 · Updated 10 months ago
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆130 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆52 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year