Self-Alignment with Principle-Following Reward Models
☆170 · Updated Sep 18, 2025
Alternatives and similar repositories for SALMON
Users interested in SALMON are comparing it to the libraries listed below.
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated Feb 22, 2024
- Dromedary: towards helpful, ethical and reliable LLMs ☆1,144 · Updated Sep 18, 2025
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated Sep 9, 2024
- [EMNLP '23] Discriminator-Guided Chain-of-Thought Reasoning ☆50 · Updated Oct 11, 2024
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆591 · Updated Dec 9, 2024
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · Updated May 21, 2025
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation with a Unidirectional Reward Model" ☆45 · Updated Oct 1, 2025
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆490 · Updated Mar 19, 2024
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Updated Jul 1, 2024
- Aligning LMMs with Factually Augmented RLHF ☆393 · Updated Nov 1, 2023
- Recipes to train reward models for RLHF ☆1,521 · Updated Apr 24, 2025
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated Aug 9, 2023
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated Jan 12, 2024
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated Jun 25, 2024
- Official code repository for "AutoScale📈: Scale-Aware Data Mixing for Pre-Training LLMs", published as a conference paper at COLM 2025 ☆13 · Updated Aug 8, 2025
- An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset ☆12 · Updated Mar 29, 2023
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated Aug 18, 2023
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆40 · Updated Jun 10, 2024
- AllenAI's post-training codebase ☆3,629 · Updated this week
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆352 · Updated Dec 26, 2023
- Original implementation of "Detecting Pretraining Data from Large Language Models" by *Weijia Shi, *Anirudh Aji… ☆241 · Updated Nov 3, 2023
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Updated Oct 19, 2023
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆132 · Updated May 16, 2023
- RL algorithm: Advantage-Induced Policy Alignment ☆66 · Updated Aug 11, 2023
- Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Langu…" ☆355 · Updated Jun 18, 2023
- Scaling Data-Constrained Language Models ☆342 · Updated Jun 28, 2025
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,667 · Updated Mar 8, 2024
- AdaPlanner: Language Models for Decision Making via Adaptive Planning from Feedback ☆125 · Updated Mar 31, 2025
- Official repository for ORPO ☆473 · Updated May 31, 2024
- Scalable toolkit for efficient model alignment ☆851 · Updated Oct 6, 2025
- Simple next-token prediction for RLHF ☆229 · Updated Sep 30, 2023
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated Dec 8, 2025
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,827 · Updated Jun 17, 2025