[NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct
☆191 · Jan 16, 2025 · Updated last year
Alternatives and similar repositories for aligner
Users interested in aligner are comparing it to the repositories listed below.
- Self-Supervised Alignment with Mutual Information ☆20 · May 24, 2024 · Updated last year
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Feb 26, 2024 · Updated 2 years ago
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Feb 22, 2024 · Updated 2 years ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,585 · Nov 24, 2025 · Updated 3 months ago
- ☆13 · Jan 22, 2025 · Updated last year
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆339 · Feb 23, 2024 · Updated 2 years ago
- ☆21 · Jun 22, 2025 · Updated 8 months ago
- Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX… ☆88 · Mar 15, 2024 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆90 · May 2, 2025 · Updated 9 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · May 20, 2025 · Updated 9 months ago
- ☆41 · Jul 6, 2025 · Updated 7 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆66 · Dec 10, 2024 · Updated last year
- ☆195 · Nov 26, 2023 · Updated 2 years ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆86 · Oct 26, 2025 · Updated 4 months ago
- ☆16 · Jul 23, 2024 · Updated last year
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ☆108 · Mar 8, 2024 · Updated last year
- Improving Math reasoning through Direct Preference Optimization with Verifiable Pairs ☆19 · Mar 20, 2025 · Updated 11 months ago
- Official repository for ORPO ☆471 · May 31, 2024 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Feb 29, 2024 · Updated 2 years ago
- Implementation of "Decoding-time Realignment of Language Models", ICML 2024. ☆21 · Jun 17, 2024 · Updated last year
- ☆20 · Nov 3, 2024 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆512 · Oct 20, 2024 · Updated last year
- ☆313 · Jun 9, 2024 · Updated last year
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆391 · Jan 19, 2025 · Updated last year
- Recipes to train reward model for RLHF. ☆1,515 · Apr 24, 2025 · Updated 10 months ago
- Mixture of Expert (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations. ☆12 · Feb 11, 2024 · Updated 2 years ago
- Align, a general text alignment function ☆15 · Dec 7, 2023 · Updated 2 years ago
- Code for Paper (Policy Optimization in RLHF: The Impact of Out-of-preference Data) ☆28 · Dec 19, 2023 · Updated 2 years ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆107 · May 20, 2025 · Updated 9 months ago
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆28 · Apr 2, 2025 · Updated 10 months ago
- A reproduction of the paper "Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction" ☆22 · May 29, 2024 · Updated last year
- SafeSora is a human preference dataset designed to support safety alignment research in the text-to-video generation field, aiming to enh… ☆34 · Aug 20, 2024 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆125 · Mar 22, 2024 · Updated last year
- ☆21 · Jul 26, 2025 · Updated 7 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Jul 8, 2024 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆305 · Sep 11, 2024 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆87 · Mar 23, 2025 · Updated 11 months ago
- Responsible Robotic Manipulation ☆16 · Aug 31, 2025 · Updated 6 months ago
- Code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆47 · Jan 19, 2024 · Updated 2 years ago