limenlp / safer-instruct
This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data"
☆17 · Updated last year
Alternatives and similar repositories for safer-instruct
Users interested in safer-instruct are comparing it to the libraries listed below
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- The repository contains code for Adaptive Data Optimization ☆32 · Updated last year
- ☆16 · Updated last year
- ☆52 · Updated 11 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆32 · Updated 6 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 4 months ago
- Exploration of automated dataset selection approaches at large scales. ☆52 · Updated 11 months ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · Updated last year
- ☆64 · Updated last year
- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception o…" ☆28 · Updated 7 months ago
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated 2 years ago
- ☆15 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆73 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence