dunzeng / MORE
Code for the EMNLP'24 paper "On Diversified Preferences of Large Language Model Alignment".
☆16 · Updated last year
Alternatives and similar repositories for MORE
Users interested in MORE are comparing it to the repositories listed below.
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆25 · Updated last year
- Official implementation of the AAAI 2025 paper "Augmenting Math Word Problems via Iterative Question Composing" (https://arxiv.org/abs/2401.09…) ☆20 · Updated 9 months ago
- Evaluate the Quality of Critique ☆36 · Updated last year
- Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment (ICML 2024) ☆58 · Updated last year
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆16 · Updated 3 months ago
- Code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca…" ☆61 · Updated 2 years ago
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Official code repository for "AutoScale: Scale-Aware Data Mixing for Pre-Training LLMs", published as a conference paper at COLM 2025 ☆12 · Updated last month
- Directional Preference Alignment ☆59 · Updated 11 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated last year
- Code for Adaptive Data Optimization ☆25 · Updated 9 months ago
- Code and data used in the paper "RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold" ☆30 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆72 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 5 months ago
- Analyzing LLM alignment via token distribution shift ☆17 · Updated last year
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆40 · Updated 4 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated last year
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner ☆28 · Updated last year
- Code for our paper "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆56 · Updated 2 years ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- Codebase for Inference-Time Policy Adapters ☆24 · Updated last year