dunzeng / MORE
Code for EMNLP'24 paper - On Diversified Preferences of Large Language Model Alignment
☆16 · Updated 10 months ago
Alternatives and similar repositories for MORE
Users interested in MORE are comparing it to the libraries listed below.
- ☆14 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- [NAACL 2024 Findings] Evaluation suite for the systematic evaluation of instruction selection methods. ☆22 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Evaluate the Quality of Critique ☆35 · Updated last year
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆14 · Updated 2 weeks ago
- Official implementation of AAAI 2025 paper "Augmenting Math Word Problems via Iterative Question Composing" (https://arxiv.org/abs/2401.09…) ☆20 · Updated 6 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated last year
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated last year
- ☆16 · Updated 7 months ago
- ☆19 · Updated 9 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 5 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 2 months ago
- ☆29 · Updated 11 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 9 months ago
- DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails ☆24 · Updated 4 months ago
- This repository contains code for Adaptive Data Optimization ☆25 · Updated 6 months ago
- Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games" ☆17 · Updated last year
- ☆39 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- ☆16 · Updated 11 months ago
- ☆18 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- ☆14 · Updated last year
- Directional Preference Alignment ☆57 · Updated 9 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated last month