glorgao / SelectiveDPO
Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples
☆37 · Updated 2 months ago
Alternatives and similar repositories for SelectiveDPO
Users interested in SelectiveDPO are comparing it to the repositories listed below.
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆34 · Updated 5 months ago
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆20 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆80 · Updated 10 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆24 · Updated 7 months ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆32 · Updated 4 months ago
- ☆40 · Updated last year
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ☆27 · Updated 2 months ago
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆11 · Updated 3 months ago
- Official repository for "Safety Challenges in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large… ☆44 · Updated 2 weeks ago
- ☆49 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated 2 weeks ago
- Official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆28 · Updated 3 months ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆37 · Updated 10 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆26 · Updated 5 months ago
- [ICLR 2025] Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- Official implementation of "ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting" ☆19 · Updated 10 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆45 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆72 · Updated last week
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆75 · Updated last year
- ☆26 · Updated last year
- ☆13 · Updated last year
- SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆17 · Updated 2 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆134 · Updated 2 months ago
- Official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable" ☆18 · Updated 3 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆35 · Updated 7 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆25 · Updated last week
- Code and data repository for "The Mirage of Model Editing: Revisiting Evaluation in the Wild" ☆14 · Updated 3 weeks ago