srzer / MOD
Official code for "Decoding-Time Language Model Alignment with Multiple Objectives".
☆18 · Updated 2 months ago
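As context for the listing below, the paper behind MOD studies combining several single-objective aligned models at decoding time. The sketch that follows is a minimal, hypothetical illustration (not this repository's actual API) of one standard logit-space blend used in this line of work, where each objective gets a user-chosen weight; all model and function names here are assumptions for illustration.

```python
# Hypothetical sketch -- not the MOD repository's actual API.
# One common decoding-time blend for multi-objective alignment:
#   logits = logits_ref + sum_i w_i * (logits_i - logits_ref)
# where logits_i come from a model aligned to objective i.
import torch

def combine_logits(ref_logits: torch.Tensor,
                   objective_logits: list[torch.Tensor],
                   weights: list[float]) -> torch.Tensor:
    """Blend next-token logits from several single-objective aligned models."""
    combined = ref_logits.clone()
    for w, logits in zip(weights, objective_logits):
        combined = combined + w * (logits - ref_logits)
    return combined

# Toy usage over a 5-token vocabulary; the models here are stand-ins.
ref = torch.randn(5)       # reference (base) model's next-token logits
helpful = torch.randn(5)   # model aligned for helpfulness
harmless = torch.randn(5)  # model aligned for harmlessness
probs = torch.softmax(combine_logits(ref, [helpful, harmless], [0.7, 0.3]), dim=-1)
print(probs)  # next-token distribution under the blended policy
```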
Alternatives and similar repositories for MOD:
Users interested in MOD are comparing it to the libraries listed below.
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆61 · Updated 2 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆55 · Updated 3 weeks ago
- Lightweight Adapting for Black-Box Large Language Models ☆19 · Updated 11 months ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 6 months ago
- Official code for the paper "Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications" ☆65 · Updated 3 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆63 · Updated 6 months ago
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆15 · Updated 10 months ago
- Official implementation of Rewarded Soups ☆54 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆53 · Updated 3 months ago
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆32 · Updated 2 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations ☆20 · Updated 7 months ago
- Official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆22 · Updated 3 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated 8 months ago
- [NeurIPS 2024] Official code of β-DPO: Direct Preference Optimization with Dynamic β ☆38 · Updated 2 months ago
- [ACL 2024, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆33 · Updated 5 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆57 · Updated 3 months ago
- Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆18 · Updated 3 months ago
- [ICLR 2024] RAIN: Your Language Models Can Align Themselves without Finetuning ☆88 · Updated 7 months ago
- [NeurIPS 2023] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆71 · Updated 3 weeks ago
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆65 · Updated 3 months ago