Lingkai-Kong / RE-Control
☆20 · Updated last month
Alternatives and similar repositories for RE-Control:
Users interested in RE-Control are comparing it to the libraries listed below.
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆18 · Updated 3 months ago
- Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆65 · Updated 6 months ago
- ☆34 · Updated 11 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆61 · Updated 3 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆39 · Updated 3 months ago
- Lightweight Adapting for Black-Box Large Language Models ☆19 · Updated 11 months ago
- Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆31 · Updated 2 months ago
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆14 · Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆62 · Updated 2 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆30 · Updated 2 weeks ago
- Official code for the paper: Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆68 · Updated 3 months ago
- ☆26 · Updated 3 months ago
- ☆26 · Updated 11 months ago
- ☆48 · Updated last year
- ☆86 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆92 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆50 · Updated 10 months ago
- [NAACL'25] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆41 · Updated 2 months ago
- ☆43 · Updated 5 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆135 · Updated 3 months ago
- ☆19 · Updated 6 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆112 · Updated 4 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆64 · Updated 5 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆103 · Updated 10 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆55 · Updated last month
- ☆30 · Updated 9 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆22 · Updated 2 weeks ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆21 · Updated 7 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆53 · Updated 4 months ago
- ☆83 · Updated last year