Lingkai-Kong / RE-Control
Code for the paper: Aligning Large Language Models with Representation Editing: A Control Perspective
☆32 · Updated 4 months ago
Alternatives and similar repositories for RE-Control
Users interested in RE-Control are comparing it to the repositories listed below.
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- ☆40 · Updated 3 months ago
- ☆40 · Updated last year
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆36 · Updated last month
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 4 months ago
- ☆38 · Updated 3 months ago
- awesome SAE papers ☆34 · Updated last week
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆85 · Updated 7 months ago
- A curated list of resources for activation engineering ☆85 · Updated last week
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆25 · Updated 2 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆39 · Updated 4 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆56 · Updated 5 months ago
- A Sober Look at Language Model Reasoning ☆63 · Updated last week
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆25 · Updated 8 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆79 · Updated 9 months ago
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆125 · Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 2 months ago
- Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆34 · Updated 6 months ago
- This repo contains the code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib… ☆17 · Updated 2 months ago
- ☆131 · Updated 3 weeks ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆58 · Updated 6 months ago
- SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆15 · Updated 2 months ago
- ☆58 · Updated 10 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆28 · Updated 2 months ago
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆22 · Updated 5 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆67 · Updated 8 months ago
- ☆49 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆44 · Updated 7 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆69 · Updated 5 months ago