yfqiu-nlp / sea-llm
Code for the paper "Spectral Editing of Activations for Large Language Model Alignment"
☆24 · Updated 4 months ago
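The paper sits in the same family of inference-time representation-editing methods as many of the repositories listed below. For rough intuition, here is a minimal sketch of projection-based activation editing, assuming PyTorch and paired hidden states collected from a chosen transformer layer; all names are illustrative and do not reflect the repository's actual implementation or API:

```python
# Conceptual sketch of projection-based activation editing (illustrative only;
# not the sea-llm API). Assumes paired "positive"/"negative" hidden states
# collected from one layer, e.g. via PyTorch forward hooks.
import torch


def edit_directions(pos: torch.Tensor, neg: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Top-k singular directions of paired activation differences.

    pos, neg: (n, d) hidden states from desirable / undesirable completions.
    """
    diff = pos - neg  # (n, d) paired differences
    # Right singular vectors span the dominant "desirable minus undesirable" subspace.
    _, _, vh = torch.linalg.svd(diff, full_matrices=False)
    return vh[:k]  # (k, d); rows are orthonormal directions


def edit(h: torch.Tensor, dirs: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Shift hidden states toward (alpha > 0) or away from (alpha < 0) the subspace."""
    proj = (h @ dirs.T) @ dirs  # component of h inside the edit subspace
    return h + alpha * proj
```

At inference time, a forward hook on the chosen layer would apply `edit` to that layer's output before it flows into the rest of the model.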
Alternatives and similar repositories for sea-llm:
Users who are interested in sea-llm are comparing it to the libraries listed below.
- ☆29 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆24 · Updated 11 months ago
- ☆42 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆57 · Updated 5 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 2 months ago
- General-purpose activation steering library ☆66 · Updated this week
- ☆40 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 3 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆76 · Updated 4 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆38 · Updated 3 months ago
- ☆29 · Updated 2 months ago
- ☆4 · Updated 3 months ago
- ☆49 · Updated last year
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated 2 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 5 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆26 · Updated 7 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆96 · Updated 2 months ago
- Answers the question "How to do patching on all available SAEs on GPT-2?". Official repository of the implementation of the p… ☆11 · Updated 3 months ago
- ☆58 · Updated 9 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives" ☆22 · Updated 6 months ago
- Code for the EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆31 · Updated 5 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆92 · Updated 11 months ago
- ☆37 · Updated last year
- ☆44 · Updated 8 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆20 · Updated last month
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated last year
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆77 · Updated 6 months ago
- ☆54 · Updated 2 years ago