javiferran / sae_entities
☆29 · Updated 2 months ago
Alternatives and similar repositories for sae_entities:
Users interested in sae_entities are comparing it to the libraries listed below.
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 2 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆54 · Updated 5 months ago
- ☆29 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆64 · Updated 7 months ago
- General-purpose activation steering library ☆66 · Updated 4 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆37 · Updated 3 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated last month
- ☆36 · Updated 7 months ago
- A curated list of resources for activation engineering ☆67 · Updated last month
- Provides the answer to "How to do patching on all available SAEs on GPT-2?". Official repository of the implementation of the p… ☆11 · Updated 3 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆58 · Updated 7 months ago
- Code for EMNLP 2024 paper: Neuron-Level Knowledge Attribution in Large Language Models ☆31 · Updated 5 months ago
- A resource repository for representation engineering in large language models ☆121 · Updated 5 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆75 · Updated 4 months ago
- ☆21 · Updated last month
- ☆40 · Updated last year
- Code for Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities (NeurIPS'24) ☆21 · Updated 4 months ago
- ☆51 · Updated 3 weeks ago
- Official code for ICML 2024 paper on Persona In-Context Learning (PICLe) ☆24 · Updated 10 months ago
- ☆49 · Updated last year
- ☆93 · Updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆20 · Updated last month
- ☆58 · Updated 9 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated 2 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆24 · Updated 4 months ago
- ☆31 · Updated last week
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Source code for NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆41 · Updated 3 weeks ago
- Code for paper: Aligning Large Language Models with Representation Editing: A Control Perspective ☆29 · Updated 3 months ago