chrisliu298 / awesome-representation-engineering
A resource repository for representation engineering in large language models
☆54 · Updated this week
Related projects
Alternatives and complementary repositories for awesome-representation-engineering
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆54 · Updated 2 weeks ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆78 · Updated last year
- Official code for the paper: Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆59 · Updated last month
- Official implementation for the paper: Towards General Conceptual Model Editing via Adversarial Representation Engineering ☆12 · Updated last month
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆71 · Updated 6 months ago
- Landing Page for TOFU ☆98 · Updated 5 months ago
- ☆49 · Updated last year
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆63 · Updated 10 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆119 · Updated last month
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition ☆79 · Updated 6 months ago
- ☆81 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆71 · Updated 2 months ago
- ☆16 · Updated 4 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆84 · Updated 5 months ago
- ☆39 · Updated last year
- ☆170 · Updated 8 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆97 · Updated 5 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆75 · Updated this week
- ☆26 · Updated 6 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆53 · Updated last month
- Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆28 · Updated 4 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆89 · Updated 5 months ago
- ☆35 · Updated 4 months ago
- ☆21 · Updated last month
- A lightweight library for large language model (LLM) jailbreaking defense ☆39 · Updated last month
- Weak-to-Strong Jailbreaking on Large Language Models ☆67 · Updated 8 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆75 · Updated last month
- ☆38 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆47 · Updated last month
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆45 · Updated 7 months ago