bertiev / SimpleSafetyTests
☆17 · Updated last year
Alternatives and similar repositories for SimpleSafetyTests:
Users who are interested in SimpleSafetyTests are comparing it to the libraries listed below.
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆95 · Updated last month
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆78 · Updated 10 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆68 · Updated 3 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 8 months ago
- General-purpose activation steering library ☆54 · Updated 2 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆91 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆134 · Updated 10 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆71 · Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆88 · Updated 6 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆71 · Updated 3 weeks ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 10 months ago
- LLM Unlearning ☆151 · Updated last year
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆29 · Updated 4 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆34 · Updated 2 months ago
- Official repository for ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆124 · Updated 8 months ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆95 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆52 · Updated 4 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆190 · Updated 6 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆134 · Updated 3 weeks ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆59 · Updated 2 months ago
- Code and data of the EMNLP 2022 paper "Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversaria…" ☆47 · Updated 2 years ago
- Official implementation of ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆73 · Updated last year