allenai / safety-eval
A simple evaluation of generative language models and safety classifiers.
☆59 · Updated last year
Alternatives and similar repositories for safety-eval
Users interested in safety-eval are comparing it to the libraries listed below.
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆87 · Updated 8 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆225 · Updated 10 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆106 · Updated 5 months ago
- ☆91 · Updated 9 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆99 · Updated this week
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆132 · Updated 2 months ago
- ☆47 · Updated last year
- [ICLR 2025] General-purpose activation steering library ☆87 · Updated last week
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆38 · Updated 11 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆96 · Updated last year
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆182 · Updated 5 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆125 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆86 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆59 · Updated 2 months ago
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- ☆36 · Updated 2 years ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆190 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆113 · Updated last year
- Package to optimize adversarial attacks against (large) language models with varied objectives ☆70 · Updated last year
- A paper list on data contamination for large language model evaluation ☆98 · Updated 3 weeks ago
- ☆23 · Updated 10 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆299 · Updated 10 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆167 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" ☆249 · Updated last month
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆64 · Updated 6 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆130 · Updated 11 months ago
- ☆99 · Updated last year
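
Several entries above (the general-purpose activation steering library, the SERI MATS steering experiments, In-context Vectors, and Contrastive Activation Addition) share one core mechanism: adding a direction vector to a model's residual-stream activations at inference time. Below is a minimal sketch of that mechanism using a plain PyTorch forward hook on GPT-2; it is not the API of any library listed, and the layer index, strength, and random stand-in vector are illustrative assumptions (real methods derive the vector from contrastive prompt pairs).

```python
# Illustrative activation-steering sketch (not any listed library's API).
# A steering vector is added to one transformer block's hidden states
# during generation via a PyTorch forward hook.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

LAYER = 6    # which block to steer -- an assumption; methods vary
ALPHA = 4.0  # steering strength -- an assumption

# Stand-in direction; methods like contrastive activation addition derive
# this from mean activation differences over contrastive prompt pairs.
steer = torch.randn(model.config.n_embd)
steer = steer / steer.norm()

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states
    # of shape (batch, seq_len, n_embd); shift them along the steer direction.
    hidden = output[0] + ALPHA * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
prompt = tokenizer("The model replied:", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**prompt, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
handle.remove()  # restore unsteered behavior
```

Ablation-style variants (e.g. the single refusal direction above) instead project the component along the direction out of the hidden states rather than adding to it.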