allenai/safety-eval
A simple evaluation suite for generative language models and safety classifiers.
Related projects:
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks
- Run safety benchmarks against AI models and view detailed reports showing how well they performed.
- This repository contains data, code and models for contextual noncompliance.
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; arXiv preprint arXiv:2403.…)
- Restore safety in fine-tuned language models through task arithmetic
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method.
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model
- Scalable Meta-Evaluation of LLMs as Evaluators
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers".
- Is In-Context Learning Sufficient for Instruction Following in LLMs?
- Lightweight tool to identify data contamination in LLM evaluation
- PASTA: Post-hoc Attention Steering for LLMs
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions"
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs
- AI Logging for Interpretability and Explainability 🔬
- Tree prompting: easy-to-use scikit-learn interface for improved prompting.