allenai / safety-eval
A simple evaluation of generative language models and safety classifiers.
☆48 · Updated 8 months ago
Alternatives and similar repositories for safety-eval:
Users interested in safety-eval are comparing it to the libraries listed below.
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆68 · Updated 3 months ago
- [arXiv preprint] Official Repository for "Evaluating Language Models as Synthetic Data Generators" ☆34 · Updated 3 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆62 · Updated this week
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆95 · Updated last month
- This repository contains data, code, and models for contextual noncompliance. ☆20 · Updated 8 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆67 · Updated last year
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆83 · Updated this week
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆108 · Updated 11 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆73 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆33 · Updated last month
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆34 · Updated 7 months ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆95 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆44 · Updated 3 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆22 · Updated 9 months ago
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua…" ☆33 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 2 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆71 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆49 · Updated last month
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated last year
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆183 · Updated 8 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆84 · Updated 4 months ago