Libr-AI / do-not-answer
Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
☆294 · Updated last year
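Do-Not-Answer is a prompt dataset: each row is an instruction that a responsible LLM should refuse rather than answer. A minimal sketch of pulling the prompts for a refusal evaluation follows, assuming the dataset is mirrored on the Hugging Face Hub under LibrAI/do-not-answer and exposes a question column; both are assumptions, so check the repository for the canonical loading path.

```python
# Minimal sketch: load Do-Not-Answer prompts for a refusal evaluation.
# ASSUMPTIONS: the dataset is mirrored on the Hugging Face Hub as
# "LibrAI/do-not-answer" and has a "question" column -- verify both
# against the repository before relying on this.
from datasets import load_dataset

ds = load_dataset("LibrAI/do-not-answer", split="train")
print(ds.column_names)  # inspect the actual schema first

for row in ds.select(range(3)):  # peek at a few risky prompts
    prompt = row.get("question", next(iter(row.values())))
    print(prompt)  # feed these to the model under test; a safe model refuses
```

In practice you would send each prompt to the model under evaluation and score the responses for refusal, which is the setting the benchmarks listed below target.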
Alternatives and similar repositories for do-not-answer
Users interested in do-not-answer are comparing it to the repositories listed below.
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆517 · Updated last year
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆571 · Updated last year
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆306 · Updated last year
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆205 · Updated 10 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆328 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆163 · Updated 7 months ago
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆232 · Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆260 · Updated 3 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆172 · Updated 6 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆163 · Updated 2 years ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆105 · Updated last year
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. ☆335 · Updated last year
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆92 · Updated 10 months ago
- A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆394 · Updated 6 months ago
- Papers about red teaming LLMs and multimodal models. ☆145 · Updated 5 months ago
- Code and data of the EMNLP 2022 paper "Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversaria… ☆61 · Updated 2 years ago
- ☆221 · Updated 4 years ago
- Improving Alignment and Robustness with Circuit Breakers ☆238 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆116 · Updated 8 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆601 · Updated 4 months ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆146 · Updated last year
- ✨✨ Latest Papers about LLM-based Evaluators ☆30 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆90 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆522 · Updated 9 months ago
- Generative Judge for Evaluating Alignment ☆247 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and evaluation. ☆381 · Updated 2 years ago
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models ☆99 · Updated 2 weeks ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆91 · Updated 5 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆292 · Updated 4 months ago