Libr-AI/do-not-answer
Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
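Do-Not-Answer collects prompts that responsible LLMs should refuse, so a typical use is to feed each prompt to a model and grade the responses. Below is a minimal sketch of loading and inspecting the dataset; the Hugging Face dataset ID `LibrAI/do-not-answer` and the `question` column are assumptions, so check the repository's README for the exact ID and schema.

```python
# Minimal sketch: load the Do-Not-Answer prompts and inspect a few.
# Assumption (verify in the repo's README): the dataset is published on the
# Hugging Face Hub as "LibrAI/do-not-answer" with a "question" column.
from datasets import load_dataset

ds = load_dataset("LibrAI/do-not-answer", split="train")
print(ds)  # number of rows and column names

for row in ds.select(range(3)):
    print(row["question"])  # a risky prompt a well-aligned model should refuse
```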
Related projects:
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
- A Comprehensive Assessment of Trustworthiness in GPT Models
- HaluEval: a large-scale hallucination evaluation benchmark for Large Language Models.
- Papers about red-teaming LLMs and multimodal models.
- An original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji…
- 【ACL 2024】 SALAD benchmark & MD-Judge
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models"
- Code and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
- A package to evaluate the factuality of long-form generation. Original implementation of the EMNLP 2023 paper "FActScore: Fine-grained Atomic…
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety.
- Official code for "MAmmoTH2: Scaling Instructions from the Web"
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20…
- A Survey of Attributions for Large Language Models
- AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark
- The official implementation of the NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang…
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors
- Run safety benchmarks against AI models and view detailed reports showing how well they performed.
- ToolBench, an evaluation suite for LLM tool manipulation capabilities.
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety.
- Benchmarking LLMs with Challenging Tasks from Real Users
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
- An Open Robustness Benchmark for Jailbreaking Language Models [arXiv 2024]
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts"
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers
- Code accompanying "How I learned to start worrying about prompt formatting".
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias".
- A Survey on Data Selection for Language Models
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding"