qiuhuachuan / latent-jailbreak
☆40 · Updated last year
Alternatives and similar repositories for latent-jailbreak
Users interested in latent-jailbreak are comparing it to the repositories listed below.
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆87 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆89 · Updated last year
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆82 · Updated 2 years ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated 2 years ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆50 · Updated last year
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misinformation" ☆103 · Updated 11 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Code for the ACL 2025 publication "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆33 · Updated 4 months ago
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 11 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆56 · Updated last year
- ☆28 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆139 · Updated 5 months ago
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆232 · Updated last year
- Code and datasets of the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆105 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆132 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Code and data for the FACTOR paper ☆52 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆162 · Updated last year
- Feeling confused about superalignment? Here is a reading list. ☆43 · Updated last year
- ☆47 · Updated last year
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant sentences to the problems. ☆63 · Updated 2 years ago
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆171 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆91 · Updated 5 months ago
- Benchmarking LLMs' Psychological Portrayal ☆124 · Updated 9 months ago
- ☆141 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing ☆100 · Updated last year
- ☆44 · Updated last year
- PPTC Benchmark: Evaluating Large Language Models for PowerPoint Task Completion ☆57 · Updated last year
- The first Chinese dataset for safety detection in psychological counseling dialogues ☆22 · Updated last year