qiuhuachuan / latent-jailbreak
☆38 · Updated last year
Alternatives and similar repositories for latent-jailbreak
Users interested in latent-jailbreak are comparing it to the repositories listed below
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆76 · Updated 3 weeks ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆73 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs ☆42 · Updated 11 months ago
- [EMNLP 2024] A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models ☆17 · Updated 8 months ago
- Code and data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆64 · Updated last year
- Recent papers on (1) psychology of LLMs and (2) biases in LLMs ☆49 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆85 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆51 · Updated last year
- [EMNLP 2024] Official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 8 months ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆48 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆27 · Updated 2 years ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆68 · Updated 3 weeks ago
- S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models ☆70 · Updated last month
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆80 · Updated last year
- An open-source library for contamination detection in NLP datasets and large language models (LLMs) ☆57 · Updated 9 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · Updated 2 years ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆100 · Updated last year
- ☆49 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆35 · Updated 9 months ago
- Flames: a highly adversarial Chinese benchmark for evaluating LLM harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group ☆50 · Updated last year
- ☆42 · Updated last year
- Feeling confused about superalignment? Here is a reading list ☆42 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 11 months ago
- A framework for editing chains of thought (CoTs) for better factuality ☆49 · Updated last year
- Code and demo program for LLM self-verification ☆60 · Updated last year
- ☆44 · Updated 9 months ago
- ☆74 · Updated last year