qiuhuachuan / latent-jailbreak
☆38 · Updated last year
Alternatives and similar repositories for latent-jailbreak
Users interested in latent-jailbreak are comparing it to the repositories listed below.
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆75 · Updated last year
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆78 · Updated last month
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- Official GitHub repo for AutoDetect, an automated weakness detection framework for LLMs. ☆42 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆51 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆101 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆49 · Updated 2 years ago
- Feeling confused about superalignment? Here is a reading list. ☆43 · Updated last year
- Implementation of "Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation" ☆80 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆41 · Updated last year
- Mostly recording papers about models' trustworthy applications, intending to include topics like model evaluation & analysis, security, c… ☆21 · Updated 2 years ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆15 · Updated 5 months ago
- Code and data for the FACTOR paper ☆47 · Updated last year
- ☆55 · Updated 10 months ago
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety. ☆85 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- [EMNLP 2024] A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models. ☆18 · Updated 9 months ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated last year
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models ☆29 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆61 · Updated 6 months ago
- ☆50 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆150 · Updated 3 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆140 · Updated last month
- [ACL 2023] Solving Math Word Problems via Cooperative Reasoning induced Language Models (LLMs + MCTS + Self-Improvement) ☆49 · Updated last year
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆26 · Updated last year
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆48 · Updated last year