collinzrj / output2prompt
☆37 · Updated 3 months ago
Alternatives and similar repositories for output2prompt:
Users interested in output2prompt are comparing it to the repositories listed below.
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆66 · Updated 2 weeks ago
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique ☆13 · Updated 6 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆84 · Updated last week
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆66 · Updated last year
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆75 · Updated 2 weeks ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆98 · Updated 4 months ago
- ☆41 · Updated 2 weeks ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆117 · Updated 7 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆61 · Updated 2 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated last year
- Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆32 · Updated 3 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆107 · Updated 9 months ago
- ☆17 · Updated 4 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆128 · Updated 11 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs ☆83 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 8 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆37 · Updated last month
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state ☆58 · Updated 3 weeks ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆125 · Updated 2 months ago
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆32 · Updated last week
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆25 · Updated this week
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆107 · Updated 7 months ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023): https://arxiv.org/abs/2305.14888 ☆35 · Updated 8 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 7 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆64 · Updated 8 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆91 · Updated last month
- ☆17 · Updated 3 months ago
- ☆31 · Updated last year
- LLM Unlearning ☆142 · Updated last year
- The official repository of the paper "On the Exploitability of Instruction Tuning" ☆58 · Updated last year