collinzrj / output2prompt
☆44 · Updated 4 months ago
Alternatives and similar repositories for output2prompt
Users who are interested in output2prompt are comparing it to the libraries listed below.
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆102 · Updated 3 weeks ago
- Improving Alignment and Robustness with Circuit Breakers ☆220 · Updated 9 months ago
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] ☆69 · Updated 5 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆101 · Updated 4 months ago
- ☆175 · Updated last year
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, … ☆50 · Updated 7 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆84 · Updated 7 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆151 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆80 · Updated 2 months ago
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- Code to enable layer-level steering in LLMs using sparse autoencoders ☆22 · Updated 2 months ago
- Official Repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆137 · Updated 11 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆112 · Updated last year
- ☆38 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆107 · Updated 9 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state ☆61 · Updated last month
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆149 · Updated 4 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆95 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆59 · Updated last month
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" ☆181 · Updated 5 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆95 · Updated 3 months ago
- ☆66 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting" ☆106 · Updated last month
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆62 · Updated 6 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆90 · Updated 8 months ago
- ☆74 · Updated 5 months ago