collinzrj / output2prompt
☆37 · Updated 2 months ago
Alternatives and similar repositories for output2prompt:
Users interested in output2prompt are comparing it to the repositories listed below.
- Repo for the research paper "Aligning LLMs to Be Robust Against Prompt Injection" (☆32, updated last month)
- Weak-to-Strong Jailbreaking on Large Language Models (☆73, updated 11 months ago)
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) (☆65, updated 3 months ago)
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" (☆112, updated 6 months ago)
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" (☆32, updated 3 months ago)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆88, updated 7 months ago)
- Does Refusal Training in LLMs Generalize to the Past Tense? [NeurIPS 2024 Safe Generative AI Workshop (Oral)] (☆60, updated 3 months ago)
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use (☆125, updated 10 months ago)
- ☆39 (updated last year)
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs (☆56, updated last month)
- InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales (☆64, updated 2 months ago)
- A paper list on data contamination for large language model evaluation (☆87, updated last week)
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs (☆79, updated 11 months ago)
- [ACL 2024] SALAD benchmark & MD-Judge (☆116, updated last month)
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" (☆64, updated 7 months ago)
- Scalable Meta-Evaluation of LLMs as Evaluators (☆42, updated 11 months ago)
- Package to optimize adversarial attacks against (large) language models with varied objectives (☆66, updated 10 months ago)
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" (☆77, updated last year)
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023), https://arxiv.org/abs/2305.14888 (☆35, updated 7 months ago)
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples (☆57, updated this week)
- Python package for measuring memorization in LLMs (☆134, updated last month)
- ☆52 (updated 3 weeks ago)
- Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) (☆31, updated 2 months ago)
- ☆36 (updated last year)
- Awesome LLM Self-Consistency: a curated list of self-consistency in large language models (☆86, updated 5 months ago)
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state (☆59, updated 3 weeks ago)
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following (☆119, updated 6 months ago)
- Lightweight tool to identify data contamination in LLM evaluation (☆45, updated 10 months ago)
- Official code for the paper "Evaluating Copyright Takedown Methods for Language Models" (☆16, updated 6 months ago)
- Benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses" (☆27, updated 5 months ago)