declare-lab / ferret
Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique
☆18 · Updated last year
Alternatives and similar repositories for ferret
Users interested in ferret are comparing it to the libraries listed below.
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆48 · Updated last year
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆32 · Updated 11 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆91 · Updated 7 months ago
- ☆39 · Updated 2 years ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆73 · Updated 7 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆70 · Updated last year
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆66 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆31 · Updated last year
- Codebase for Inference-Time Policy Adapters ☆24 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆47 · Updated 3 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆123 · Updated last year
- ☆22 · Updated last year
- ☆191 · Updated 2 years ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆30 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆158 · Updated 6 months ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Updated 11 months ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆117 · Updated last year
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆152 · Updated last year
- [EMNLP 2024 Findings] ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs ☆29 · Updated 7 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆39 · Updated 4 months ago
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆50 · Updated 11 months ago
- Test LLMs against jailbreaks and unprecedented harms ☆36 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models ☆19 · Updated 4 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆35 · Updated 10 months ago
- ☆27 · Updated last year