ryoungj / ToolEmu
[ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use
☆150 · Updated last year
Alternatives and similar repositories for ToolEmu
Users interested in ToolEmu are comparing it to the libraries listed below.
- Improving Alignment and Robustness with Circuit Breakers ☆218 · Updated 9 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆112 · Updated last year
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, … ☆50 · Updated 7 months ago
- ☆175 · Updated last year
- ☆92 · Updated 2 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆84 · Updated 7 months ago
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆128 · Updated last month
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark: https://arxiv.org/abs/2306.14898 ☆221 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆135 · Updated 11 months ago
- ☆19 · Updated 8 months ago
- ☆31 · Updated 2 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆101 · Updated 4 months ago
- A benchmark list for the evaluation of large language models. ☆130 · Updated 2 weeks ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆96 · Updated 4 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Pap… ☆221 · Updated 2 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆80 · Updated 2 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆188 · Updated 11 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated 2 months ago
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆154 · Updated 4 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆61 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆100 · Updated last month
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ☆154 · Updated last year
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆130 · Updated 3 months ago
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX… ☆77 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆146 · Updated 8 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆329 · Updated last year
- ☆114 · Updated 5 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆307 · Updated last year