ryoungj / ToolEmu
[ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use
☆145 · Updated last year
Alternatives and similar repositories for ToolEmu
Users interested in ToolEmu are comparing it to the repositories listed below
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆109 · Updated last year
- ☆85 · Updated last month
- [ACL 2024] SALAD benchmark & MD-Judge ☆150 · Updated 3 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆214 · Updated 9 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆144 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆326 · Updated last year
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆134 · Updated 11 months ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆61 · Updated last month
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 11 months ago
- ☆85 · Updated 7 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated last month
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆82 · Updated 6 months ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆220 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- Code and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ☆100 · Updated last year
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆188 · Updated 11 months ago
- A benchmark list for evaluating large language models. ☆128 · Updated last month
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆225 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆61 · Updated 5 months ago
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated last month
- Code for paper "LEVER: Learning to Verify Language-to-Code Generation with Execution" (ICML'23) ☆88 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆125 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆77 · Updated last month
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆98 · Updated 4 months ago
- ☆19 · Updated 8 months ago
- ☆232 · Updated 10 months ago
- ☆175 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆262 · Updated last year