TrustAgent: Towards Safe and Trustworthy LLM-based Agents
☆56, updated Feb 7, 2025
Alternatives and similar repositories for TrustAgent
Users interested in TrustAgent are comparing it to the repositories listed below.
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" (☆26, updated Jan 25, 2025)
- [ACL 2025] The official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" (☆33, updated Aug 4, 2025)
- ☆10, updated Jul 9, 2020
- LLAP is an LLVM-based tool for generating enriched program dependency graphs (ePDGs) from program source code that are suitable for use i… (☆16, updated May 17, 2023)
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture (☆16, updated Aug 29, 2022)
- The official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" (☆17, updated Jul 5, 2025)
- ☆13, updated Jan 7, 2025
- Evolving fuzzers with large language models (☆17, updated Dec 14, 2023)
- ☆19, updated Aug 23, 2024
- Security Vulnerability Repair via Concolic Execution and Code Mutations (☆19, updated Sep 12, 2024)
- ☆22, updated Dec 22, 2024
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" (☆18, updated Oct 16, 2024)
- Data for LLM4Vuln (☆14, updated Jan 28, 2024)
- ☆17, updated Jul 11, 2025
- ☆27, updated Jun 6, 2023
- Code for the paper "Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication" (☆22, updated Mar 30, 2024)
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" (☆30, updated Jun 23, 2025)
- A repository with several COBOL libraries [intended to be used with the latest COBOL standard], each containing a number of very useful fu… (☆12, updated Feb 10, 2025)
- Concealed Data Poisoning Attacks on NLP Models (☆21, updated Sep 4, 2023)
- The official implementation of the USENIX Security '23 paper "Meta-Sift": ten minutes or less to find a 1000-size or larger clean subset on … (☆20, updated Apr 27, 2023)
- Process Orchestration Framework: a Camunda 7 fork (☆21, updated this week)
- ☆55, updated Dec 7, 2024
- Code & data for "Comparative Opinion Summarization via Collaborative Decoding" (Iso et al.; Findings of ACL 2022) (☆23, updated Mar 3, 2025)
- Python GUI for seeing what's happening inside a fuzzer (☆26, updated Oct 1, 2021)
- FastKLEE: Faster Symbolic Execution via Reducing Redundant Bound Checking of Type-Safe Pointers (☆23, updated May 31, 2023)
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning (☆20, updated Jan 24, 2024)
- A package that achieves a 95%+ transfer attack success rate against GPT-4 (☆26, updated Oct 24, 2024)
- ☆86, updated Mar 20, 2025
- "To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models" (☆33, updated May 21, 2025)
- Repository for the paper "RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios?" (☆27, updated May 1, 2025)
- Code to generate NeuralExecs (prompt injection for LLMs) (☆27, updated Oct 5, 2025)
- ☆24, updated Sep 5, 2023
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) (☆27, updated Nov 18, 2024)
- Repository for the AAAI 2024 (Oral) paper "Visual Adversarial Examples Jailbreak Large Language Models" (☆266, updated May 13, 2024)
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆26, updated Jan 7, 2022)
- [USENIX Security 2023] Code repository for the paper "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" (☆30, updated Jul 11, 2023)
- ☆23, updated Dec 30, 2025
- TVMFuzz: fuzzing tensor-level intermediate representation in TVM (☆30, updated May 24, 2020)
- ☆37, updated Aug 28, 2025