counterfit — a CLI that provides a generic automation layer for assessing the security of ML models
☆912 · Updated Jul 18, 2025
Alternatives and similar repositories for counterfit
Users interested in counterfit are comparing it to the repositories listed below.
- Adversarial Threat Landscape for AI Systems ☆1,104 · Updated Apr 27, 2023
- Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and… ☆5,844 · Updated Dec 12, 2025
- Data Scientists Go To Jupyter ☆68 · Updated Mar 3, 2025
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,468 · Updated this week
- Tree of Attacks (TAP) Jailbreaking Implementation ☆118 · Updated Feb 7, 2024
- ☆19 · Updated Aug 16, 2021
- CALDERA plugin for adversary emulation of AI-enabled systems ☆112 · Updated Aug 7, 2023
- ARMORY Adversarial Robustness Evaluation Test Bed ☆189 · Updated Jan 19, 2024
- A Python pickling decompiler and static analyzer ☆604 · Updated this week
- the LLM vulnerability scanner ☆7,088 · Updated this week
- TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs… ☆3,364 · Updated Jul 10, 2025
- Privacy Testing for Deep Learning ☆211 · Updated Sep 5, 2025
- Copy cat model for Proofpoint ☆91 · Updated Apr 30, 2020
- Collection of red machine learning projects ☆41 · Updated Aug 16, 2021
- A research project to add some brrrrrr to Burp ☆207 · Updated Feb 16, 2026
- Minimal workflows ☆20 · Updated Mar 19, 2024
- Create adversarial attacks against machine learning Windows malware detectors ☆244 · Updated Nov 27, 2025
- A utility to inspect, validate, sign and verify machine learning model files. ☆66 · Updated Feb 5, 2025
- A Python library for Secure and Explainable Machine Learning ☆191 · Updated Jun 23, 2025
- Lightweight LLM Interaction Framework ☆407 · Updated this week
- Malware Bypass Research using Reinforcement Learning ☆107 · Updated Apr 22, 2024
- An experimentation and research platform to investigate the interaction of automated agents in an abstract simulated network environments… ☆1,754 · Updated Feb 11, 2026
- Test Software for the Characterization of AI Technologies ☆278 · Updated this week
- Understand adversary tradecraft and improve detection strategies ☆712 · Updated Mar 9, 2023
- An adversarial example library for constructing attacks, building defenses, and benchmarking both ☆6,412 · Updated Apr 10, 2024
- Central repo for talks and presentations ☆47 · Updated Jul 23, 2024
- New ways of breaking app-integrated LLMs ☆2,053 · Updated Jul 17, 2025
- Protection against Model Serialization Attacks ☆646 · Updated Feb 18, 2026
- Payloads for Attacking Large Language Models ☆122 · Updated Jan 13, 2026
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ☆635 · Updated Feb 16, 2026
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆458 · Updated Jan 31, 2024
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆55 · Updated May 7, 2025
- Elastic Malware Benchmark for Empowering Researchers ☆1,129 · Updated Nov 22, 2024
- Sophos-ReversingLabs 20 million sample dataset ☆691 · Updated Apr 7, 2021
- Attack Flow helps executives, SOC managers, and defenders easily understand how attackers compose ATT&CK techniques into attacks by devel… ☆719 · Updated Nov 14, 2025
- A sandbox environment designed for loading, running and profiling a wide range of files, including machine learning models, ELFs, Pickle,… ☆349 · Updated this week
- Automated Adversary Emulation Platform ☆6,761 · Updated Feb 17, 2026
- Secure Jupyter Notebooks and Experimentation Environment ☆85 · Updated Feb 6, 2025
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆157 · Updated Dec 18, 2024