code-philia / PhishVLM
☆24 · Updated 3 weeks ago
Alternatives and similar repositories for PhishVLM
Users interested in PhishVLM are comparing it to the repositories listed below.
- A pretrained BERT model for cybersecurity text, with learned cybersecurity knowledge ☆189 · Updated 2 years ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆288 · Updated 2 months ago
- ☆64 · Updated 11 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆230 · Updated last year
- CTINexus is a framework that leverages optimized in-context learning of LLMs to enable data-efficient extraction of cyber threat intellig… ☆49 · Updated last week
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆91 · Updated 7 months ago
- Datasets for cybersecurity ☆12 · Updated last month
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆417 · Updated last year
- An extended version of SecureBERT, trained on top of both the base and large versions of RoBERTa using 10 GB of cybersecurity-related data ☆28 · Updated last year
- Automated Safety Testing of Large Language Models ☆16 · Updated 7 months ago
- CyberMetric dataset ☆103 · Updated 8 months ago
- The repository of the paper "HackMentor: Fine-Tuning Large Language Models for Cybersecurity". ☆130 · Updated last year
- Machine learning on knowledge graphs for context-aware security monitoring (data and model) ☆18 · Updated 3 years ago
- SecureBERT is a domain-specific language model to represent cybersecurity textual data. ☆97 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆718 · Updated 5 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆275 · Updated 3 weeks ago
- This is a dataset intended to train an LLM on completely CVE-focused input and output. ☆63 · Updated 3 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆529 · Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆149 · Updated 9 months ago
- ☆62 · Updated 9 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆424 · Updated last year
- Implementations of 3 phishing detection and identification baselines ☆19 · Updated 10 months ago
- ☆622 · Updated 2 months ago
- [USENIX Security 2024] Official Repository of 'KnowPhish: Large Language Models Meet Multimodal Knowledge Graphs for Enhancing Reference-… ☆13 · Updated last month
- Code to generate NeuralExecs (prompt injection for LLMs) ☆22 · Updated 10 months ago
- SMET: Semantic Mapping of CVE to ATT&CK and its Application to Cybersecurity ☆48 · Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆50 · Updated last year
- Extracting Attack Behavior from Threat Reports ☆77 · Updated 2 years ago
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆28 · Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆52 · Updated 6 months ago