logic-star-ai / baxbench
☆80 · Updated 2 months ago
Alternatives and similar repositories for baxbench
Users interested in baxbench are comparing it to the repositories listed below.
- ☆125 · Updated last year
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆82 · Updated 2 years ago
- ☆182 · Updated 2 weeks ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆101 · Updated 2 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆389 · Updated last month
- ☆49 · Updated last year
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆131 · Updated last month
- An autonomous LLM-agent for large-scale, repository-level code auditing ☆296 · Updated 3 weeks ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆78 · Updated 3 months ago
- Repository for PrimeVul Vulnerability Detection Dataset ☆208 · Updated last year
- The automated prompt injection framework for LLM-integrated applications. ☆246 · Updated last year
- ☆54 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- CodeGuard+: Constrained Decoding for Secure Code Generation ☆17 · Updated last year
- CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software ☆310 · Updated last year
- Automated Benchmarking of LLM Agents on Real-World Software Security Tasks [NeurIPS 2025] ☆46 · Updated last month
- DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection (RAID 2023) https://surrealyz.github.io/… ☆171 · Updated last year
- ☆15 · Updated last year
- Guardrails for secure and robust agent development ☆377 · Updated 5 months ago
- future-proof vulnerability detection benchmark, based on CVEs in open-source repos ☆64 · Updated this week
- 🔮Reasoning for Safer Code Generation; 🥇Winner Solution of Amazon Nova AI Challenge 2025 ☆34 · Updated 4 months ago
- The goal of this repo is to become a benchmark for pentesting ☆17 · Updated last year
- A Novel Benchmark evaluating the Deep Capability of Vulnerability Detection with Large Language Models ☆32 · Updated 8 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test-generation ☆64 · Updated 2 weeks ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- This repository contains the replication package of our paper "Assessing the Security of GitHub Copilot’s Generated Code - A Targeted Rep… ☆10 · Updated 2 years ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆54 · Updated 9 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆63 · Updated 7 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆76 · Updated 5 months ago
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆16 · Updated 9 months ago