CGCL-codes / PathEval
This is an evaluation set for the problem of directed/targeted test input generation. We use it to benchmark the ability of Large Language Models to generate inputs that reach a specified code location or produce a particular result.
☆34 · Updated 10 months ago
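To make the task concrete, here is a minimal, hypothetical sketch (the function below is illustrative, not drawn from the PathEval set): given a program and a marked target location, a model is asked to produce a concrete input whose execution reaches that location.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical directed input-generation task: the goal is to
 * construct an input string whose execution reaches the marked line. */
int parse(const char *s) {
    if (strlen(s) == 4 && s[0] == 'P') {
        if (s[1] == 'a' && s[2] == 't' && s[3] == 'h') {
            return 1;               /* <-- target location to reach */
        }
    }
    return 0;
}

int main(void) {
    /* "Path" is the kind of input a model would be expected to generate. */
    printf("reached target: %d\n", parse("Path"));
    return 0;
}
```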
Alternatives and similar repositories for PathEval
Users interested in PathEval are comparing it to the repositories listed below.
- Tool for LLM-based indirect-call analysis ☆31 · Updated 11 months ago
- ☆90 · Updated 2 years ago
- LLMDFA: Analyzing Dataflow in Code with Large Language Models (NeurIPS 2024) ☆180 · Updated 3 months ago
- For our ISSTA22 paper "DocTer: Documentation-Guided Fuzzing for Testing Deep Learning API Functions" by Danning Xie, Yitong Li, Mijung Ki… ☆39 · Updated 3 years ago
- WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models (OOPSLA 2024) ☆76 · Updated 5 months ago
- The source code of project "LLift" (Enhancing static analysis with LLM) ☆85 · Updated last year
- VulTrigger is a tool for identifying vulnerability-triggering statements across functions and investigating the effectiveness of funct… ☆43 · Updated 2 years ago
- A manually vetted dataset for security vulnerability detection in Java projects ☆88 · Updated 5 months ago
- Research artifact for Oakland (S&P) 2022, "BEACON: Directed Grey-Box Fuzzing with Provable Path Pruning" ☆41 · Updated 2 months ago
- LLMSAN: Sanitizing Large Language Models in Bug Detection with Data-Flow (EMNLP Findings 2024) ☆84 · Updated 3 months ago
- Parsing-based Analyzer ☆69 · Updated 7 months ago
- Two-Level Collaborative Fuzzing for Python Runtimes ☆19 · Updated 2 years ago
- Research artifact for Oakland (S&P) 2024, "Titan: Efficient Multi-target Directed Greybox Fuzzing"