CUHK-Shenzhen-SE / D4C
[ICSE'25] Aligning the Objective of LLM-based Program Repair
☆14 · Updated 2 months ago
Alternatives and similar repositories for D4C
Users interested in D4C are comparing it to the repositories listed below.
- WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models (OOPSLA 2024) ☆57 · Updated 5 months ago
- AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection [ASE'23] ☆38 · Updated last year
- A novel approach to improving the safety of large language models, enabling them to transition from an unsafe to a safe state ☆59 · Updated 3 months ago
- CodeGuard+: Constrained Decoding for Secure Code Generation ☆11 · Updated 9 months ago
- Bugs in Pods: Understanding Bugs in Container Runtime Systems (ISSTA 2024) ☆19 · Updated 9 months ago
- A toolkit for testing and improving named entity recognition [ESEC/FSE'23] ☆11 · Updated last year
- A lightweight tool for detecting bugs in graph database management systems ☆14 · Updated last year
- Free Lunch for Testing: Fuzzing Deep-Learning Libraries from Open Source (ICSE'22) ☆77 · Updated 2 years ago
- Code and results for the paper "On the Resilience of Multi-Agent Systems with Malicious Agents" ☆20 · Updated 3 months ago
- [ICSE 2023] Differentiable interpretation and failure-inducing input generation for neural network numerical bugs ☆12 · Updated last year
- [TOSEM 2023] A Survey of Learning-based Automated Program Repair ☆71 · Updated last year
- Artifact for the ICSE 2021 paper "Are Machine Learning Cloud APIs Used Correctly? (#421)" ☆15 · Updated 4 years ago
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆14 · Updated last month
- TensorFlow API analysis tool and malicious model detection tool ☆27 · Updated 2 months ago
- Code for the AAAI 2023 paper "CodeAttack: Code-Based Adversarial Attacks for Pre-Trained Programming Language Models" ☆29 · Updated 2 years ago
- Repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆77 · Updated 10 months ago
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆36 · Updated 2 weeks ago
- Simultaneous evaluation of both the functionality and security of LLM-generated code ☆15 · Updated 4 months ago
- Structure-Invariant Testing for Machine Translation [ICSE'20] ☆16 · Updated 4 years ago
- The tool released with the ASE'23 paper "Generative Type Inference for Python" ☆26 · Updated last year
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 ☆57 · Updated this week
- [ICLR'25] OpenRCA: Can Large Language Models Locate the Root Cause of Software Failures? ☆55 · Updated 2 weeks ago
- Fuzzing Automatic Differentiation in Deep-Learning Libraries (ICSE'23) ☆22 · Updated last year