niuliang42 / CodexLeaks
CodexLeaks: Privacy Leaks from Code Generation Language Models in GitHub Copilot
☆11 · Updated 2 years ago
Alternatives and similar repositories for CodexLeaks
Users interested in CodexLeaks are comparing it to the repositories listed below.
- ☆49 · Updated last year
- 🔮 Reasoning for Safer Code Generation; 🔥 winner solution of the Amazon Nova AI Challenge 2025. ☆31 · Updated 3 months ago
- A curated list of trustworthy Generative AI papers, updated daily. ☆75 · Updated last year
- A toolkit to assess data privacy in LLMs (under development). ☆64 · Updated 11 months ago
- Machine Learning & Security Seminar @ Purdue University. ☆25 · Updated 2 years ago
- ☆70 · Updated 9 months ago
- ☆19 · Updated last year
- ☆20 · Updated last year
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]. ☆102 · Updated last year
- ☆36 · Updated last year
- Code for the paper "SrcMarker: Dual-Channel Source Code Watermarking via Scalable Code Transformations" (IEEE S&P 2024). ☆33 · Updated last year
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models". ☆33 · Updated 2 years ago
- ☆21 · Updated last year
- [ICLR 2024] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models. ☆43 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models. ☆218 · Updated 3 weeks ago
- [ICLR 2024 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer. ☆46 · Updated last year
- An LLM can Fool Itself: A Prompt-Based Adversarial Attack (ICLR 2024). ☆107 · Updated 10 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access. ☆50 · Updated 6 months ago
- Fingerprint large language models. ☆46 · Updated last year
- [ACL 2024] The official GitHub repo for the paper "The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Pe…". ☆79 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization". ☆75 · Updated 4 months ago
- Simultaneous evaluation of both the functionality and the security of LLM-generated code. ☆28 · Updated 2 weeks ago
- Backdooring Neural Code Search. ☆14 · Updated 2 years ago
- Adversarial Attack for Pre-trained Code Models. ☆10 · Updated 3 years ago
- ☆124 · Updated last year
- ☆16 · Updated 2 years ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆53 · Updated 8 months ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of each paper along with relevant code. ☆68 · Updated last year
- Repository for "Towards Codable Watermarking for Large Language Models". ☆38 · Updated 2 years ago
- Python package for measuring memorization in LLMs. ☆175 · Updated 4 months ago