RiskySignal / record_what_i_read
AI Model Security Reading Notes
☆ 40 · Updated 6 months ago
Alternatives and similar repositories for record_what_i_read
Users interested in record_what_i_read are comparing it to the repositories listed below.
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… (☆105, updated 11 months ago)
- Code for the paper "The Philosopher’s Stone: Trojaning Plugins of Large Language Models" (☆22, updated last year)
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access (☆43, updated 3 months ago)
- The official repository of the paper "The Digital Cybersecurity Expert: How Far Have We Come?", presented at IEEE S&P 2025 (☆21, updated 3 months ago)
- This GitHub repository summarizes research papers on AI security from the four top academic security conferences (☆148, updated 3 months ago)
- A curated list of awesome resources on LLM supply chain security, including papers, security reports, and CVEs (☆86, updated 7 months ago)
- Simple PyTorch implementations of BadNets on MNIST and CIFAR-10 (☆184, updated 2 years ago)
- A curated list of malware-related papers (☆32, updated last year)
- A curated list of machine learning security & privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… (☆292, updated 9 months ago)
- This resource mainly collects papers related to APT attacks, including APT traceability, APT knowledge graph construction, APT malicious sa… (☆214, updated last year)
- This project aims to consolidate and share high-quality resources and tools across the cybersecurity domain (☆251, updated 2 weeks ago)
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… (☆51, updated 5 months ago)
- A collection of security papers from top-tier publications (☆53, updated last month)
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) (☆34, updated 2 months ago)
- Machine Learning & Security Seminar @ Purdue University (☆25, updated 2 years ago)
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… (☆57, updated 4 months ago)
- Safety at Scale: A Comprehensive Survey of Large Model Safety (☆191, updated 7 months ago)
- A TensorFlow API analysis and malicious model detection tool (☆34, updated 3 months ago)
- Seminar 2022 (☆21, updated 2 months ago)
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System (☆32, updated 10 months ago)