kkkk-cyber / generate_text
☆43 · Updated 10 months ago
Alternatives and similar repositories for generate_text
Users interested in generate_text are comparing it to the repositories listed below.
- multi-bit language model watermarking (NAACL 2024) ☆15 · Updated last year
- Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024) ☆23 · Updated 10 months ago
- ☆12 · Updated last year
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) ☆1,695 · Updated this week
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,635 · Updated this week
- ☆16 · Updated 5 months ago
- A curated list of machine learning security & privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆297 · Updated 10 months ago
- Composite Backdoor Attacks Against Large Language Models ☆18 · Updated last year
- ☆32 · Updated 6 months ago
- MarkLLM: An Open-Source Toolkit for LLM Watermarking (EMNLP 2024 System Demonstration) ☆630 · Updated 2 weeks ago
- Network Security Course Design (2022), School of Cyber Science and Engineering, Huazhong University of Science and Technology ☆21 · Updated 10 months ago
- A collection of resources on Large Language Model (LLM) watermarking ☆42 · Updated 8 months ago
- ☆223 · Updated 2 months ago
- [TDSC 2024] Official code for our paper "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model" ☆21 · Updated 5 months ago
- The latest papers on detection of LLM-generated text and code ☆278 · Updated 3 months ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆44 · Updated 11 months ago
- This GitHub repository summarizes research papers on AI security from the four top academic security conferences ☆154 · Updated 4 months ago
- Repository for "Towards Codable Watermarking for Large Language Models" ☆38 · Updated 2 years ago
- ☆27 · Updated last year
- ☆14 · Updated 9 months ago
- Academic Cooperation Lab ☆15 · Updated 3 weeks ago
- Collection of papers, tools, and datasets on LLM fairness ☆16 · Updated last year
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models"☆21Updated last month
- Up-to-date list of LLM watermarking papers 🔥🔥🔥 ☆358 · Updated 10 months ago
- ☆25 · Updated 2 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained) ☆270 · Updated 9 months ago
- ☆15 · Updated last year
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆19 · Updated 10 months ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆223 · Updated 3 weeks ago
- ☆82 · Updated last month