Zqzqsb / LabServerDocsLinks
This repo contains docs for FDU NISL servers. It will be maintained by server administrators.
☆17 · Updated last year
Alternatives and similar repositories for LabServerDocs
Users that are interested in LabServerDocs are comparing it to the libraries listed below
- Generate a fun snake game animation from your contributions on both Gitee and GitHub platforms! ☆26 · Updated last year
- An SSG tool for quickly building modern documentation sites. ☆27 · Updated last year
- A tool that is 100% programmed in bash, designed to simplify the work of operations and maintenance personnel. ☆31 · Updated last year
- Simulator. ☆103 · Updated 4 months ago
- Composite Backdoor Attacks Against Large Language Models ☆16 · Updated last year
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆18 · Updated 8 months ago
- ☆20 · Updated 11 months ago
- Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources. ☆376 · Updated this week
- This is the code repository of our submission: Understanding the Dark Side of LLMs' Intrinsic Self-Correction. ☆62 · Updated 8 months ago
- ☆40 · Updated 9 months ago
- Global AI Safety and Governance: Never Compromise to Vulnerabilities ☆29 · Updated this week
- The code for paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆55 · Updated 7 months ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆41 · Updated 10 months ago
- ☆16 · Updated 3 months ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆185 · Updated 6 months ago
- ☆223 · Updated 3 weeks ago
- A list of recent papers about adversarial learning ☆204 · Updated last week
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆187 · Updated 6 months ago
- This GitHub repository summarizes a list of research papers on AI security from the four top academic conferences. ☆147 · Updated 3 months ago
- ☆31 · Updated 5 months ago
- LLM for Index Recommendation ☆11 · Updated 2 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆217 · Updated this week
- GPTuner is a manual-reading database tuning system leveraging domain knowledge automatically and extensively to enhance knob tuning proces… ☆114 · Updated 2 months ago
- [MM'23] ProTegO: Protect Text Content against OCR Extraction Attack ☆12 · Updated last year
- A toolbox for backdoor attacks. ☆22 · Updated 2 years ago
- A comprehensive toolbox for model inversion attacks and defenses, which is easy to get started with. ☆183 · Updated 5 months ago
- A survey on harmful fine-tuning attacks for large language models ☆205 · Updated this week
- ☆25 · Updated last year
- ☆12 · Updated last year
- An official implementation of "Rethinking Graph Backdoor Attacks: A Distribution-Preserving Perspective" (KDD 2024) ☆12 · Updated 11 months ago