agiresearch / EmojiCrypt
EmojiCrypt: Prompt Encryption for Secure Communication with Large Language Models
☆22 · Updated last year
Alternatives and similar repositories for EmojiCrypt
Users interested in EmojiCrypt are comparing it to the repositories listed below.
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆226 · Updated this week
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · Updated last year
- A toolkit to assess data privacy in LLMs (under development) ☆67 · Updated last year
- LLM Unlearning ☆181 · Updated 2 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of the corresponding paper along with relevant code ☆69 · Updated last year
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆618 · Updated 7 months ago
- The code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆64 · Updated last year
- Shepherd: A foundational framework enabling federated instruction tuning for large language models ☆249 · Updated 2 years ago
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆107 · Updated last year
- The latest papers on detection of LLM-generated text and code ☆282 · Updated 7 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆160 · Updated last year
- Up-to-date LLM watermarking papers. 🔥🔥🔥 ☆370 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆98 · Updated 3 weeks ago
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆336 · Updated last year
- A curated list of trustworthy Generative AI papers, updated daily ☆75 · Updated last year
- [ICLR24] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆47 · Updated last year
- FedJudge: Federated Legal Large Language Model ☆37 · Updated last year
- Code and data for our paper "Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark"… ☆51 · Updated 2 years ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆109 · Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖 ☆560 · Updated 7 months ago
- [COLING 2025] Official repo of the paper: "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jail… ☆12 · Updated last year
- Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024 ☆19 · Updated 11 months ago
- Privacy-Preserving Prompt Tuning for Large Language Model ☆29 · Updated last year