chrisliu298 / awesome-llm-unlearning
A resource repository for machine unlearning in large language models
☆481 · Updated last month
Alternatives and similar repositories for awesome-llm-unlearning
Users who are interested in awesome-llm-unlearning are comparing it to the repositories listed below.
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆363 · Updated last month
- A survey on harmful fine-tuning attacks for large language models ☆206 · Updated this week
- UP-TO-DATE LLM Watermark paper. 🔥🔥🔥 ☆354 · Updated 9 months ago
- LLM Unlearning ☆174 · Updated last year
- ☆166 · Updated last month
- A resource repository for representation engineering in large language models ☆135 · Updated 10 months ago
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆269 · Updated 5 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆148 · Updated 4 months ago
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆117 · Updated 3 weeks ago
- ☆28 · Updated last year
- The latest papers on detection of LLM-generated text and code ☆277 · Updated 2 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆525 · Updated last week
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆320 · Updated last year
- ☆51 · Updated last year
- Python package for measuring memorization in LLMs. ☆165 · Updated 2 months ago
- Accepted by IJCAI-24 Survey Track ☆214 · Updated last year
- A curated list of resources for activation engineering ☆102 · Updated 3 months ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆533 · Updated 3 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆84 · Updated 5 months ago
- ☆60 · Updated last year
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- A toolkit to assess data privacy in LLMs (under development) ☆62 · Updated 8 months ago
- ☆148 · Updated 2 years ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆143 · Updated last year
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆594 · Updated 2 months ago
- ☆44 · Updated 3 months ago
- Code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆55 · Updated 7 months ago
- Accepted by ECCV 2024 ☆151 · Updated 11 months ago
- Official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆46 · Updated 10 months ago
- ☆623 · Updated this week