chrisliu298 / awesome-llm-unlearning
A resource repository for machine unlearning in large language models
☆410 · Updated last week
Alternatives and similar repositories for awesome-llm-unlearning
Users interested in awesome-llm-unlearning are comparing it to the repositories listed below.
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆273 · Updated 2 weeks ago
- ☆143 · Updated 2 months ago
- A survey on harmful fine-tuning attacks for large language models ☆178 · Updated last week
- LLM Unlearning ☆162 · Updated last year
- Up-to-date LLM watermarking papers. 🔥🔥🔥 ☆339 · Updated 5 months ago
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆125 · Updated last month
- A curated list of LLM interpretability materials: tutorials, libraries, surveys, papers, blogs, etc. ☆243 · Updated 2 months ago
- Python package for measuring memorization in LLMs. ☆154 · Updated 6 months ago
- ☆58 · Updated 10 months ago
- A toolkit to assess data privacy in LLMs (under development) ☆57 · Updated 5 months ago
- The latest papers on detection of LLM-generated text and code ☆268 · Updated 2 weeks ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆57 · Updated 8 months ago
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆155 · Updated last year
- A resource repository for representation engineering in large language models ☆124 · Updated 6 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆84 · Updated last year
- Awesome LLM Jailbreak academic papers ☆98 · Updated last year
- Accepted by the IJCAI-24 Survey Track ☆205 · Updated 9 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆147 · Updated 2 months ago
- Accepted by ECCV 2024 ☆130 · Updated 7 months ago
- Papers and resources related to the security and privacy of LLMs 🤖 ☆504 · Updated 6 months ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆133 · Updated 10 months ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆342 · Updated 2 months ago
- LLM hallucination paper list ☆316 · Updated last year
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆39 · Updated 6 months ago
- ☆49 · Updated 11 months ago
- Official GitHub page for the paper "Evaluating Deep Unlearning in Large Language Model" ☆14 · Updated last month
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆298 · Updated last year
- Toolkit for evaluating the trustworthiness of generative foundation models. ☆102 · Updated 3 weeks ago
- ☆39 · Updated 7 months ago