guanchuwang / Taylor-Unswift
☆22 · Updated 5 months ago
Alternatives and similar repositories for Taylor-Unswift:
Users interested in Taylor-Unswift are comparing it to the repositories listed below.
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆67 · Updated 2 weeks ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆71 · Updated 2 weeks ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆40 · Updated 3 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆18 · Updated 2 months ago
- A survey on harmful fine-tuning attacks for large language models ☆147 · Updated last week
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆17 · Updated 6 months ago
- Official repository for the paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆81 · Updated 8 months ago
- A toolkit to assess data privacy in LLMs (under development) ☆54 · Updated 2 months ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆91 · Updated 8 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 9 months ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆26 · Updated 3 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆57 · Updated 2 months ago
- LLM Unlearning ☆145 · Updated last year
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆55 · Updated 2 months ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆38 · Updated 9 months ago
- "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024. ☆24 · Updated last year
- Accepted LLM papers at NeurIPS 2024 ☆33 · Updated 5 months ago
- Codebase for decoding compressed trust. ☆23 · Updated 10 months ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆68 · Updated last year