jugechengzi / Rationalization-MGR
ACL 2023 *oral* paper "MGR: Multi-generator based Rationalization"
☆10 · Updated 8 months ago
Alternatives and similar repositories for Rationalization-MGR
Users interested in Rationalization-MGR are comparing it to the repositories listed below
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆42 · Updated 8 months ago
- LLM Unlearning ☆172 · Updated last year
- A survey on harmful fine-tuning attacks for large language models ☆193 · Updated 3 weeks ago
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep ☆142 · Updated 2 months ago
- A survey of privacy problems in Large Language Models (LLMs). Contains summaries of the corresponding papers along with relevant code ☆67 · Updated last year
- [ICLR 2025] A Closer Look at Machine Unlearning for Large Language Models ☆33 · Updated 7 months ago
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆318 · Updated 3 weeks ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- Accepted by ECCV 2024 ☆144 · Updated 9 months ago
- ☆11 · Updated 4 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆45 · Updated 8 months ago
- Awesome SAE papers ☆39 · Updated last month
- Official implementation repository for the paper Towards General Conceptual Model Editing via Adversarial Representation Engineering. ☆18 · Updated 7 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 9 months ago
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" ☆57 · Updated 8 months ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆61 · Updated last month
- ☆49 · Updated 11 months ago
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆48 · Updated 8 months ago
- ☆175 · Updated last year
- ☆31 · Updated last month
- ☆54 · Updated 4 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆81 · Updated 3 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆29 · Updated last year
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆17 · Updated 5 months ago
- Accepted by IJCAI-24 Survey Track ☆207 · Updated 10 months ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆31 · Updated 2 months ago
- ☆153 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 4 months ago
- ☆25 · Updated last month
- [ACL 2024] SALAD benchmark & MD-Judge ☆155 · Updated 4 months ago