zzwjames / FailureLLMUnlearning
An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025)
☆26 · Updated last month
Alternatives and similar repositories for FailureLLMUnlearning:
Users interested in FailureLLMUnlearning are comparing it to the repositories listed below.
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆33 · Updated 2 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 5 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆76 · Updated 5 months ago
- Knowledge Unlearning for Large Language Models ☆25 · Updated 2 weeks ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 6 months ago
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging ☆20 · Updated last month
- Codebase for decoding compressed trust ☆23 · Updated 11 months ago
- ☆31 · Updated 3 months ago
- ☆13 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆63 · Updated 2 weeks ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆29 · Updated 2 months ago
- ☆20 · Updated last month
- ☆16 · Updated this week
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Model (https://arxiv.org/pdf/2411.02433) ☆25 · Updated 4 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- ☆27 · Updated last year
- Mosaic IT: Enhancing Instruction Tuning with Data Mosaics ☆17 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆48 · Updated last month
- ☆21 · Updated 6 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ☆42 · Updated 2 months ago
- [ICLR 2025] Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated 9 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 5 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆32 · Updated last year
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated 11 months ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆23 · Updated 10 months ago
- [ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models ☆22 · Updated 10 months ago
- ☆28 · Updated last month
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆42 · Updated 5 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆84 · Updated 5 months ago