zzwjames / FailureLLMUnlearning
An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025)
☆29 · Updated 5 months ago
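As a rough illustration of the effect the paper's title refers to (this is not code from the repository), the sketch below loads a hypothetical unlearned checkpoint twice with Hugging Face Transformers: once in full precision and once with 4-bit bitsandbytes quantization, then compares their answers to a prompt that targets supposedly forgotten content. The checkpoint path and prompt are placeholders.

```python
# Minimal sketch (not from this repository): compare a full-precision unlearned
# model against the same weights loaded with 4-bit quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_PATH = "path/to/unlearned-checkpoint"   # placeholder: an unlearned LLM checkpoint
PROMPT = "Question probing knowledge the model was supposed to forget:"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)

def answer(model, prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Full-precision unlearned model: expected to fail to recall the forgotten facts.
fp_model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.bfloat16, device_map="auto"
)
print("full precision:", answer(fp_model, PROMPT))

# Same weights, 4-bit quantized: the paper reports that the "forgotten" knowledge
# can resurface after quantization.
quant_cfg = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
q_model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, quantization_config=quant_cfg, device_map="auto"
)
print("4-bit quantized:", answer(q_model, PROMPT))
```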
Alternatives and similar repositories for FailureLLMUnlearning
Users interested in FailureLLMUnlearning are comparing it to the libraries listed below.
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆28 · Updated 8 months ago
- ☆22 · Updated 2 months ago
- A holistic benchmark for LLM abstention ☆41 · Updated 2 weeks ago
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆42 · Updated 9 months ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆81 · Updated 9 months ago
- ☆18 · Updated this week
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 5 months ago
- ☆34 · Updated 6 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆31 · Updated 6 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆39 · Updated 9 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆39 · Updated 3 months ago
- AIR-Bench 2024 is a safety benchmark that aligns with emerging government regulations and company policies ☆23 · Updated 11 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago
- ☆20 · Updated 3 months ago
- ☆28 · Updated last year
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 2 months ago
- [ACL'25] Mosaic-IT: Cost-Free Compositional Data Synthesis for Instruction Tuning ☆19 · Updated last month
- Implementation for the paper "Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning" ☆10 · Updated 6 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆112 · Updated last month
- ☆24 · Updated 5 months ago
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 5 months ago
- ☆38 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 10 months ago
- ☆20 · Updated 11 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ☆47 · Updated 5 months ago
- ☆15 · Updated last year
- [ICLR 2024] Unveiling the Pitfalls of Knowledge Editing for Large Language Models ☆22 · Updated last year