arumaekawa / dataset-distillation-with-attention-labels
Implementation of "Dataset Distillation with Attention Labels for fine-tuning BERT" (accepted by ACL2023 main (short))
☆22Updated last year
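The listing gives only the paper title, so as orientation, here is a minimal, hypothetical sketch of the idea that title names: fine-tuning BERT on distilled examples that carry both soft labels and "attention labels" (stored target attention maps). The function name, argument shapes, and the `alpha` weight are illustrative assumptions, not the repository's actual API.

```python
# Hypothetical sketch: attention-label supervision on distilled data.
# All names and hyperparameters below are assumptions for illustration.
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, output_attentions=True
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def distillation_step(input_ids, attention_mask, soft_labels, attn_labels, alpha=0.5):
    """One update on a distilled batch (illustrative, not the repo's API).

    soft_labels: (batch, num_labels) distilled target label distribution
    attn_labels: (layers, batch, heads, seq, seq) stored attention maps
    alpha:       weight on the attention-matching term (assumed value)
    """
    out = model(input_ids=input_ids, attention_mask=attention_mask)
    # Match predictions to the distilled soft labels (KL over label distributions).
    label_loss = F.kl_div(
        F.log_softmax(out.logits, dim=-1), soft_labels, reduction="batchmean"
    )
    # Match the student's attention maps to the stored attention labels.
    attn = torch.stack(out.attentions)  # (layers, batch, heads, seq, seq)
    attn_loss = F.kl_div(
        torch.log(attn.clamp_min(1e-9)), attn_labels, reduction="batchmean"
    )
    loss = label_loss + alpha * attn_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```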
Alternatives and similar repositories for dataset-distillation-with-attention-labels:
Users interested in dataset-distillation-with-attention-labels are comparing it to the repositories listed below.
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆67 · Updated 4 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆52 · Updated 5 months ago
- ☆41 · Updated 3 weeks ago
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted by NAACL 2024 Findings). ☆17 · Updated 3 weeks ago
- ☆65 · Updated 2 years ago
- A unified, easily extensible repository for LLM unlearning benchmarks (TOFU, MUSE) - enabling new evaluations, methods, and tasks. ☆132 · Updated this week
- Official code implementation of SKU, accepted by ACL 2024 Findings. ☆13 · Updated 2 months ago
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con… ☆39 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆72 · Updated 2 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆50 · Updated 3 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆45 · Updated this week
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆35 · Updated 2 weeks ago
- ☆59 · Updated 2 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆64 · Updated last year
- ☆28 · Updated 8 months ago
- ☆47 · Updated 7 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆23 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆105 · Updated 11 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆62 · Updated 5 months ago
- ☆19 · Updated 7 months ago
- ☆117 · Updated last month
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆37 · Updated 9 months ago
- ☆50 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆40 · Updated 4 months ago
- ☆32 · Updated last year
- Source code for NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆31 · Updated 2 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆29 · Updated 3 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 7 months ago
- Official Repository for Dataset Inference for LLMs ☆32 · Updated 7 months ago