arumaekawa / DiLM
Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted to NAACL 2024 Findings).
☆21 · Updated 4 months ago
Alternatives and similar repositories for DiLM
Users interested in DiLM are comparing it to the libraries listed below.
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆26 · Updated 5 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · Updated 9 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆83 · Updated 7 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆33 · Updated 7 months ago
- ☆22 · Updated 3 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆45 · Updated 8 months ago
- ☆86 · Updated 2 years ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆44 · Updated 7 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆28 · Updated 2 months ago
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆21 · Updated 3 years ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- ☆69 · Updated 3 years ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 8 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba…" ☆28 · Updated 3 months ago
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆102 · Updated 2 years ago
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: The Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆15 · Updated 8 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆70 · Updated 8 months ago
- ☆13 · Updated 4 months ago
- A Task of Fictitious Unlearning for VLMs ☆19 · Updated 2 months ago
- ☆20 · Updated 6 months ago
- Awesome-Low-Rank-Adaptation ☆104 · Updated 8 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆45 · Updated 8 months ago
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆11 · Updated 3 months ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆34 · Updated 5 months ago
- Codebase for decoding compressed trust. ☆24 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆78 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆39 · Updated 2 years ago
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆22 · Updated 9 months ago
- Official code for "Evaluations of Machine Learning Privacy Defenses are Misleading" (https://arxiv.org/abs/2404.17399) ☆10 · Updated last year