lancopku / FedMNMT
[Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapter
☆12 · Updated last year
Alternatives and similar repositories for FedMNMT:
Users interested in FedMNMT are comparing it to the repositories listed below
- Code for the paper "Pretrained Models for Multilingual Federated Learning" at NAACL 2022 ☆10 · Updated 2 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 2 years ago
- Code for paper: “What Data Benefits My Classifier?” Enhancing Model Performance and Interpretability through Influence-Based Data Selecti… ☆22 · Updated 9 months ago
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆38 · Updated 4 months ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆55 · Updated last year
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 8 months ago
- [ICLR 2025] Code & Data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆12 · Updated 8 months ago
- ☆40 · Updated last year
- ☆17 · Updated 2 months ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆11 · Updated 9 months ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- ☆21 · Updated last year
- ☆15 · Updated 8 months ago
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated last year
- [ACL 2023] Code for paper “Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation” (https://arxiv.org/abs/2305.… ☆38 · Updated last year
- Codebase for decoding compressed trust. ☆23 · Updated 9 months ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆27 · Updated 10 months ago
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆18 · Updated 5 months ago
- ☆15 · Updated last year
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆15 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆30 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 8 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated 9 months ago
- ☆24 · Updated 3 years ago
- ☆32 · Updated last year
- This is the repository for "Model Merging by Uncertainty-Based Gradient Matching", ICLR 2024. ☆26 · Updated 9 months ago
- Code for NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆15 · Updated last year