lancopku / FedMNMT
[Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapter
☆12 · Updated 2 years ago
Alternatives and similar repositories for FedMNMT
Users interested in FedMNMT are comparing it to the repositories listed below.
- Code for the paper "Pretrained Models for Multilingual Federated Learning" at NAACL 2022 ☆11 · Updated 3 years ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆57 · Updated 2 years ago
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆37 · Updated last year
- ☆16 · Updated last year
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 3 years ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆46 · Updated last year
- Code for the paper "What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selecti… ☆24 · Updated last year
- ☆23 · Updated 2 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated 2 years ago
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- Code for the paper "Mehta, S. V., Patil, D., Chandar, S., & Strubell, E. (2023). An Empirical Investigation of the Role of Pre-training i… ☆17 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 3 years ago
- Code for LLM_Catastrophic_Forgetting via SAM. ☆11 · Updated last year
- Code for the NeurIPS 2020 paper "Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks" ☆21 · Updated 3 years ago
- DP-Rewrite: Towards Reproducibility and Transparency in Differentially Private Text Rewriting ☆15 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- ☆80 · Updated 3 years ago
- ☆25 · Updated 4 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆37 · Updated 6 months ago
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · Updated last year
- Official implementation of Unweighted Data Subsampling via Influence Function (AAAI 2020) ☆64 · Updated 4 years ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 7 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆29 · Updated last year
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆82 · Updated last year
- Code for "Training Neural Networks with Fixed Sparse Masks" (NeurIPS 2021). ☆59 · Updated 3 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated last year