[NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou
☆21 · Updated Jan 9, 2024
Alternatives and similar repositories for sparse-and-robust-PLM
Users interested in sparse-and-robust-PLM are comparing it to the repositories listed below.
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated Mar 16, 2023
- Code for CascadeBERT, Findings of EMNLP 2021 ☆12 · Updated Mar 30, 2022
- ☆16 · Updated Apr 11, 2022
- ☆16 · Updated May 6, 2021
- Dataset and baseline for COLING 2022 long paper (oral): "ConFiguRe: Exploring Discourse-level Chinese Figures of Speech" ☆13 · Updated Jul 27, 2023
- Code for "SCL-RAI: Span-based Contrastive Learning with Retrieval Augmented Inference for Unlabeled Entity Problem in NER" @COLING-2022 ☆11 · Updated Aug 20, 2022
- [Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapter ☆12 · Updated Sep 4, 2023
- [ICLR 2026] SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs ☆44 · Updated Oct 14, 2025
- Code and data for Distributional Correlation–Aware Knowledge Distillation for Stock Trading Volume Prediction (ECML-PKDD 22) ☆15 · Updated Sep 6, 2022
- Source code for paper "On the Pareto Front of Multilingual Neural Machine Translation" @ NeurIPS 2023 ☆16 · Updated Sep 27, 2023
- Code for our TSD paper "TOKEN is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models" ☆14 · Updated Aug 19, 2022
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization…" ☆16 · Updated Feb 15, 2024
- Vision Large Language Models trained on the M3IT instruction tuning dataset ☆17 · Updated Aug 16, 2023
- ☆15 · Updated Dec 22, 2021
- Source code for paper "ATP: AMRize Than Parse! Enhancing AMR Parsing with PseudoAMRs" @NAACL-2022 ☆14 · Updated Mar 31, 2023
- ☆15 · Updated Mar 26, 2024
- ☆15 · Updated May 23, 2022
- Word acquisition in neural language models (TACL 2022) ☆20 · Updated Jan 30, 2025
- Source code for the ACL-IJCNLP 2021 paper "T-DNA: Taming Pre-trained Language Models with N-gram Representations for Low-Resourc…" ☆19 · Updated Jan 12, 2023
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated Oct 10, 2023
- Code for EMNLP 2021 paper "Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training" ☆20 · Updated Nov 12, 2021
- Source code of our paper "Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation" @ACL-2022 ☆18 · Updated May 19, 2022
- [ACL 2023] Delving into the Openness of CLIP ☆24 · Updated Jan 11, 2023
- ☆25 · Updated Jun 5, 2023
- ☆20 · Updated Dec 16, 2020
- Code for NAACL 2022 long paper "An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling" ☆28 · Updated Nov 9, 2022
- The official implementation of our EMNLP 2022 paper "ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie…" ☆26 · Updated Oct 27, 2022
- ☆25 · Updated Feb 27, 2023
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆94 · Updated Jun 8, 2022
- [ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197) ☆39 · Updated Sep 8, 2025
- Must-read papers on improving efficiency for pre-trained language models ☆105 · Updated Oct 21, 2022
- Robust Self-augmentation for NER with Meta-reweighting ☆29 · Updated Nov 8, 2022
- ☆223 · Updated Jun 2, 2025
- Code for ACL 2021 paper "Accelerating BERT Inference for Sequence Labeling via Early-Exit" ☆28 · Updated Aug 19, 2022
- ☆59 · Updated May 16, 2018
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆25 · Updated Dec 15, 2021
- Group meeting record for Baobao Chang's group at Peking University ☆26 · Updated May 17, 2021
- OFA-Compress is a unified framework which provides OFA model finetuning, distillation, and inference capabilities in Huggingface version, … ☆29 · Updated Sep 22, 2022
- MLPs for Vision and Language Modeling (Coming Soon) ☆27 · Updated Dec 9, 2021