Spico197 / watchmen
😎 A simple and easy-to-use toolkit for GPU scheduling.
☆42 Updated 3 years ago
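Watchmen-style schedulers and the GPU monitors listed below generally poll NVML for device state and act once a preset condition is met. Below is a minimal sketch of that polling loop, assuming the nvidia-ml-py (pynvml) bindings and illustrative threshold/interval values; it is not watchmen's own API.

```python
# Minimal sketch (not watchmen's actual API) of condition-based GPU polling,
# using the NVML bindings from the nvidia-ml-py package (pynvml).
# The 20 GiB threshold and 60 s interval are illustrative values only.
import time
import pynvml

FREE_MEM_THRESHOLD = 20 * 1024**3  # required free memory in bytes (assumed value)
POLL_INTERVAL = 60                 # seconds between checks (assumed value)

def wait_for_free_gpu() -> int:
    """Block until some GPU has enough free memory, then return its index."""
    pynvml.nvmlInit()
    try:
        while True:
            for idx in range(pynvml.nvmlDeviceGetCount()):
                handle = pynvml.nvmlDeviceGetHandleByIndex(idx)
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
                if mem.free >= FREE_MEM_THRESHOLD:
                    return idx  # a notifier (e.g. a WeChat webhook) could fire here
            time.sleep(POLL_INTERVAL)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    print(f"GPU {wait_for_free_gpu()} is available")
```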
Alternatives and similar repositories for watchmen:
Users interested in watchmen are comparing it to the libraries listed below.
- A server GPU monitoring program that sends a notification via WeChat when GPU properties meet preset conditions ☆29 Updated 3 years ago
- ☆72 Updated 2 years ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆56 Updated 3 years ago
- ☆32 Updated 3 years ago
- Code for EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆29 Updated last year
- Must-read papers on improving efficiency for pre-trained language models. ☆102 Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆80 Updated last year
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆59 Updated 2 years ago
- Mixture of Attention Heads ☆41 Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆117 Updated 10 months ago
- A light-weight script for maintaining a LOT of machine learning experiments. ☆90 Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 Updated last year
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 Updated 2 years ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 Updated 3 years ago
- 🎮 A toolkit for Relation Extraction and more... ☆24 Updated 2 months ago
- Code for ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning". ☆83 Updated 2 years ago
- Code for ACL 2023 paper titled "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 Updated last year
- A simple trial of Ladder Side-Tuning on CLUE ☆19 Updated 2 years ago
- ICLR 2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 Updated last year
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆35 Updated 3 months ago
- ☆49 Updated 2 weeks ago
- Code for promptCSE, EMNLP 2022 ☆11 Updated last year
- My commonly used tools ☆48 Updated last week
- Source code for COLING 2022 paper "Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models" ☆24 Updated 2 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 Updated last year
- ☆46 Updated last week
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆48 Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆43 Updated 2 years ago
- Implementation of ICLR 2022 paper "Enhancing Cross-lingual Transfer by Manifold Mixup". ☆21 Updated 2 years ago
- Released code for our ICLR'23 paper. ☆63 Updated last year