RenShuhuai-Andy / gpu_lurker
A server GPU monitoring program that sends a WeChat notification when GPU properties meet preset conditions; a minimal sketch of the pattern is shown below.
☆30 · Updated 3 years ago
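The core pattern is a simple poll-and-notify loop: query GPU state on an interval and push a message once a preset condition holds. Below is a minimal sketch of that pattern, assuming pynvml for the NVML queries and a hypothetical ServerChan-style WeChat webhook; `WEBHOOK_URL`, `FREE_MEM_THRESHOLD_MB`, and `free_memory_mb` are illustrative names, not the repository's actual interface.

```python
# Minimal poll-and-notify sketch; not the repo's actual code.
import time
import requests
import pynvml

WEBHOOK_URL = "https://example.com/wechat-push"  # hypothetical push endpoint
FREE_MEM_THRESHOLD_MB = 20_000  # alert once this much GPU memory is free
POLL_INTERVAL_S = 60


def free_memory_mb(handle):
    """Return free memory of one GPU in MiB."""
    info = pynvml.nvmlDeviceGetMemoryInfo(handle)
    return info.free // (1024 * 1024)


def main():
    pynvml.nvmlInit()
    try:
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        while True:
            for idx, handle in enumerate(handles):
                free_mb = free_memory_mb(handle)
                if free_mb >= FREE_MEM_THRESHOLD_MB:
                    # Condition met: fire the WeChat push and stop watching.
                    requests.post(WEBHOOK_URL, data={
                        "title": f"GPU {idx} is free",
                        "desp": f"{free_mb} MiB available",
                    }, timeout=10)
                    return
            time.sleep(POLL_INTERVAL_S)
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    main()
```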
Alternatives and similar repositories for gpu_lurker:
Users interested in gpu_lurker are comparing it to the libraries listed below.
- my commonly-used tools ☆51 · Updated 2 months ago
- [2024-ACL] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- ☆17 · Updated last year
- ☆33 · Updated 3 years ago
- The code and data for the paper JiuZhang3.0 ☆42 · Updated 9 months ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 · Updated 3 years ago
- Mixture of Attention Heads ☆42 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- ☆39 · Updated last year
- 😎 A simple and easy-to-use toolkit for GPU scheduling. ☆42 · Updated 3 years ago
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- ☆46 · Updated this week
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- Feeling confused about superalignment? Here is a reading list. ☆42 · Updated last year
- ☆34 · Updated 2 weeks ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆58 · Updated 3 years ago
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆36 · Updated 11 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- Repo for the ACL 2023 outstanding paper "Do PLMs Know and Understand Ontological Knowledge?" ☆31 · Updated last year
- ☆61 · Updated 2 years ago
- ☆98 · Updated 5 months ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆82 · Updated 2 years ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated last year
- Must-read papers on improving efficiency for pre-trained language models. ☆103 · Updated 2 years ago
- ☆14 · Updated last year
- Self-adaptive in-context learning ☆43 · Updated last year
- Released code for our ICLR'23 paper. ☆64 · Updated 2 years ago