RenShuhuai-Andy / gpu_lurker
A server GPU monitoring program that sends a notification via WeChat when GPU properties meet preset conditions
☆32 · Updated 4 years ago
Alternatives and similar repositories for gpu_lurker
Users interested in gpu_lurker are comparing it to the libraries listed below
- my commonly-used tools ☆61 · Updated 7 months ago
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- [2024-ACL] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- ☆17 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- Must-read papers on improving efficiency for pre-trained language models. ☆105 · Updated 2 years ago
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆77 · Updated 2 years ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- ☆32 · Updated 3 years ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆25 · Updated 3 years ago
- 😎 A simple and easy-to-use toolkit for GPU scheduling. ☆45 · Updated 3 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆38 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- ☆104 · Updated last month
- Visual and Embodied Concepts evaluation benchmark ☆21 · Updated last year
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆48 · Updated 3 years ago
- [NAACL 2024] A Synthetic, Scalable and Systematic Evaluation Suite for Large Language Models ☆33 · Updated last year
- Feeling confused about super alignment? Here is a reading list ☆43 · Updated last year
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 3 years ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆40 · Updated 11 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 10 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆157 · Updated last year
- ☆14 · Updated last year
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆83 · Updated last year
- [ACL 2025, Main Conference, Oral] Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆30 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆166 · Updated 6 months ago