NUS-HPC-AI-Lab / oh-my-server
☆30 · Updated 2 years ago
Alternatives and similar repositories for oh-my-server
Users interested in oh-my-server are comparing it to the libraries listed below.
- PyTorch bindings for CUTLASS grouped GEMM. ☆184 · Updated last month
- ☆43 · Updated 3 years ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆161 · Updated 2 weeks ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Updated last week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated last year
- Performance benchmarking with ColossalAI ☆38 · Updated 3 years ago
- Memory footprint reduction for transformer models ☆11 · Updated 3 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago
- ☆77 · Updated 4 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- Pipeline Parallelism Emulation and Visualization ☆77 · Updated last month
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆191 · Updated last week
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆167 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆142 · Updated 8 months ago
- ☆221 · Updated 2 months ago
- Examples for MS-AMP package. ☆30 · Updated 6 months ago
- ☆89 · Updated 3 years ago
- A Survey of Efficient Attention Methods: Hardware-efficient, Sparse, Compact, and Linear Attention ☆278 · Updated 2 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 7 months ago
- ☆47 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆124 · Updated 2 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆133 · Updated 2 years ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆191 · Updated this week
- LLM training technologies developed by kwai ☆70 · Updated 3 weeks ago
- Allow torch tensor memory to be released and resumed later ☆216 · Updated 3 weeks ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆95 · Updated last year
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- A simple calculation for LLM MFU. ☆66 · Updated 5 months ago
- A collection of memory efficient attention operators implemented in the Triton language. ☆287 · Updated last year