NUS-HPC-AI-Lab / oh-my-server
☆30 · Updated last year
Alternatives and similar repositories for oh-my-server
Users who are interested in oh-my-server are comparing it to the libraries listed below.
- ☆74 · Updated 4 years ago
- ☆42 · Updated 2 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆67 · Updated 3 months ago
- Accuracy 77%. Large batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 4 years ago
- Performance benchmarking with ColossalAI ☆39 · Updated 2 years ago
- Complete GPU residency for ML. ☆17 · Updated last week
- ☆84 · Updated 3 years ago
- A simple calculation for LLM MFU (see the sketch after this list). ☆38 · Updated 3 months ago
- pytorch-profiler ☆51 · Updated 2 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated last year
- Memory footprint reduction for transformer models ☆11 · Updated 2 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆113 · Updated last week
- Estimate MFU for DeepSeekV3 ☆24 · Updated 5 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆39 · Updated 2 years ago
- ☆105 · Updated 10 months ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆24 · Updated last week
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆112 · Updated last year
- Allow torch tensor memory to be released and resumed later ☆40 · Updated last week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆210 · Updated last week
- VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework ☆355 · Updated last month
- Patch convolution to avoid large GPU memory usage of Conv2D ☆88 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆127 · Updated 5 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 10 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 7 months ago
- PyTorch implementation of LAMB for ImageNet/ResNet-50 training ☆13 · Updated 4 years ago
- Quantized Attention on GPU ☆44 · Updated 7 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆59 · Updated last week
- Efficient 2:4 sparse training algorithms and implementations ☆54 · Updated 6 months ago
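Two of the entries above (the simple LLM MFU calculator and the DeepSeek-V3 MFU estimator) revolve around the same arithmetic: Model FLOPs Utilization (MFU) is the achieved training throughput in FLOPs per second divided by the hardware's peak. The sketch below is not taken from either repository; it assumes the common ~6 FLOPs-per-parameter-per-token approximation for transformer training, and the model size, step time, GPU count, and peak FLOP/s figures are illustrative placeholders.

```python
def estimate_mfu(n_params: float, tokens_per_step: float,
                 step_time_s: float, peak_flops: float) -> float:
    """Rough Model FLOPs Utilization (MFU) estimate for transformer training.

    Uses the common ~6 FLOPs per parameter per token approximation for a
    forward + backward pass; attention FLOPs are ignored for brevity.
    """
    achieved_flops_per_s = 6 * n_params * tokens_per_step / step_time_s
    return achieved_flops_per_s / peak_flops


# Illustrative numbers (assumptions, not measurements): a 7B-parameter model,
# 4M tokens per optimizer step, 30 s per step, on 64 GPUs with a nominal
# 312 TFLOP/s of BF16 peak each.
mfu = estimate_mfu(
    n_params=7e9,
    tokens_per_step=4_000_000,
    step_time_s=30.0,
    peak_flops=64 * 312e12,
)
print(f"MFU ~ {mfu:.2%}")  # roughly 28% with the numbers above
```

More careful estimators (such as the ones listed above) typically also count attention FLOPs and use the exact per-layer shapes of the target model; the 6·N·T rule of thumb is only a first-order approximation.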