NUS-HPC-AI-Lab / oh-my-server
☆30 · Updated last year
Alternatives and similar repositories for oh-my-server
Users interested in oh-my-server are comparing it to the repositories listed below.
- ☆74 · Updated 4 years ago
- Performance benchmarking with ColossalAI ☆39 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆63 · Updated 2 months ago
- nnScaler: Compiling DNN models for parallel training ☆114 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆93 · Updated last week
- Memory footprint reduction for transformer models ☆11 · Updated 2 years ago
- GitHub mirror of the triton-lang/triton repo. ☆37 · Updated last week
- ☆84 · Updated 3 years ago
- Quantized Attention on GPU ☆44 · Updated 6 months ago
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆51 · Updated 2 years ago
- High-performance grouped GEMM in PyTorch ☆30 · Updated 3 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆112 · Updated last year
- Accuracy 77%. Large-batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 4 years ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated last year
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated last year
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆23 · Updated last month
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated last month
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 9 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆86 · Updated 2 years ago
- Automated parallelization system and infrastructure for multiple ecosystems ☆79 · Updated 6 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆87 · Updated 4 months ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 3 weeks ago
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 11 months ago
- A simple calculator for LLM MFU (model FLOPs utilization). ☆38 · Updated 3 months ago
- Python package for rematerialization-aware gradient checkpointing ☆24 · Updated last year
- ☆144 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆125 · Updated 5 months ago
- ☆12 · Updated last year
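One entry above is an MFU calculator for LLMs. As background, MFU (model FLOPs utilization) is commonly estimated with the "~6N FLOPs per trained token" approximation for a dense transformer with N parameters; the sketch below illustrates that standard formula, not the linked repository's actual code, and all the concrete numbers (model size, throughput, peak FLOPs) are assumed for illustration.

```python
# Rough MFU estimate for dense-transformer training, using the common
# approximation of ~6 * N FLOPs per trained token (forward + backward).
# This is a generic sketch; it does not reproduce any specific repo's code.

def mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """MFU = achieved FLOPs/s divided by the hardware's peak FLOPs/s."""
    achieved_flops_per_sec = 6.0 * n_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops

# Assumed example: a 7B-parameter model training at 3,000 tokens/s per GPU
# on hardware with 312 TFLOP/s of peak BF16 compute.
print(f"MFU: {mfu(7e9, 3000, 312e12):.1%}")
```

An MFU well below 100% is normal; memory bandwidth, communication, and non-matmul work all keep the achieved rate under the hardware peak.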