NUS-HPC-AI-Lab / oh-my-server
☆31 · Updated last year
Alternatives and similar repositories for oh-my-server:
Users interested in oh-my-server are comparing it to the libraries listed below.
- Performance benchmarking with ColossalAI ☆39 · Updated 2 years ago
- ☆42 · Updated 2 years ago
- ☆72 · Updated 4 years ago
- Memory footprint reduction for transformer models ☆11 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆88 · Updated last week
- Accuracy 77%. Large-batch deep learning optimizer LARS for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Updated 3 years ago
- PyTorch implementation of LAMB for ImageNet/ResNet-50 training ☆13 · Updated 3 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆20 · Updated 3 weeks ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆86 · Updated 3 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆62 · Updated last month
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆51 · Updated last year
- Quantized Attention on GPU ☆45 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆120 · Updated 4 months ago
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆132 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 5 months ago
- ☆82 · Updated 3 years ago
- pytorch-profiler ☆51 · Updated last year
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆115 · Updated 5 months ago
- ☆27 · Updated 3 years ago
- Examples for the MS-AMP package. ☆29 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- A sparse attention kernel supporting mixed sparse patterns ☆202 · Updated 2 months ago
- High Performance Grouped GEMM in PyTorch ☆29 · Updated 2 years ago
- A parallel VAE that avoids OOM in high-resolution image generation ☆61 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆40 · Updated last week
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆36 · Updated 8 months ago
- ☆35 · Updated 9 months ago
- Python package for rematerialization-aware gradient checkpointing ☆24 · Updated last year
- SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆19 · Updated 7 months ago