IST-DASLab / torch_cgx
PyTorch distributed backend extension with compression support
☆16 · Updated 9 months ago
Alternatives and similar repositories for torch_cgx
Users interested in torch_cgx are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆91 · Updated 8 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆267 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆413 · Updated last week
- ☆152 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 6 months ago
- A schedule language for large model training ☆152 · Updated 4 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆230 · Updated 2 years ago
- Collection of kernels written in the Triton language ☆173 · Updated 8 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆257 · Updated 2 months ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆69 · Updated 9 months ago
- FTPipe and related pipeline model parallelism research. ☆43 · Updated 2 years ago
- ☆80 · Updated 2 months ago
- ☆77 · Updated 4 years ago
- ☆99 · Updated last year
- A resilient distributed training framework ☆96 · Updated last year
- Cataloging released Triton kernels. ☆278 · Updated 3 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆277 · Updated 5 months ago
- GitHub mirror of the triton-lang/triton repo. ☆109 · Updated this week
- ☆83 · Updated 3 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆331 · Updated last year
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆265 · Updated 2 months ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆24 · Updated 7 months ago
- Microsoft Collective Communication Library ☆66 · Updated last year
- [IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. ☆52 · Updated 2 years ago
- ☆268 · Updated this week
- ☆60 · Updated last year
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆36 · Updated 3 months ago
- A minimal cache manager for PagedAttention, on top of llama3. ☆127 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 3 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆148 · Updated last month