IST-DASLab / torch_cgx
PyTorch distributed backend extension with compression support
☆17 · Updated 10 months ago
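Since torch_cgx extends torch.distributed with an additional process-group backend, a typical workflow looks like the sketch below. This is a minimal illustration only, assuming the extension registers a backend named "cgx" when imported; the actual backend name, build steps, and compression settings are documented in the repository README. Launch with torchrun so that RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR, and MASTER_PORT are set.

```python
# Minimal usage sketch, not the repository's documented API.
# Assumption: importing torch_cgx registers a "cgx" process-group backend,
# so DDP's gradient allreduce runs through the compression-enabled extension.
import os

import torch
import torch.distributed as dist
import torch_cgx  # assumed import side effect: registers the backend

def main() -> None:
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    local_rank = int(os.environ.get("LOCAL_RANK", rank))
    torch.cuda.set_device(local_rank)

    # Same call as for "nccl" or "gloo"; only the backend name changes.
    dist.init_process_group(backend="cgx", rank=rank, world_size=world_size)

    model = torch.nn.Linear(1024, 1024).cuda()
    ddp_model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank]
    )

    x = torch.randn(32, 1024, device="cuda")
    ddp_model(x).sum().backward()  # gradients are allreduced via the backend

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The point of shipping compression as a backend-level extension is that the rest of the DDP training loop stays unchanged; only the backend string passed to init_process_group differs.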
Alternatives and similar repositories for torch_cgx
Users interested in torch_cgx are comparing it to the libraries listed below.
- Extensible collectives library in Triton · ☆95 · Updated 10 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface · ☆281 · Updated 3 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆165 · Updated 2 months ago
- A schedule language for large model training · ☆152 · Updated 5 months ago
- Triton-based implementation of Sparse Mixture of Experts · ☆263 · Updated 4 months ago
- PyTorch bindings for CUTLASS grouped GEMM · ☆142 · Updated 8 months ago
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning · ☆25 · Updated 8 months ago
- Collection of kernels written in the Triton language · ☆178 · Updated 2 weeks ago
- Boosting 4-bit inference kernels with 2:4 Sparsity · ☆93 · Updated last year
- ☆159 · Updated last year
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… · ☆52 · Updated 2 years ago
- ☆60 · Updated last year
- Fast low-bit matmul kernels in Triton · ☆427 · Updated last week
- QJL: 1-Bit Quantized JL transform for KV Cache Quantization with Zero Overhead · ☆31 · Updated last year
- Triton-based Symmetric Memory operators and examples · ☆81 · Updated 3 weeks ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" · ☆80 · Updated 7 months ago
- ☆164 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆276 · Updated 6 months ago
- FTPipe and related pipeline model parallelism research · ☆44 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Updated last year
- A resilient distributed training framework · ☆96 · Updated last year
- ☆104 · Updated last year
- FlashInfer Bench @ MLSys 2026: Building AI agents to write high-performance GPU kernels · ☆84 · Updated 2 weeks ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models · ☆70 · Updated 10 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats · ☆340 · Updated 7 months ago
- ☆77 · Updated 4 years ago
- Integer operators on GPUs for PyTorch · ☆237 · Updated 2 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity · ☆233 · Updated 2 years ago
- ☆93 · Updated 2 months ago
- Microsoft Collective Communication Library · ☆66 · Updated last year