IST-DASLab / torch_cgx
PyTorch distributed backend extension with compression support
☆17 · Updated 10 months ago
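To give a sense of what "compression support" in a distributed backend means: extensions like torch_cgx compress gradients before communicating them in collectives such as allreduce. The sketch below is a generic illustration of one common scheme, uniform 8-bit quantization, in plain Python; the function names and details are invented for illustration and do not reflect torch_cgx's actual API or compression algorithms.

```python
# Hypothetical sketch of quantization-based gradient compression, the kind of
# transform a compressed backend applies before an allreduce. Not torch_cgx's
# real implementation.

def compress(grad, bits=8):
    """Quantize a list of floats to integers in [0, 2**bits - 1],
    returning the codes plus the (min, scale) needed to invert them."""
    lo, hi = min(grad), max(grad)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((g - lo) / scale) for g in grad]
    return codes, lo, scale

def decompress(codes, lo, scale):
    """Map integer codes back to approximate float gradients."""
    return [lo + c * scale for c in codes]

grad = [0.1, -0.5, 0.25, 0.0]
codes, lo, scale = compress(grad)
restored = decompress(codes, lo, scale)
# Rounding bounds the reconstruction error by half a quantization step,
# while each value now fits in one byte instead of four (or eight).
```

In a real backend the integer codes (plus the two floats of metadata) are what crosses the network, cutting communication volume roughly 4x versus FP32 gradients at the cost of bounded quantization error.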
Alternatives and similar repositories for torch_cgx
Users interested in torch_cgx are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- A schedule language for large model training ☆152 · Updated 5 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- Triton-based Symmetric Memory operators and examples ☆81 · Updated 3 weeks ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆281 · Updated 3 months ago
- FTPipe and related pipeline model parallelism research ☆44 · Updated 2 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆233 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM ☆142 · Updated 8 months ago
- GitHub mirror of the triton-lang/triton repo ☆128 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆93 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆335 · Updated last year
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆25 · Updated 8 months ago
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆276 · Updated 6 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆263 · Updated 4 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆44 · Updated 3 years ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆80 · Updated 7 months ago
- Collection of kernels written in the Triton language ☆178 · Updated last week
- Thunder Research Group's Collective Communication Library ☆47 · Updated 7 months ago
- Framework to reduce autotune overhead to zero for well-known deployments ☆95 · Updated 4 months ago
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆52 · Updated 2 years ago
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week