mk1-project / quickreduce
QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression.
☆33 · Updated 3 weeks ago
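For context on what these libraries do: an all-reduce sums (or otherwise combines) a buffer held by every participant and leaves the complete result on each of them; QuickReduce additionally compresses the data in flight on AMD GPUs. Below is a minimal pure-Python sketch of the classic ring all-reduce pattern, for illustration only — this is not QuickReduce's API, and `ring_allreduce` is a hypothetical helper that simulates the ranks sequentially.

```python
def ring_allreduce(buffers):
    """Sum equal-length vectors held by each 'rank' (one list per rank),
    in place, using the two-phase ring algorithm. Illustrative sketch
    only -- not QuickReduce's actual API or implementation."""
    n = len(buffers)                       # number of ranks
    length = len(buffers[0])
    chunk = (length + n - 1) // n          # elements per chunk (last may be short)

    def seg(c):
        return slice(c * chunk, min((c + 1) * chunk, length))

    # Phase 1: reduce-scatter. In step s, rank r sends chunk (r - s) % n to
    # rank (r + 1) % n, which accumulates it. After n - 1 steps, rank r holds
    # the complete sum for chunk (r + 1) % n.
    for s in range(n - 1):
        snapshot = [b[:] for b in buffers]  # model simultaneous sends
        for r in range(n):
            c, dst = (r - s) % n, (r + 1) % n
            sl = seg(c)
            buffers[dst][sl] = [a + b for a, b in
                                zip(buffers[dst][sl], snapshot[r][sl])]

    # Phase 2: all-gather. Each rank circulates its completed chunk around
    # the ring; receivers overwrite rather than accumulate.
    for s in range(n - 1):
        snapshot = [b[:] for b in buffers]
        for r in range(n):
            c, dst = (r + 1 - s) % n, (r + 1) % n
            sl = seg(c)
            buffers[dst][sl] = snapshot[r][sl]
```

Usage: with three ranks holding `[1, 2, 3, 4]`, `[10, 20, 30, 40]`, and `[100, 200, 300, 400]`, every rank ends with `[111, 222, 333, 444]`. Real libraries run the two phases over actual interconnects (xGMI/Infinity Fabric on AMD), and inline compression shrinks each transmitted chunk to reduce link traffic.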
Alternatives and similar repositories for quickreduce
Users interested in quickreduce commonly compare it to the libraries listed below.
- ☆116 · Updated 8 months ago
- Fast low-bit matmul kernels in Triton ☆371 · Updated last week
- Applied AI experiments and examples for PyTorch ☆294 · Updated 3 weeks ago
- ☆233 · Updated last year
- Fastest kernels written from scratch ☆346 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆353 · Updated this week
- ☆139 · Updated 4 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆76 · Updated this week
- Development repository for the Triton language and compiler ☆131 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆207 · Updated this week
- Shared Middle-Layer for Triton Compilation ☆286 · Updated 2 weeks ago
- An experimental CPU backend for Triton ☆153 · Updated 3 months ago
- Extensible collectives library in Triton ☆87 · Updated 5 months ago
- AI Tensor Engine for ROCm ☆276 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆221 · Updated 4 months ago
- Cataloging released Triton kernels ☆257 · Updated last week
- ☆44 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate ☆307 · Updated this week
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆221 · Updated this week
- ☆237 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆265 · Updated 2 months ago
- Kernels, of the mega variety ☆496 · Updated 3 months ago
- ☆118 · Updated 6 months ago
- Perplexity GPU Kernels ☆461 · Updated last month
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance ☆223 · Updated this week
- oneCCL Bindings for Pytorch* ☆102 · Updated last month
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆114 · Updated last year
- A lightweight design for computation-communication overlap ☆167 · Updated last week
- Composable Kernel: Performance-Portable Programming Model for Machine Learning Tensor Operators ☆465 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated 2 months ago