Dao-AILab / quack
A Quirky Assortment of CuTe Kernels
☆126 · Updated last week
Alternatives and similar repositories for quack
Users interested in quack are comparing it to the libraries listed below.
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆176 · Updated last week
- ☆225 · Updated this week
- Fast low-bit matmul kernels in Triton ☆327 · Updated this week
- Cataloging released Triton kernels. ☆242 · Updated 6 months ago
- Applied AI experiments and examples for PyTorch ☆281 · Updated last month
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆176 · Updated this week
- Extensible collectives library in Triton ☆87 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆100 · Updated last month
- Ring-attention experiments ☆144 · Updated 8 months ago
- ☆83 · Updated 8 months ago
- Kernels, of the mega variety ☆430 · Updated last month
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 11 months ago
- Collection of kernels written in the Triton language ☆136 · Updated 3 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆224 · Updated 7 months ago
- ☆106 · Updated 10 months ago
- A bunch of kernels that might make stuff slower 😉 ☆54 · Updated this week
- ☆214 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆254 · Updated 8 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆205 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆195 · Updated 2 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆255 · Updated this week
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆201 · Updated last year
- Fastest kernels written from scratch ☆289 · Updated 3 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆112 · Updated last year
- ☆94 · Updated 6 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆212 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆80 · Updated 10 months ago
- Best practices for testing advanced Mixtral, DeepSeek, and Qwen series MoE models using Megatron Core MoE. ☆29 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆130 · Updated 6 months ago