meta-pytorch / kraken
Triton-based Symmetric Memory operators and examples
☆ 81 · Updated 3 weeks ago
Alternatives and similar repositories for kraken
Users interested in kraken are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆ 95 · Updated 10 months ago
- ☆ 104 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments ☆ 96 · Updated 4 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆ 143 · Updated 8 months ago
- A bunch of kernels that might make stuff slower 😉 ☆ 75 · Updated last week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆ 93 · Updated last year
- DeeperGEMM: crazy optimized version ☆ 73 · Updated 9 months ago
- Ship correct and fast LLM kernels to PyTorch ☆ 141 · Updated 3 weeks ago
- ☆ 115 · Updated last year
- Ring-attention experiments ☆ 165 · Updated last year
- Applied AI experiments and examples for PyTorch ☆ 315 · Updated 5 months ago
- ☆ 39 · Updated last month
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆ 165 · Updated 3 months ago
- Collection of kernels written in the Triton language ☆ 178 · Updated 2 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance ☆ 326 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆ 233 · Updated this week
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆ 87 · Updated 2 weeks ago
- Triton-based implementation of Sparse Mixture of Experts ☆ 263 · Updated 4 months ago
- ☆ 159 · Updated last year
- ☆ 288 · Updated this week
- This repository contains the experimental PyTorch-native float8 training UX ☆ 226 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch-native components ☆ 219 · Updated last week
- ☆ 65 · Updated 9 months ago
- FlashInfer Bench @ MLSys 2026: Building AI agents to write high-performance GPU kernels ☆ 112 · Updated this week
- Fast low-bit matmul kernels in Triton ☆ 429 · Updated last week
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆ 112 · Updated last month
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆ 191 · Updated this week
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆ 64 · Updated 3 weeks ago
- ☆ 45 · Updated 2 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆ 106 · Updated 7 months ago