gpu-mode / popcorn
⭐18 · Updated this week
Alternatives and similar repositories for popcorn
Users interested in popcorn are comparing it to the libraries listed below.
- Extensible collectives library in Triton ⭐88 · Updated 5 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ⭐209 · Updated 2 weeks ago
- ⭐234 · Updated 2 weeks ago
- Cataloging released Triton kernels. ⭐252 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ⭐293 · Updated 2 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ⭐217 · Updated this week
- Fast low-bit matmul kernels in Triton ⭐356 · Updated last week
- ⭐115 · Updated 8 months ago
- Ring-attention experiments ⭐150 · Updated 10 months ago
- ⭐88 · Updated 9 months ago
- Collection of kernels written in the Triton language ⭐153 · Updated 5 months ago
- Home for the OctoML PyTorch Profiler ⭐114 · Updated 2 years ago
- A Quirky Assortment of CuTe Kernels ⭐435 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ⭐110 · Updated 3 months ago
- Triton-based Symmetric Memory operators and examples ⭐23 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ⭐216 · Updated 4 months ago
- ⭐110 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ⭐224 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ⭐274 · Updated this week
- Ahead-of-Time (AOT) Triton Math Library ⭐76 · Updated last week
- ⭐74 · Updated 5 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ⭐138 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ⭐160 · Updated 2 months ago
- Learn CUDA with PyTorch ⭐72 · Updated last week
- ⭐229 · Updated last year
- Kernels, of the mega variety ⭐481 · Updated 3 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ⭐264 · Updated last month
- A minimal implementation of vllm. ⭐52 · Updated last year
- A schedule language for large model training ⭐149 · Updated 2 weeks ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ⭐98 · Updated 7 years ago
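The last entry benchmarks the one-pass softmax normalizer from the "Online normalizer calculation for softmax" paper (Milakov & Gimelshein, 2018). A minimal sketch of that algorithm, for orientation only; the function name and structure are illustrative and not taken from the repository's code:

```python
import math

def online_softmax(xs):
    """One-pass softmax via the online normalizer trick.

    Keeps a running maximum m and a running sum d of exp(x_i - m),
    rescaling d whenever a new maximum appears, so the input is
    traversed once and exp() never overflows.
    """
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        # Rescale the old sum to the new maximum, then add this term.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # Final probabilities use the same (m, d) pair.
    return [math.exp(x - m) / d for x in xs]
```

The point of the rescaling step is that the running sum stays numerically safe even when early elements are much smaller than later maxima, which is why the trick underpins fused softmax and flash-attention style kernels.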