albanD / subclass_zoo
☆181 · Updated last year
Alternatives and similar repositories for subclass_zoo
Users interested in subclass_zoo are comparing it to the libraries listed below.
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆585 · Updated this week
- extensible collectives library in triton ☆90 · Updated 7 months ago
- A library to analyze PyTorch traces. ☆426 · Updated last week
- ☆243 · Updated last year
- GitHub mirror of triton-lang/triton repo. ☆98 · Updated this week
- Applied AI experiments and examples for PyTorch ☆303 · Updated 2 months ago
- ☆246 · Updated last week
- ☆337 · Updated last week
- ☆147 · Updated 10 months ago
- A library of GPU kernels for sparse matrix operations. ☆275 · Updated 4 years ago
- Collection of kernels written in the Triton language ☆164 · Updated 7 months ago
- Shared Middle-Layer for Triton Compilation ☆306 · Updated 2 weeks ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆277 · Updated last week
- A Quirky Assortment of CuTe Kernels ☆651 · Updated 2 weeks ago
- ☆145 · Updated 9 months ago
- Fastest kernels written from scratch ☆386 · Updated last month
- MLIR-based partitioning system ☆148 · Updated this week
- Cataloging released Triton kernels. ☆265 · Updated 2 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated last month
- PyTorch RFCs (experimental) ☆135 · Updated 5 months ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆360 · Updated this week
- Stores documents and resources used by the OpenXLA developer community ☆131 · Updated last year
- An open-source efficient deep learning framework/compiler, written in Python. ☆733 · Updated 2 months ago
- Fast low-bit matmul kernels in Triton ☆392 · Updated 2 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated last week
- A schedule language for large model training ☆151 · Updated 2 months ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆136 · Updated 3 years ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆55 · Updated last month
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆104 · Updated this week