open-lm-engine / flash-model-architectures
A bunch of kernels that might make stuff slower 😉
☆59 · Updated this week
Alternatives and similar repositories for flash-model-architectures
Users interested in flash-model-architectures are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆88 · Updated 6 months ago
- Ring-attention experiments (a blockwise-attention sketch follows this list) ☆152 · Updated 11 months ago
- Collection of kernels written in the Triton language (a minimal Triton kernel appears after this list) ☆155 · Updated 5 months ago
- Cataloging released Triton kernels. ☆261 · Updated 3 weeks ago
- ☆90 · Updated 10 months ago
- ☆240 · Updated this week
- Applied AI experiments and examples for PyTorch ☆296 · Updated last month
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆230 · Updated this week
- ☆112 · Updated last year
- Fast low-bit matmul kernels in Triton ☆373 · Updated last week
- This repository contains the experimental PyTorch native float8 training UX (the scaling idea is sketched after this list) ☆224 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated 2 weeks ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆99 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆121 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity (the pruning pattern is sketched after this list) ☆82 · Updated last year
- Triton-based Symmetric Memory operators and examples ☆30 · Updated last week
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆115 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts (a routing sketch follows this list). ☆241 · Updated last month
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆318 · Updated this week
- A simple but fast implementation of matrix multiplication in CUDA. ☆39 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆213 · Updated this week
- Effective transpose on Hopper GPU ☆24 · Updated 3 weeks ago
- ☆28 · Updated 8 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆226 · Updated 4 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 3 months ago
- Transformers components but in Triton ☆34 · Updated 4 months ago
- Explore training for quantized models ☆24 · Updated 2 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (a reference transform is sketched after this list) ☆239 · Updated 3 weeks ago
- ☆31 · Updated this week
- ☆159 · Updated 2 years ago
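
A few of the techniques behind these libraries are easier to grasp with a short sketch. First, ring attention: in the real algorithm, key/value shards live on different devices and rotate around a ring, but the numerical core is blockwise attention with online-softmax accumulation. Below is a minimal single-device sketch of that core; all names are illustrative and nothing is taken from the repo itself.

```python
# Single-device sketch of the blockwise online-softmax accumulation that
# ring attention builds on. In the real algorithm, each KV chunk lives on a
# different device and chunks rotate around a ring; here we just iterate
# over the chunks locally.
import torch

def chunked_attention(q, k, v, chunk_size=128):
    # q, k, v: (seq_len, head_dim) for a single attention head.
    scale = q.shape[-1] ** -0.5
    m = torch.full((q.shape[0],), float("-inf"))  # running row-wise max
    l = torch.zeros(q.shape[0])                   # running softmax denominator
    acc = torch.zeros_like(q)                     # unnormalized output
    for start in range(0, k.shape[0], chunk_size):
        k_c = k[start:start + chunk_size]
        v_c = v[start:start + chunk_size]
        s = (q @ k_c.T) * scale                   # scores against this chunk
        m_new = torch.maximum(m, s.max(dim=-1).values)
        correction = torch.exp(m - m_new)         # rescale previous state
        p = torch.exp(s - m_new[:, None])
        l = l * correction + p.sum(dim=-1)
        acc = acc * correction[:, None] + p @ v_c
        m = m_new
    return acc / l[:, None]

q, k, v = (torch.randn(256, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-4)
```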
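For the Triton kernel collections, this is the standard "hello world" of the language: each program instance processes one BLOCK_SIZE-wide slice of an elementwise add. It follows the official tutorial pattern and is not code from any repo listed above; it needs a CUDA GPU to run.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the ragged last block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.randn(10_000, device="cuda")
y = torch.randn(10_000, device="cuda")
assert torch.allclose(add(x, y), x + y)
```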
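The float8 training repo wires scaled float8 matmuls into autograd; the numerical idea underneath is dynamic tensorwise scaling. A hand-rolled sketch of just the numerics, assuming nothing about the repo's actual API:

```python
# Dynamic tensorwise scaling to float8: pick a scale from the tensor's
# absolute max, quantize to torch.float8_e4m3fn, and dequantize to inspect
# the roundtrip error. This only shows the quantization math, not training.
import torch

F8 = torch.float8_e4m3fn
F8_MAX = torch.finfo(F8).max  # 448.0 for e4m3fn

def quantize_fp8(x: torch.Tensor):
    # Map the tensor's amax onto the top of the representable fp8 range.
    scale = F8_MAX / x.abs().max().clamp(min=1e-12)
    x_f8 = (x * scale).clamp(-F8_MAX, F8_MAX).to(F8)
    return x_f8, scale

def dequantize_fp8(x_f8: torch.Tensor, scale: torch.Tensor):
    return x_f8.to(torch.float32) / scale

x = torch.randn(1024)
x_f8, scale = quantize_fp8(x)
err = (dequantize_fp8(x_f8, scale) - x).abs().max()
print(f"max roundtrip error: {err.item():.4f}")
```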
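The 2:4 sparsity item refers to the semi-structured pattern NVIDIA sparse tensor cores accelerate: in every group of four consecutive weights, at most two may be nonzero. A sketch of the simplest way to produce that pattern, magnitude-based pruning (the repo's kernels then exploit the pattern at inference; this is not its code):

```python
import torch

def prune_2_to_4(w: torch.Tensor) -> torch.Tensor:
    # w: (rows, cols) with cols divisible by 4.
    groups = w.reshape(-1, 4)
    # Keep the two largest-magnitude entries in each group of four.
    topk = groups.abs().topk(k=2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, topk, True)
    return (groups * mask).reshape(w.shape)

w = torch.randn(8, 16)
w_sparse = prune_2_to_4(w)
# At most two nonzeros survive in each group of four columns.
assert (w_sparse.reshape(-1, 4).ne(0).sum(dim=-1) <= 2).all()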
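For the sparse Mixture of Experts entry, the heart of the layer is top-k token routing: a router scores experts per token, each token goes to its k best experts, and outputs are combined with renormalized router weights. A Triton implementation fuses the gather/scatter; this loop version only shows the semantics, and all names are illustrative.

```python
import torch

def moe_forward(x, router, experts, k=2):
    # x: (tokens, d_model); router: (d_model, n_experts) weight matrix.
    logits = x @ router
    weights, chosen = logits.softmax(dim=-1).topk(k, dim=-1)
    weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over top-k
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        # Tokens that routed to expert e, and which of their k slots chose it.
        token_idx, slot = (chosen == e).nonzero(as_tuple=True)
        if token_idx.numel():
            out[token_idx] += weights[token_idx, slot, None] * expert(x[token_idx])
    return out

d, n_experts = 64, 4
experts = [torch.nn.Linear(d, d) for _ in range(n_experts)]
router = torch.randn(d, n_experts)
y = moe_forward(torch.randn(32, d), router, experts)
```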
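Finally, a pure-PyTorch reference for the fast Walsh-Hadamard transform that the CUDA kernel above accelerates: the classic O(n log n) butterfly over a power-of-two-length last dimension, unnormalized (so applying it twice scales by n).

```python
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    n = x.shape[-1]
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # View the last dim as (n / 2h) blocks of two h-wide halves.
        x = x.reshape(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = x[..., 0, :], x[..., 1, :]
        x = torch.stack((a + b, a - b), dim=-2)  # butterfly step
        x = x.reshape(*x.shape[:-3], n)
        h *= 2
    return x

x = torch.randn(4, 8)
# The unnormalized transform is its own inverse up to a factor of n.
assert torch.allclose(fwht(fwht(x)), x * 8, atol=1e-5)
```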