open-lm-engine / accelerated-model-architectures
A bunch of kernels that might make stuff slower 😉
☆63 · Updated this week
Alternatives and similar repositories for accelerated-model-architectures
Users interested in accelerated-model-architectures are comparing it to the libraries listed below.
- Extensible collectives library in Triton ☆90 · Updated 6 months ago
- Triton-based Symmetric Memory operators and examples ☆48 · Updated last week
- Ring-attention experiments ☆155 · Updated last year
- ☆242 · Updated this week
- Collection of kernels written in the Triton language (see the sketch after this list) ☆157 · Updated 6 months ago
- Cataloging released Triton kernels ☆263 · Updated last month
- ☆93 · Updated 11 months ago
- Applied AI experiments and examples for PyTorch ☆299 · Updated 2 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆246 · Updated 3 weeks ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆120 · Updated this week
- Fast low-bit matmul kernels in Triton ☆385 · Updated last week
- Experimental PyTorch-native float8 training UX ☆223 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆125 · Updated 4 months ago
- ☆112 · Updated last year
- Tritonbench: a collection of PyTorch custom operators with example inputs to measure their performance ☆264 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆84 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments ☆84 · Updated last month
- ☆13 · Updated 3 weeks ago
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆107 · Updated last week
- ☆35 · Updated this week
- Explore training for quantized models ☆25 · Updated 3 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline ☆117 · Updated last year
- A simple yet fast implementation of matrix multiplication in CUDA ☆39 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models, leveraging PyTorch-native components ☆215 · Updated this week
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate ☆491 · Updated this week
- Learn CUDA with PyTorch ☆92 · Updated last month
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆197 · Updated 4 months ago
- A minimal cache manager for PagedAttention, on top of llama3 ☆124 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆233 · Updated 5 months ago
- ☆23 · Updated 5 months ago
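
Several of the entries above are collections of kernels written in the Triton language. As a rough illustration of what such a kernel looks like, here is a minimal vector-add sketch in the style of the standard Triton tutorial; it is not taken from any of the repositories listed here, and the names (`add_kernel`, `add`, `BLOCK_SIZE=1024`) are illustrative choices only.

```python
# Minimal Triton vector-add kernel: an illustrative sketch, not code from any
# repository listed above.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance processes one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against the final, partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # Launch one program per BLOCK_SIZE-sized chunk of the input.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out
```

Kernel collections of the kind listed above typically wrap many such `@triton.jit` kernels (matmul, attention, MoE routing, quantized GEMM) behind PyTorch-facing Python entry points like the `add` wrapper here.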