NVlabs / vibetensor
Our first fully AI-generated deep learning system
☆481 · Updated last week
Alternatives and similar repositories for vibetensor
Users interested in vibetensor are comparing it to the libraries listed below.
- Helpful kernel tutorials and examples for tile-based GPU programming ☆630 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆228 · Updated this week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆194 · Updated this week
- Ship correct and fast LLM kernels to PyTorch ☆140 · Updated 3 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆324 · Updated this week
- ring-attention experiments ☆165 · Updated last year
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆64 · Updated 2 weeks ago
- ☆118 · Updated 8 months ago
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆739 · Updated this week
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆427 · Updated last week
- ☆286 · Updated this week
- An early research stage expert-parallel load balancer for MoE models based on linear programming. ☆495 · Updated 2 months ago
- kernels, of the mega variety ☆665 · Updated last week
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆440 · Updated last month
- mHC kernels implemented in CUDA ☆249 · Updated 3 weeks ago
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- Block Diffusion for Ultra-Fast Speculative Decoding ☆459 · Updated this week
- Collection of kernels written in Triton language ☆178 · Updated last week
- A Quirky Assortment of CuTe Kernels ☆781 · Updated this week
- Cataloging released Triton kernels. ☆292 · Updated 5 months ago
- JAX backend for SGL ☆234 · Updated this week
- Efficient LLM Inference over Long Sequences ☆394 · Updated 7 months ago
- extensible collectives library in triton ☆95 · Updated 10 months ago
- ☆131 · Updated 8 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆233 · Updated 7 months ago
- PyTorch-native post-training at scale ☆613 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated last week
- ☆104 · Updated last year
- 🤖FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA EA. ☆250 · Updated this week