lianakoleva / no-libtorch-compile
☆17 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for no-libtorch-compile
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last month
- A place to store reusable transformer components of my own creation or found on the interwebs ☆44 · Updated 2 weeks ago
- Make Triton easier ☆41 · Updated 5 months ago
- Experiment of using Tangent to autodiff Triton ☆72 · Updated 9 months ago
- Extensible collectives library in Triton ☆71 · Updated last month
- Personal solutions to the Triton Puzzles ☆16 · Updated 4 months ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 3 months ago
- ☆18 · Updated last month
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated 6 months ago
- ☆73 · Updated 4 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆29 · Updated 2 weeks ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆35 · Updated 4 months ago
- Learn CUDA with PyTorch ☆14 · Updated 2 weeks ago
- PyTorch half-precision GEMM lib w/ fused optional bias + optional ReLU/GELU ☆39 · Updated 2 months ago
- ☆18 · Updated 7 months ago
- TORCH_LOGS parser for PT2 ☆22 · Updated last week
- ☆77 · Updated 5 months ago
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated last year
- ☆20 · Updated last year
- Awesome Triton Resources ☆18 · Updated last month
- ☆39 · Updated 10 months ago
- RWKV model implementation ☆38 · Updated last year
- GPU benchmark ☆43 · Updated last month
- ☆13 · Updated 4 months ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆14 · Updated 9 months ago
- ☆43 · Updated 2 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated 11 months ago
- Utilities for PyTorch distributed ☆23 · Updated last year
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆19 · Updated this week