aredden / torch-cublas-hgemm
PyTorch half-precision GEMM library with fused optional bias and optional ReLU/GELU
☆39 · Updated 2 months ago
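For reference, the fused operation this library provides is equivalent to the unfused PyTorch sketch below. The function name and signature here are illustrative only, not the library's actual API; the point is to show what "fused optional bias + optional relu/gelu" computes in one kernel.

```python
import torch
import torch.nn.functional as F
from typing import Optional

def hgemm_bias_act(a: torch.Tensor, b: torch.Tensor,
                   bias: Optional[torch.Tensor] = None,
                   act: Optional[str] = None) -> torch.Tensor:
    """Unfused reference for a half-precision GEMM with optional bias
    and an optional ReLU/GELU epilogue (the ops the library fuses).
    Illustrative helper, not the library's API."""
    out = a @ b                      # fp16 GEMM (dispatched to cuBLAS on CUDA)
    if bias is not None:
        out = out + bias             # optional bias add
    if act == "relu":
        out = F.relu(out)            # optional ReLU epilogue
    elif act == "gelu":
        out = F.gelu(out)            # optional GELU epilogue
    return out

# Example usage (assumes a CUDA device is available):
a = torch.randn(128, 64, device="cuda", dtype=torch.half)
b = torch.randn(64, 32, device="cuda", dtype=torch.half)
bias = torch.randn(32, device="cuda", dtype=torch.half)
y = hgemm_bias_act(a, b, bias, act="gelu")
```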
Related projects
Alternatives and complementary repositories for torch-cublas-hgemm
- Experiment using Tangent to autodiff Triton · ☆72 · Updated 9 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆50 · Updated this week
- QuIP quantization · ☆46 · Updated 8 months ago
- FlexAttention with FlashAttention-3 support · ☆27 · Updated last month
- Simple and fast low-bit matmul kernels in CUDA / Triton · ☆145 · Updated this week
- Writing FLUX in Triton · ☆30 · Updated last month
- Experiments testing various linear attention designs · ☆56 · Updated 6 months ago
- Triton kernels for Flux · ☆17 · Updated last week
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 · ☆35 · Updated 4 months ago
- Triton implementation of the HyperAttention algorithm · ☆46 · Updated 11 months ago
- Faster PyTorch bitsandbytes 4-bit FP4 nn.Linear ops · ☆23 · Updated 8 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training · ☆113 · Updated 7 months ago
- JAX bindings for Flash Attention v2 · ☆79 · Updated 4 months ago
- [WIP] Context-parallel attention that works with torch.compile · ☆49 · Updated this week
- Extensible collectives library in Triton · ☆72 · Updated last month
- Patch convolution to avoid large GPU memory usage of Conv2D · ☆79 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters · ☆104 · Updated last month
- An implementation of the PSGD Kron second-order optimizer for PyTorch · ☆16 · Updated this week