kuterd / opal_ptx
Experimental GPU language with meta-programming
☆22Updated 8 months ago
Alternatives and similar repositories for opal_ptx:
Users interested in opal_ptx are comparing it to the libraries listed below.
- High-Performance SGEMM on CUDA devices☆90Updated 3 months ago
- GPU benchmark☆60Updated 3 months ago
- PTX-Tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7)☆66Updated last month
- Research implementation of Native Sparse Attention (arXiv:2502.11089)☆53Updated 2 months ago
- Experiment using Tangent to autodiff Triton☆78Updated last year
- Extensible collectives library in Triton☆86Updated last month
- Supporting PyTorch FSDP for optimizers☆80Updated 5 months ago
- Load compute kernels from the Hub☆116Updated this week
- ☆12Updated last year
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× …☆55Updated this week
- ☆68Updated 4 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters☆126Updated 5 months ago
- ☆21Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance.☆122Updated last week
- Working implementation of DeepSeek MLA☆41Updated 4 months ago
- ☆88Updated last year
- ☆43Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers☆64Updated 2 weeks ago
- Write a fast kernel and run it on Discord. See how you compare against the best!☆44Updated this week
- ☆13Updated 10 months ago
- ☆14Updated 10 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8.☆45Updated 9 months ago
- ☆131Updated last month
- LLM training in simple, raw C/CUDA☆94Updated last year
- ☆32Updated 11 months ago
- ☆31Updated 4 months ago
- DPO, but faster 🚀☆42Updated 5 months ago
- prime-rl is a codebase for decentralized RL training at scale☆89Updated this week
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry☆41Updated last year
- FlexAttention w/ FlashAttention3 Support☆26Updated 7 months ago