☆152 · Jul 4, 2025 · Updated 9 months ago
Alternatives and similar repositories for mdy_triton
Users interested in mdy_triton are comparing it to the libraries listed below.
- ☆105 · Sep 9, 2024 · Updated last year
- Distributed Compiler based on Triton for Parallel Systems ☆1,414 · Apr 22, 2026 · Updated last week
- Cataloging released Triton kernels. ☆302 · Sep 9, 2025 · Updated 7 months ago
- A bunch of kernels that might make stuff slower 😉 ☆88 · Updated this week
- Transformers components but in Triton ☆34 · May 9, 2025 · Updated 11 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Mar 22, 2026 · Updated last month
- DeeperGEMM: crazy optimized version ☆86 · May 5, 2025 · Updated 11 months ago
- LLM notes covering model inference, transformer model structure, and LLM framework code analysis ☆877 · Apr 16, 2026 · Updated 2 weeks ago
- Puzzles for learning Triton ☆2,404 · Apr 1, 2026 · Updated 3 weeks ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆82 · Aug 12, 2024 · Updated last year
- Efficient Triton implementation of Native Sparse Attention. ☆275 · May 23, 2025 · Updated 11 months ago
- Puzzles for learning Triton, playable with minimal environment configuration! ☆679 · Mar 17, 2026 · Updated last month
- ☆48 · Jun 16, 2025 · Updated 10 months ago
- ☆245 · Nov 19, 2025 · Updated 5 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆25 · Jun 6, 2024 · Updated last year
- Implement Flash Attention using Cute. ☆106 · Dec 17, 2024 · Updated last year
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆989 · Feb 5, 2026 · Updated 2 months ago
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Oct 13, 2025 · Updated 6 months ago
- ☆322 · Updated this week
- Cute layout visualization ☆37 · Jan 18, 2026 · Updated 3 months ago
- ☆98 · Mar 26, 2025 · Updated last year
- Counting-Stars (★) ☆83 · Nov 24, 2025 · Updated 5 months ago
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Jan 23, 2024 · Updated 2 years ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 6 months ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆248 · Jun 15, 2025 · Updated 10 months ago
- Hands-On Practical MLIR Tutorial ☆54 · Aug 21, 2025 · Updated 8 months ago
- CUDA kernels for linear attention variants, written in CuTe DSL and CUTLASS C++. ☆474 · Updated this week
- Adding a new backend (Cpu0) to LLVM 17.0.6 ☆12 · Apr 22, 2024 · Updated 2 years ago
- A Chinese-annotated version of GPGPU-Sim, containing the latest GPGPU-Sim simulator code with Chinese comments to help Chinese-speaking users better understand and use the simulator ☆26 · Dec 18, 2024 · Updated last year
- 🚀 Efficient implementations for emerging model architectures ☆4,999 · Updated this week
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,286 · Jul 29, 2023 · Updated 2 years ago
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆10,736 · Apr 20, 2026 · Updated last week
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆250 · Sep 12, 2025 · Updated 7 months ago
- Source code of the paper "Prediction of Molecular Absorption Wavelength Using Deep Neural Networks" ☆10 · May 29, 2022 · Updated 3 years ago
- ☆186 · May 7, 2025 · Updated 11 months ago
- Ongoing research project for code & math LLMs ☆31 · Jul 4, 2025 · Updated 9 months ago
- ☆124 · May 28, 2024 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- qwen-nsa ☆87 · Oct 14, 2025 · Updated 6 months ago