☆152 · Updated 8 months ago (Jul 4, 2025)
Alternatives and similar repositories for mdy_triton
Users interested in mdy_triton are comparing it to the libraries listed below.
- ☆105 · Updated last year (Sep 9, 2024)
- Distributed Compiler based on Triton for Parallel Systems ☆1,398 · Updated 2 weeks ago (Mar 11, 2026)
- A bunch of kernels that might make stuff slower 😉 ☆85 · Updated this week
- Cataloging released Triton kernels. ☆300 · Updated 6 months ago (Sep 9, 2025)
- Transformers components but in Triton ☆34 · Updated 10 months ago (May 9, 2025)
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Updated last week (Mar 22, 2026)
- DeeperGEMM: crazy optimized version ☆75 · Updated 10 months ago (May 5, 2025)
- LLM notes, including model inference, transformer model structure, and LLM framework code analysis. ☆872 · Updated 2 weeks ago (Mar 14, 2026)
- Puzzles for learning Triton ☆2,348 · Updated last week (Mar 18, 2026)
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆82 · Updated last year (Aug 12, 2024)
- Efficient Triton implementation of Native Sparse Attention. ☆272 · Updated 10 months ago (May 23, 2025)
- Cute layout visualization ☆33 · Updated 2 months ago (Jan 18, 2026)
- Puzzles for learning Triton, playable with minimal environment configuration! ☆654 · Updated last week (Mar 17, 2026)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year (Jun 6, 2024)
- ☆48 · Updated 9 months ago (Jun 16, 2025)
- Implement Flash Attention using Cute. ☆103 · Updated last year (Dec 17, 2024)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆978 · Updated last month (Feb 5, 2026)
- ☆310 · Updated last week (Mar 22, 2026)
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆167 · Updated 5 months ago (Oct 13, 2025)
- ☆97 · Updated last year (Mar 26, 2025)
- Counting-Stars (★) ☆83 · Updated 4 months ago (Nov 24, 2025)
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Updated 2 years ago (Jan 23, 2024)
- Hands-On Practical MLIR Tutorial ☆53 · Updated 7 months ago (Aug 21, 2025)
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆247 · Updated 9 months ago (Jun 15, 2025)
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Updated 5 months ago (Oct 10, 2025)
- Adding a new Cpu0 backend to LLVM 17.0.6 ☆12 · Updated last year (Apr 22, 2024)
- Ongoing research project for code & math LLMs ☆27 · Updated 8 months ago (Jul 4, 2025)
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- 📚 LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners 🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA. 🎉 ☆10,022 · Updated last week (Mar 23, 2026)
- A Chinese-annotated version of the GPGPU-Sim code, containing the latest GPGPU-Sim simulator source with Chinese comments to help Chinese-speaking users better understand and use the simulator. ☆27 · Updated last year (Dec 18, 2024)
- This is a series of GPU optimization topics. Here we will introduce how to optimize the CUDA kernel in detail. I will introduce several… ☆1,259 · Updated 2 years ago (Jul 29, 2023)
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆247 · Updated 6 months ago (Sep 12, 2025)
- ☆184 · Updated 10 months ago (May 7, 2025)
- Source code of the paper "Prediction of Molecular Absorption Wavelength Using Deep Neural Networks" ☆10 · Updated 3 years ago (May 29, 2022)
- ☆124 · Updated last year (May 28, 2024)
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year (Sep 22, 2024)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆335 · Updated this week
- qwen-nsa ☆87 · Updated 5 months ago (Oct 14, 2025)
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,432 · Updated this week