The simplest, yet fast, implementation of matrix multiplication in CUDA.
☆40 · Updated Jul 26, 2024
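The description above names the technique: a GEMM kernel written for clarity first, speed second. As a rough illustration of the most basic form such a kernel can take (this sketch is not taken from simpleGEMM; the kernel name and launch configuration are invented for the example):

```cuda
#include <cuda_runtime.h>

// Naive GEMM: each thread computes one element of C = A * B.
// A is M x K, B is K x N, C is M x N, all row-major.
// Kernel name and block shape are illustrative, not from simpleGEMM.
__global__ void sgemm_naive(int M, int N, int K,
                            const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

// Example launch: 16x16 threads per block, one thread per output element.
// dim3 block(16, 16);
// dim3 grid((N + 15) / 16, (M + 15) / 16);
// sgemm_naive<<<grid, block>>>(M, N, K, dA, dB, dC);
```

A fast implementation goes further than this: staging tiles of A and B in shared memory, coalescing global loads, and accumulating in registers, which is exactly the ground covered by several of the repositories listed below.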
Alternatives and similar repositories for simpleGEMM
Users that are interested in simpleGEMM are comparing it to the libraries listed below
- Personal solutions to the Triton Puzzles (☆20, updated Jul 18, 2024)
- ☆11, updated Oct 11, 2023
- Benchmark tests supporting the TiledCUDA library (☆18, updated Nov 19, 2024)
- PyTorch implementation of the Flash Spectral Transform Unit (☆21, updated Sep 19, 2024)
- A tracing JIT compiler for PyTorch (☆13, updated Dec 11, 2021)
- ☆16, updated Sep 24, 2024
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆36, updated Jun 7, 2024)
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning purposes only (☆18, updated Jun 13, 2024)
- Code for the paper "Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving" (☆19, updated May 25, 2023)
- The accompanying code for "Simplifying and Understanding State Space Models with Diagonal Linear RNNs" (Ankit Gupta, Harsh Mehta, Jonatha…) (☆23, updated Dec 30, 2022)
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf (☆21, updated Jul 29, 2024)
- ☆22, updated Dec 15, 2023
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) (☆24, updated Jun 6, 2024)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27, updated Apr 17, 2024)
- ☆261, updated Jul 11, 2024
- ☆27, updated Aug 30, 2023
- Matrix multiplication on GPU using shared memory, considering coalescing and bank conflicts (☆25, updated Aug 29, 2022)
- The simplest implementation of recent sparse attention patterns for efficient LLM inference (☆91, updated Jul 17, 2025)
- Accelerated first-order parallel associative scan (☆194, updated Jan 7, 2026)
- ☆31, updated Jul 2, 2023
- Flash Attention in ~100 lines of CUDA (forward pass only) (☆1,079, updated Dec 30, 2024)
- Transformer components, but in Triton (☆34, updated May 9, 2025)
- Triton implementation of FlashAttention2 that adds custom masks (☆168, updated Aug 14, 2024)
- Feature interaction interpretability via interaction detection (☆35, updated Jun 12, 2023)
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… (☆54, updated Jan 12, 2026)
- Implementation of Flash Attention using CuTe (☆101, updated Dec 17, 2024)
- Student version of Mini-SLAM (☆10, updated Mar 16, 2024)
- Train I3D on the NTU-RGB+D dataset in Keras (☆12, updated Feb 5, 2019)
- lshash for Python 3 (☆10, updated Mar 21, 2018)
- Statistical discontinuous constituent parsing (☆11, updated Feb 15, 2018)
- Slimebound character mod for Slay the Spire (☆14, updated Jun 30, 2020)
- ☆11, updated Jun 15, 2019
- Lab solutions for the ICS course (☆10, updated Jan 20, 2013)
- Jupyter notebooks for the neuroptica simulator (☆11, updated Mar 7, 2019)
- Julia implementation of NEAT and HyperNEAT (☆10, updated Sep 3, 2020)
- Triton-based Symmetric Memory operators and examples (☆85, updated Jan 15, 2026)
- ☆20, updated May 24, 2025
- ☆12, updated Jul 7, 2022
- Codebase associated with the PyTorch compiler tutorial (☆47, updated Sep 7, 2019)