Alternatives and similar repositories for stk (☆115, updated Aug 26, 2024)
Users interested in stk are comparing it to the libraries listed below.
- Triton-based implementation of Sparse Mixture of Experts. (☆270, updated Oct 3, 2025)
- Experiment of using Tangent to autodiff Triton (☆82, updated Jan 22, 2024)
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… (☆30, updated Mar 13, 2026)
- (☆14, updated Mar 8, 2025)
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. (☆598, updated Aug 12, 2025)
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches (☆15, updated Jun 21, 2019)
- A library of GPU kernels for sparse matrix operations. (☆283, updated Nov 24, 2020)
- Tutel MoE: optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 (☆976, updated Mar 6, 2026)
- (☆124, updated May 28, 2024)
- Framework to reduce autotune overhead to zero for well-known deployments. (☆97, updated Sep 19, 2025)
- (☆20, updated May 30, 2024)
- FlexAttention with FlashAttention3 support (☆27, updated Oct 5, 2024)
- Benchmark tests supporting the TiledCUDA library. (☆18, updated Nov 19, 2024)
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity (☆237, updated Sep 24, 2023)
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" (☆15, updated Mar 6, 2025)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) (☆24, updated Jun 6, 2024)
- Block-sparse primitives for PyTorch (☆158, updated Apr 5, 2021)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" (☆18, updated Mar 15, 2024)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19, updated Oct 9, 2022)
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton. (☆75, updated Aug 2, 2024)
- Collection of kernels written in the Triton language (☆184, updated Jan 27, 2026)
- Artifact for PPoPP'20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" (☆41, updated Nov 16, 2021)
- Parallel Associative Scan for Language Models (☆18, updated Jan 8, 2024)
- Code for the ECCV'22 paper "Learning to Train a Point Cloud Reconstruction Network without Matching" (☆10, updated Nov 16, 2022)
- PyTorch bindings for CUTLASS grouped GEMM. (☆146, updated May 29, 2025)
- Awesome Triton Resources (☆39, updated Apr 27, 2025)
- (☆19, updated Dec 4, 2025)
- (☆28, updated Aug 14, 2024)
- PyTorch implementation of "PaLM: A Hybrid Parser and Language Model" (☆10, updated Jan 7, 2020)
- (☆36, updated Feb 26, 2024)
- PyTorch-Based Fast and Efficient Processing for Various Machine Learning Applications with Diverse Sparsity (☆121, updated this week)
- Accelerated First-Order Parallel Associative Scan (☆196, updated Jan 7, 2026)
- A language and compiler for irregular tensor programs. (☆152, updated Nov 29, 2024)
- (☆11, updated Oct 11, 2023)
- Play GEMM with TVM (☆92, updated Jul 22, 2023)
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… (☆16, updated Feb 4, 2025)
- SparseTIR: Sparse Tensor Compiler for Deep Learning (☆144, updated Mar 31, 2023)
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an… (☆13, updated Apr 17, 2025)
- Code for the paper "RFNet: Recurrent Forward Network for Dense Point Cloud Completion" (☆20, updated Jan 17, 2022)
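Several entries above (the parallel associative scan projects) accelerate the same primitive: an inclusive scan under an arbitrary associative operator, which covers cumulative sums and the linear recurrences used in recurrent-style language models. A minimal pure-Python sketch of that primitive, assuming nothing about either repo's actual API (the `combine` operator for the linear-recurrence example is the standard pair formulation, not code from those libraries):

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")

def inclusive_scan(xs: List[T], op: Callable[[T, T], T]) -> List[T]:
    """Sequential reference: out[i] = xs[0] op xs[1] op ... op xs[i]."""
    out: List[T] = []
    acc = None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

# The linear recurrence h[t] = a[t] * h[t-1] + b[t] (with h[-1] = 0)
# becomes a scan over (a, b) pairs under this associative combiner:
def combine(left, right):
    a1, b1 = left
    a2, b2 = right
    return (a1 * a2, a2 * b1 + b2)

coeffs = [(0.5, 1.0), (0.5, 1.0), (0.5, 1.0)]
states = [b for (_, b) in inclusive_scan(coeffs, combine)]
# states[t] equals h[t]: [1.0, 1.5, 1.75]
```

Because `combine` is associative, the same result can be computed in O(log n) parallel steps rather than the sequential loop shown here; doing that efficiently on GPU is what the scan repositories above provide.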