Alternatives and similar repositories for stk
stk ☆114 · Updated Aug 26, 2024

Users interested in stk are comparing it to the libraries listed below.
- Triton-based implementation of Sparse Mixture of Experts. ☆272 · Updated Oct 3, 2025
- Experiment of using Tangent to autodiff Triton ☆82 · Updated Jan 22, 2024
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Updated Mar 22, 2026
- ☆14 · Updated Mar 8, 2025
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches ☆15 · Updated Jun 21, 2019
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆598 · Updated Aug 12, 2025
- A library of GPU kernels for sparse matrix operations. ☆285 · Updated Nov 24, 2020
- Tutel MoE: an optimized Mixture-of-Experts library supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆981 · Updated Mar 27, 2026
- ☆124 · Updated May 28, 2024
- Framework to reduce autotune overhead to zero for well-known deployments. ☆98 · Updated Sep 19, 2025
- ☆20 · Updated May 30, 2024
- FlexAttention w/ FlashAttention3 support ☆27 · Updated Oct 5, 2024
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆239 · Updated Sep 24, 2023
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" ☆15 · Updated Mar 6, 2025
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated Jun 6, 2024
- Block-sparse primitives for PyTorch (a reference sketch of block-sparse matmul follows this list) ☆158 · Updated Apr 5, 2021
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (the memory trade-off it targets is sketched after this list). ☆75 · Updated Aug 2, 2024
- Collection of kernels written in the Triton language ☆187 · Updated Jan 27, 2026
- Parallel Associative Scan for Language Models ☆18 · Updated Jan 8, 2024
- Code for the ECCV'22 paper "Learning to Train a Point Cloud Reconstruction Network without Matching" ☆10 · Updated Nov 16, 2022
- PyTorch bindings for CUTLASS grouped GEMM (a plain-PyTorch reference of what a grouped GEMM computes follows this list). ☆148 · Updated May 29, 2025
- Awesome Triton Resources ☆39 · Updated Apr 27, 2025
- ☆19 · Updated Dec 4, 2025
- ☆28 · Updated Aug 14, 2024
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model. ☆10 · Updated Jan 7, 2020
- ☆36 · Updated Feb 26, 2024
- PyTorch-based fast and efficient processing for various machine learning applications with diverse sparsity ☆121 · Updated Mar 30, 2026
- A language and compiler for irregular tensor programs. ☆152 · Updated Nov 29, 2024
- Accelerated First-Order Parallel Associative Scan ☆197 · Updated Jan 7, 2026
- ☆11 · Updated Oct 11, 2023
- Play GEMM with TVM ☆91 · Updated Jul 22, 2023
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆16 · Updated Feb 4, 2025
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆144 · Updated Mar 31, 2023
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an… ☆13 · Updated Apr 17, 2025
- Code for RFNet: Recurrent Forward Network for Dense Point Cloud Completion ☆20 · Updated Jan 17, 2022
- Kinetics: Rethinking Test-Time Scaling Laws ☆87 · Updated Jul 11, 2025
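
Several entries above, including stk itself and the block-sparse primitives, center on block-sparse matrix multiplication. Below is a minimal plain-PyTorch sketch of the "SDD" pattern (dense × dense, computed only at selected output blocks); the function name and block size are ours for illustration, and the real libraries implement this as fused GPU kernels rather than a Python loop.

```python
import torch

def sdd_reference(a, b, block_mask, bs=32):
    """Reference "SDD" block-sparse matmul: compute a @ b only at the
    output blocks selected by block_mask (shape [M//bs, N//bs]).
    Libraries like stk fuse this on the GPU; this loop just spells out
    the computation being specified."""
    m, _ = a.shape
    _, n = b.shape
    out = torch.zeros(m, n, dtype=a.dtype, device=a.device)
    for bi in range(m // bs):
        for bj in range(n // bs):
            if block_mask[bi, bj]:
                out[bi*bs:(bi+1)*bs, bj*bs:(bj+1)*bs] = (
                    a[bi*bs:(bi+1)*bs, :] @ b[:, bj*bs:(bj+1)*bs]
                )
    return out

# Toy usage: keep roughly half of the 32x32 output blocks.
a = torch.randn(64, 128)
b = torch.randn(128, 64)
mask = torch.rand(2, 2) > 0.5
c = sdd_reference(a, b, mask)
```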
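The CUTLASS grouped-GEMM bindings listed above batch many independent matmuls of differing shapes into one kernel launch, which is the core primitive behind per-expert computation in MoE layers. The reference below shows only what a grouped GEMM computes, not the bindings' actual API; the Python loop is exactly what the grouped kernel replaces.

```python
import torch

def grouped_gemm_reference(xs, ws):
    """What a grouped GEMM computes: one matmul per (x_i, w_i) pair,
    where each pair may have a different M dimension (e.g. tokens routed
    to each expert). A CUTLASS grouped kernel runs all of these in a
    single launch instead of one launch per pair."""
    return [x @ w for x, w in zip(xs, ws)]

# E.g. three "experts" with different token counts, shared hidden size.
xs = [torch.randn(m, 16) for m in (5, 11, 2)]
ws = [torch.randn(16, 32) for _ in range(3)]
ys = grouped_gemm_reference(xs, ws)
print([tuple(y.shape) for y in ys])  # [(5, 32), (11, 32), (2, 32)]
```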
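The fused linear + cross-entropy entry targets a memory problem: computing logits over a large vocabulary materializes an [N, vocab] tensor before the loss reduces it to a scalar. A chunked plain-PyTorch approximation of that saving is sketched below (the function name and chunk size are ours); the Triton version fuses the matmul and the loss at the kernel level rather than chunking in Python.

```python
import torch
import torch.nn.functional as F

def chunked_linear_xent(x, weight, targets, chunk=1024):
    """Compute mean cross-entropy of (x @ weight.t()) against targets
    without materializing the full [N, vocab] logits at once; only a
    [chunk, vocab] slice exists per step. Note: under plain autograd
    each chunk's logits are still saved for backward, so the full win
    needs kernel fusion or recomputation, as in the Triton repo above."""
    losses = []
    for i in range(0, x.shape[0], chunk):
        logits = x[i:i+chunk] @ weight.t()  # [chunk, vocab]
        losses.append(F.cross_entropy(logits, targets[i:i+chunk],
                                      reduction="sum"))
    return torch.stack(losses).sum() / x.shape[0]

# Toy usage with a large vocabulary relative to the hidden size.
x = torch.randn(4096, 64)
w = torch.randn(50000, 64)  # [vocab, hidden]
t = torch.randint(0, 50000, (4096,))
loss = chunked_linear_xent(x, w, t)
```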