exists-forall / striped_attention
☆45 · Updated 2 years ago
Alternatives and similar repositories for striped_attention
Users interested in striped_attention are comparing it to the libraries listed below.
- Triton-based Symmetric Memory operators and examples ☆74 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆140 · Updated 7 months ago
- extensible collectives library in triton ☆92 · Updated 9 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆91 · Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆260 · Updated 3 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆152 · Updated last month
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆124 · Updated last year
- Estimate MFU for DeepSeekV3 ☆26 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- Ship correct and fast LLM kernels to PyTorch ☆132 · Updated last week
- Python package for rematerialization-aware gradient checkpointing ☆27 · Updated 2 years ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆121 · Updated this week
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆124 · Updated last year
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆80 · Updated 4 months ago
- Autonomous GPU Kernel Generation via Deep Agents ☆217 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆104 · Updated 6 months ago
- Framework to reduce autotune overhead to zero for well known deployments. ☆92 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal sketch of this idea appears after this list) ☆105 · Updated 7 years ago
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆109 · Updated 7 months ago
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆52 · Updated 5 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆231 · Updated 2 years ago
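
The "Online normalizer calculation for softmax" entry above refers to the single-pass softmax trick from Milakov and Gimelshein: keep a running maximum and a running sum of exponentials, rescaling the sum whenever the maximum grows, so the normalizer is computed in one sweep over the data. The snippet below is a minimal illustrative Python sketch of that recurrence, not code taken from the benchmark repository itself.

```python
import math

def online_softmax(xs):
    """Single-pass softmax: track the running max and a running sum of
    exponentials, rescaling the sum whenever the max increases."""
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running sum of exp(x_i - m)
    for x in xs:
        m_new = max(m, x)
        # rescale the old sum to the new max, then add the current term
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # ≈ [0.0900, 0.2447, 0.6652]
```

The same max/sum rescaling recurrence is what FlashAttention-style kernels, including ring/striped attention variants, use to fuse softmax normalization with blockwise attention computation.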