exists-forall/striped_attention
☆41 · Updated last year
Alternatives and similar repositories for striped_attention
Users interested in striped_attention are comparing it to the libraries listed below.
- ☆112 · Updated last year
- Extensible collectives library in Triton ☆90 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆84 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆125 · Updated 5 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆216 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆246 · Updated 3 weeks ago
- Framework to reduce autotune overhead to zero for well-known deployments ☆84 · Updated last month
- ☆71 · Updated 7 months ago
- ☆75 · Updated 4 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Triton-based symmetric memory operators and examples ☆48 · Updated last week
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆87 · Updated 4 months ago
- (NeurIPS 2022) Automatically finds good model-parallel strategies, especially for complex models and clusters ☆41 · Updated 2 years ago
- ☆158 · Updated 2 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆120 · Updated 11 months ago
- Utility scripts for PyTorch (e.g. a memory profiler that understands more low-level allocations such as NCCL) ☆62 · Updated last month
- ☆28 · Updated 9 months ago
- TileFusion: an experimental C++ macro-kernel template library that raises the abstraction level of CUDA C for tile processing ☆100 · Updated 4 months ago
- ☆93 · Updated 11 months ago
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- How to ensure correctness and ship LLM-generated kernels in PyTorch ☆107 · Updated last week
- Effective transpose on Hopper GPUs ☆25 · Updated last month
- ☆82 · Updated 9 months ago
- ☆87 · Updated 3 years ago
- ☆22 · Updated last year
- Estimate MFU for DeepSeekV3 ☆26 · Updated 9 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆221 · Updated 2 years ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆117 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆63 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models, leveraging PyTorch native components ☆215 · Updated last week