PyTorch implementation of the Flash Spectral Transform Unit.
☆22 · Sep 19, 2024 · Updated last year
Alternatives and similar repositories for flash-stu
Users interested in flash-stu are comparing it to the libraries listed below.
- Benchmark tests supporting the TiledCUDA library ☆18 · Nov 19, 2024 · Updated last year
- ☆35 · Apr 12, 2024 · Updated 2 years ago
- ☆12 · Mar 7, 2022 · Updated 4 years ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- ☆33 · Oct 4, 2024 · Updated last year
- ☆22 · May 5, 2025 · Updated 11 months ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) ☆29 · Apr 17, 2024 · Updated 2 years ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · May 8, 2025 · Updated 11 months ago
- Code and data for the paper "(How) Do Language Models Track State?" ☆22 · Mar 31, 2025 · Updated last year
- An easy-to-extend framework for understanding and optimizing CUDA operators, intended for learning use only ☆18 · Jun 13, 2024 · Updated last year
- Notes for CIS 700 (Fall '19) at Syracuse U. ☆13 · Nov 6, 2019 · Updated 6 years ago
- ☆20 · Dec 24, 2024 · Updated last year
- ☆40 · Dec 14, 2025 · Updated 4 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- Full-spectrum sheaf neural network over arbitrary CW complexes ☆16 · Apr 1, 2026 · Updated 2 weeks ago
- It's a baby compiler. (Lean btw.) ☆16 · May 19, 2025 · Updated 11 months ago
- A GPU FP32 computation method with Tensor Cores ☆26 · Dec 8, 2025 · Updated 4 months ago
- Accelerated First Order Parallel Associative Scan ☆198 · Jan 7, 2026 · Updated 3 months ago
- ☆30 · Oct 3, 2022 · Updated 3 years ago
- SMT-LIB benchmarks for shape computations from deep learning models in PyTorch ☆18 · Dec 21, 2022 · Updated 3 years ago
- An experimental communicating attention kernel based on DeepEP ☆35 · Jul 29, 2025 · Updated 8 months ago
- CUDA implementation of the RTXX algorithm for multiplying a matrix by its transpose, X^T X ☆19 · Jun 9, 2025 · Updated 10 months ago
- Official code for UnICORNN (ICML 2021) ☆27 · Oct 1, 2021 · Updated 4 years ago
- KernelBench v2: Can LLMs Write GPU Kernels? Benchmark with Torch -> Triton (and more!) problems ☆23 · Jul 4, 2025 · Updated 9 months ago
- Experiments on Multi-Head Latent Attention ☆101 · Aug 19, 2024 · Updated last year
- Sample codes using NVSHMEM on multi-GPU ☆30 · Jan 22, 2023 · Updated 3 years ago
- Implementation of the paper "RepVGG-GELAN: Enhanced GELAN with VGG-Style ConvNets for Brain Tumor Detection" ☆10 · Jul 19, 2025 · Updated 9 months ago
- FlashRNN: Fast RNN Kernels with I/O Awareness ☆179 · Oct 20, 2025 · Updated 6 months ago
- DeeperGEMM: crazy optimized version ☆86 · May 5, 2025 · Updated 11 months ago
- Official implementation of "Training-free Boost for Open-Vocabulary Object Detection with Confidence Aggregation" ☆13 · Apr 15, 2024 · Updated 2 years ago
- Experiments on the impact of depth in transformers and SSMs ☆41 · Oct 23, 2025 · Updated 5 months ago
- ☆11 · Jul 26, 2024 · Updated last year
- Transformers components, but in Triton ☆34 · May 9, 2025 · Updated 11 months ago
- ☆52 · Apr 13, 2026 · Updated last week
- Awesome Triton Resources ☆39 · Apr 27, 2025 · Updated 11 months ago
- ☆11 · Oct 18, 2023 · Updated 2 years ago
- A sample of encrypting/decrypting ONNX models using pyca/cryptography ☆16 · Mar 19, 2022 · Updated 4 years ago
- Implementation of an attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k ☆47 · Jul 16, 2023 · Updated 2 years ago
- Code for Draft Attention ☆102 · May 22, 2025 · Updated 10 months ago