Transformers components, but in Triton
☆34 · May 9, 2025 · Updated 9 months ago
Alternatives and similar repositories for Triformer
Users interested in Triformer are comparing it to the libraries listed below.
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- TiledLower is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Nov 23, 2024 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Mar 18, 2023 · Updated 2 years ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆97 · Sep 19, 2025 · Updated 5 months ago
- A bunch of kernels that might make stuff slower 😉 ☆75 · Feb 18, 2026 · Updated 2 weeks ago
- ☆33 · Oct 4, 2024 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆15 · Jan 23, 2024 · Updated 2 years ago
- ☆18 · Mar 10, 2023 · Updated 2 years ago
- A llama model inference framework implemented in CUDA C++ ☆64 · Nov 8, 2024 · Updated last year
- ☆105 · Nov 7, 2024 · Updated last year
- mHC-lite: You Don’t Need 20 Sinkhorn-Knopp Iterations ☆70 · Jan 12, 2026 · Updated last month
- Awesome Triton Resources ☆39 · Apr 27, 2025 · Updated 10 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · May 26, 2024 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Dec 2, 2023 · Updated 2 years ago
- ☆20 · Oct 11, 2023 · Updated 2 years ago
- TiledKernel is a code generation library based on macro kernels and a memory hierarchy graph data structure. ☆19 · May 12, 2024 · Updated last year
- Cataloging released Triton kernels. ☆295 · Sep 9, 2025 · Updated 5 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆194 · Jan 28, 2025 · Updated last year
- Teaching Pretrained Language Models to Think Deeper with Retrofitted Recurrence ☆58 · Nov 11, 2025 · Updated 3 months ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆54 · Jan 12, 2026 · Updated last month
- Implement Flash Attention using Cute. ☆101 · Dec 17, 2024 · Updated last year
- Statistical discontinuous constituent parsing ☆11 · Feb 15, 2018 · Updated 8 years ago
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆237 · Jun 15, 2025 · Updated 8 months ago
- DeeperGEMM: crazy optimized version ☆74 · May 5, 2025 · Updated 9 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Oct 27, 2022 · Updated 3 years ago
- ☆12 · Jan 29, 2021 · Updated 5 years ago
- PyTorch implementation of PaLM: A Hybrid Parser and Language Model. ☆10 · Jan 7, 2020 · Updated 6 years ago
- Source code for the NAACL 2022 main-conference paper "Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank HMMs and PCFGs" ☆10 · Sep 26, 2022 · Updated 3 years ago
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Repository for SPECTRA: Sparse Structured Text Rationalization, accepted at the EMNLP 2021 main conference. ☆10 · Feb 14, 2024 · Updated 2 years ago
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation ☆10 · May 19, 2025 · Updated 9 months ago
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts" ☆10 · Jul 1, 2024 · Updated last year
- A simple API to use CUPTI ☆11 · Aug 19, 2025 · Updated 6 months ago
- Implementation of Hyena Hierarchy in JAX ☆10 · Apr 30, 2023 · Updated 2 years ago
- [ICLR'25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… ☆17 · Mar 21, 2025 · Updated 11 months ago
- ☆11 · Dec 22, 2024 · Updated last year