AlibabaPAI / FLASHNN
☆87 · Updated 6 months ago
Alternatives and similar repositories for FLASHNN:
Users interested in FLASHNN are comparing it to the libraries listed below.
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library ☆60 · Updated 7 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 · Updated 3 weeks ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆89 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆74 · Updated 4 months ago
- ☆46 · Updated 2 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance (see the WMMA sketch after this list). ☆59 · Updated 2 weeks ago
- An implementation of Flash Attention using CuTe. ☆71 · Updated 3 months ago
- ☆55 · Updated 2 months ago
- ☆87 · Updated last week
- Standalone Flash Attention v2 kernel without libtorch dependency ☆106 · Updated 6 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆178 · Updated last month
- ☆139 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆108 · Updated 2 months ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline. ☆100 · Updated 8 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the online-softmax sketch after this list) ☆85 · Updated 6 years ago
- ☆191 · Updated 8 months ago
- ☆87 · Updated last year
- ☆74 · Updated 3 months ago
- ☆145 · Updated 2 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆66 · Updated 9 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆108 · Updated last week
- High-performance Transformer implementation in C++. ☆109 · Updated 2 months ago
- LLaMA INT4 CUDA inference with AWQ ☆53 · Updated 2 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆145 · Updated last month
- ☆64 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆104 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆239 · Updated 4 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 7 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 4 months ago
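
For context on the WMMA-based HGEMM entry above, here is a minimal sketch of the technique: one warp computes one 16×16 tile of C = A·B through Tensor Core fragments, following the pattern of NVIDIA's simple WMMA sample. It is not code from any repository listed here; the kernel name, tile size, and layout choices are illustrative assumptions.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes one 16x16 tile of C (row-major, MxN) as
// A (row-major, MxK) times B (col-major, KxN).
// Assumes M, N, K are multiples of 16.
__global__ void wmma_hgemm(const half* A, const half* B, float* C,
                           int M, int N, int K) {
  // Each warp owns one output tile; blockDim.x holds whole warps.
  int warpM = (blockIdx.x * blockDim.x + threadIdx.x) / warpSize;
  int warpN = blockIdx.y * blockDim.y + threadIdx.y;
  if (warpM * 16 >= M || warpN * 16 >= N) return;

  wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
  wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
  wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
  wmma::fill_fragment(acc, 0.0f);

  // March along K, accumulating 16x16x16 Tensor Core MMAs.
  for (int k = 0; k < K; k += 16) {
    wmma::load_matrix_sync(a, A + warpM * 16 * K + k, K);  // A tile
    wmma::load_matrix_sync(b, B + warpN * 16 * K + k, K);  // B tile
    wmma::mma_sync(acc, a, b, acc);
  }
  wmma::store_matrix_sync(C + warpM * 16 * N + warpN * 16, acc, N,
                          wmma::mem_row_major);
}
```

The repositories above go much further (shared-memory staging, swizzling, CuTe layouts); this sketch only shows the core fragment/mma/store loop.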
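
Likewise, the online-softmax benchmark entry refers to the single-pass recurrence from "Online normalizer calculation for softmax" (Milakov & Gimelshein, 2018). A minimal host-side sketch of that recurrence, not the benchmark's CUDA code: the running maximum m and normalizer d are updated together, with d rescaled whenever m grows.

```cuda
#include <cmath>
#include <cstdio>
#include <vector>

// Online softmax: one pass maintains the running max m and the running
// normalizer d = sum_j exp(x_j - m), rescaling d whenever m increases.
void online_softmax(const std::vector<float>& x, std::vector<float>& y) {
  float m = -INFINITY;  // running maximum
  float d = 0.0f;       // running normalizer, relative to m
  for (float v : x) {
    float m_new = std::fmax(m, v);
    d = d * std::exp(m - m_new) + std::exp(v - m_new);
    m = m_new;
  }
  for (size_t i = 0; i < x.size(); ++i) y[i] = std::exp(x[i] - m) / d;
}

int main() {
  std::vector<float> x = {1.0f, 2.0f, 3.0f, 4.0f}, y(x.size());
  online_softmax(x, y);
  for (float v : y) printf("%.4f ", v);  // 0.0321 0.0871 0.2369 0.6439
  printf("\n");
}
```

Fusing the max and normalizer into one pass is what lets flash-attention-style kernels process keys block by block without ever materializing the full row of scores.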