Libraries-Openly-Fused / FusedKernelLibraryLinks
We aim to redefine the portability, performance, programmability, and maintainability of data-parallel libraries by using standard C++ features instead of creating new compilers.
☆46 · Updated this week
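To illustrate the approach, here is a minimal, self-contained sketch of kernel fusion using only standard C++ features. This is not the FusedKernelLibrary API; the `Fused`, `fused_for_each`, `Scale`, `Bias`, and `Relu` names are hypothetical, invented for this example. A C++17 fold expression composes the operations so the compiler inlines the whole pipeline into a single loop over the data, with no intermediate buffers and no custom compiler.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical sketch: compose element-wise operations at compile time so the
// compiler inlines them into one traversal (one "fused kernel" pass over the
// data). None of these names come from FusedKernelLibrary itself.
template <typename... Ops>
struct Fused {
    // Apply each operation in sequence to a single element.
    template <typename T>
    static T apply(T x) {
        ((x = Ops{}(x)), ...);  // C++17 fold over the operation pack
        return x;
    }
};

struct Scale { float operator()(float x) const { return x * 2.0f; } };
struct Bias  { float operator()(float x) const { return x + 1.0f; } };
struct Relu  { float operator()(float x) const { return x > 0.0f ? x : 0.0f; } };

// One pass over the data applies the whole fused pipeline: no intermediate
// buffers, no extra passes, and no new compiler -- just template inlining.
template <typename Pipeline>
void fused_for_each(std::vector<float>& data) {
    for (auto& x : data) x = Pipeline::apply(x);
}

int main() {
    std::vector<float> v{-1.0f, 0.5f, 2.0f};
    fused_for_each<Fused<Scale, Bias, Relu>>(v);
    for (float x : v) std::cout << x << ' ';  // prints: 0 2 5
}
```

The same composition idea extends to GPU kernels, where fusing several memory-bound element-wise operations lets them share a single global-memory read and write per element instead of one round trip per operation.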
Alternatives and similar repositories for FusedKernelLibrary
Users interested in FusedKernelLibrary are comparing it to the libraries listed below.
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆189 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆105 · Updated 7 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆48 · Updated 5 months ago
- Fast and Furious AMD Kernels ☆346 · Updated last week
- Evaluating Large Language Models for CUDA Code Generation: ComputeEval is a framework designed to generate and evaluate CUDA code from Large Language Models ☆94 · Updated 3 weeks ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆75 · Updated 2 months ago
- ☆53 · Updated 8 months ago
- ☆87 · Updated last week
- ☆117 · Updated 8 months ago
- ☆59 · Updated this week
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆63 · Updated last week
- Ship correct and fast LLM kernels to PyTorch ☆139 · Updated 2 weeks ago
- Extensible collectives library in Triton ☆93 · Updated 10 months ago
- SYCL implementation of Fused MLPs for Intel GPUs ☆51 · Updated 2 months ago
- Helpful kernel tutorials and examples for tile-based GPU programming ☆617 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments ☆94 · Updated 4 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆163 · Updated 2 months ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆164 · Updated last week
- ☆23 · Updated 6 months ago
- ☆71 · Updated 10 months ago
- An LLM-based AI agent that automatically writes correct and efficient GPU kernels ☆58 · Updated this week
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated last week
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 2 months ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆85 · Updated this week
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆194 · Updated last week
- ☆117 · Updated 3 weeks ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆46 · Updated 7 months ago