Libraries-Openly-Fused / FusedKernelLibrary
We aim to redefine Data Parallel libraries' portability, performance, programmability, and maintainability by using C++ standard features instead of creating new compilers.
☆46 · Updated last week
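To make that one-line pitch concrete, here is a minimal, hypothetical sketch of the general technique — fusing element-wise operations at compile time with standard C++ templates so one pass (one kernel, on a GPU) applies the whole pipeline. This is not the FusedKernelLibrary API; the `Scale`, `Offset`, `apply_fused`, and `transform` names are invented for illustration.

```cpp
// Illustrative sketch of compile-time operation fusion in plain C++ (not the
// FusedKernelLibrary API). Operations are composed as a template parameter pack,
// so the whole pipeline runs in a single loop with no intermediate buffers.
#include <array>
#include <cstddef>
#include <cstdio>

// Hypothetical element-wise operations.
struct Scale  { float factor; float operator()(float x) const { return x * factor; } };
struct Offset { float bias;   float operator()(float x) const { return x + bias; } };

// Apply every operation in the pack to one element, back to back.
// The fold expression expands at compile time, so there is nothing to interpret at runtime.
template <typename... Ops>
constexpr float apply_fused(float x, const Ops&... ops) {
    ((x = ops(x)), ...);
    return x;
}

// One pass over the data applies the entire fused pipeline; in a GPU version,
// this loop body would be the work of a single thread in a single fused kernel.
template <std::size_t N, typename... Ops>
void transform(std::array<float, N>& data, const Ops&... ops) {
    for (float& v : data) v = apply_fused(v, ops...);
}

int main() {
    std::array<float, 4> data{1.f, 2.f, 3.f, 4.f};
    transform(data, Scale{2.f}, Offset{0.5f});  // fused: (x * 2) + 0.5 in one pass
    for (float v : data) std::printf("%g ", v);
    std::printf("\n");
}
```

Because the composition happens at compile time, the pipeline needs no intermediate arrays and no extra kernel launches, which is the portability and performance argument behind doing fusion with standard C++ features rather than a new compiler.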
Alternatives and similar repositories for FusedKernelLibrary
Users interested in FusedKernelLibrary are comparing it to the libraries listed below
- High-Performance FP32 GEMM on CUDA devices ☆117 · Updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆49 · Updated 5 months ago
- LLM training in simple, raw C/CUDA ☆112 · Updated last year
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆96 · Updated 3 weeks ago
- Fast and Furious AMD Kernels ☆348 · Updated 2 weeks ago
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆194 · Updated this week
- ☆117 · Updated 8 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Updated 7 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆70 · Updated 2 months ago
- ☆53 · Updated 9 months ago
- Ship correct and fast LLM kernels to PyTorch ☆140 · Updated 3 weeks ago
- ☆23 · Updated 6 months ago
- ☆118 · Updated 3 weeks ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆63 · Updated 2 weeks ago
- ☆91 · Updated this week
- Hand-Rolled GPU communications library ☆81 · Updated 2 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆95 · Updated 4 months ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆68 · Updated this week
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆85 · Updated last week
- ☆38 · Updated last year
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆75 · Updated this week
- Extensible collectives library in Triton ☆95 · Updated 10 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆165 · Updated 2 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- An LLM-based AI agent that automatically writes correct and efficient GPU kernels. ☆60 · Updated this week
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Updated 7 months ago
- ☆102 · Updated last year
- A dynamic binary instrumentation tool for tracing and analyzing CUDA kernel instructions. ☆27 · Updated last week
- CUDA-L2: Surpassing cuBLAS Performance for Matrix Multiplication through Reinforcement Learning ☆417 · Updated 3 weeks ago