intel / tiny-dpcpp-nn
SYCL implementation of Fused MLPs for Intel GPUs
☆47 · Updated 3 weeks ago
Alternatives and similar repositories for tiny-dpcpp-nn
Users interested in tiny-dpcpp-nn are comparing it to the libraries listed below.
- High-Performance SGEMM on CUDA devices ☆95 · Updated 5 months ago
- JaxPP is a library for JAX that enables flexible MPMD pipeline parallelism for large-scale LLM training ☆49 · Updated last month
- Reference Kernels for the Leaderboard ☆60 · Updated last week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆90 · Updated 2 weeks ago
- TritonParse is a tool designed to help developers analyze and debug Triton kernels by visualizing the compilation process and source code… ☆93 · Updated last week
- A CUTLASS implementation using SYCL ☆27 · Updated this week
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆50 · Updated last week
- Patch convolution to avoid large GPU memory usage of Conv2D ☆88 · Updated 5 months ago
- ☆46 · Updated this week
- LLM training in simple, raw C/CUDA ☆99 · Updated last year
- ☆16 · Updated 9 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 10 months ago
- A block-oriented training approach for inference-time optimization. ☆33 · Updated 10 months ago
- ☆88 · Updated last year
- Extensible collectives library in Triton ☆86 · Updated 2 months ago
- ☆216 · Updated 3 weeks ago
- End-to-end steps for adding custom ops in PyTorch. ☆23 · Updated 4 years ago
- A Quirky Assortment of CuTe Kernels ☆117 · Updated this week
- Ahead-of-Time (AOT) Triton Math Library ☆66 · Updated last week
- ☆50 · Updated last year
- ☆47 · Updated 3 weeks ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆43 · Updated 3 months ago
- ☆32 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆166 · Updated this week
- An extension library of the WMMA API (Tensor Core API) ☆99 · Updated 11 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆167 · Updated this week
- A tool for generating information about the matrix multiplication instructions in AMD Radeon™ and AMD Instinct™ accelerators ☆98 · Updated last month
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆109 · Updated 11 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆91 · Updated 2 weeks ago
- ☆71 · Updated last month