nod-ai / techtalks
☆ 16 · Updated 9 months ago
Alternatives and similar repositories for techtalks:
Users interested in techtalks are comparing it to the libraries listed below.
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline. ☆ 94 · Updated 6 months ago
- ☆ 180 · Updated 6 months ago
- ☆ 48 · Updated 10 months ago
- ☆ 24 · Updated 2 weeks ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆ 62 · Updated 10 months ago
- Extensible collectives library in Triton ☆ 77 · Updated 4 months ago
- Stores documents and resources used by the OpenXLA developer community ☆ 114 · Updated 5 months ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM ☆ 56 · Updated 4 months ago
- ☆ 58 · Updated 8 months ago
- ☆ 67 · Updated last month
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆ 38 · Updated 8 months ago
- MLIR-based partitioning system ☆ 58 · Updated this week
- Fastest kernels written from scratch ☆ 131 · Updated 2 months ago
- ☆ 64 · Updated 2 months ago
- Benchmarks to capture important workloads. ☆ 29 · Updated this week
- Unified compiler/runtime for interfacing with PyTorch Dynamo. ☆ 99 · Updated this week
- ☆ 34 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆ 157 · Updated this week
- Fast sparse deep learning on CPUs ☆ 52 · Updated 2 years ago
- Collection of kernels written in the Triton language ☆ 91 · Updated 3 months ago
- An extension library of the WMMA API (Tensor Core API) ☆ 87 · Updated 6 months ago
- Shared Middle-Layer for Triton Compilation ☆ 220 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆ 64 · Updated 6 years ago
- Fast low-bit matmul kernels in Triton ☆ 199 · Updated last week
- ☆ 36 · Updated last month
- ☆ 15 · Updated 4 months ago
- A language and compiler for irregular tensor programs. ☆ 134 · Updated 2 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆ 133 · Updated last year
- CUDA Matrix Multiplication Optimization ☆ 155 · Updated 6 months ago
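One entry above benchmarks the "Online normalizer calculation for softmax" paper, which computes the softmax normalizer and running maximum in a single pass instead of two. As context, here is a minimal Python sketch of that single-pass idea (the function name and code are illustrative, not taken from any listed repository):

```python
import math

def online_softmax(xs):
    """Single-pass softmax: track a running max `m` and a running
    denominator `d`, rescaling `d` whenever a new max appears."""
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running sum of exp(x - m)
    for x in xs:
        m_new = max(m, x)
        # Rescale the old denominator to the new max, then add this term.
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # Final normalization uses the global max and denominator.
    return [math.exp(x - m) / d for x in xs]
```

The rescaling step is what lets the loop fuse the max-reduction and sum-reduction passes, which is also the trick reused by fused-attention kernels.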