nod-ai / techtalks
☆16, updated last year
Alternatives and similar repositories for techtalks
Users interested in techtalks are comparing it to the libraries listed below.
- ☆50, updated last year
- An experimental CPU backend for Triton (https://github.com/openai/triton) · ☆43, updated 2 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline · ☆109, updated 10 months ago
- ☆28, updated 4 months ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM · ☆57, updated 2 months ago
- MLIR-based partitioning system · ☆86, updated this week
- A lightweight, Pythonic frontend for MLIR · ☆81, updated last year
- TileFusion is an experimental C++ macro kernel template library that raises the abstraction level of CUDA C for tile processing · ☆88, updated last week
- Explore training for quantized models · ☆18, updated last week
- Benchmarks to capture important workloads · ☆31, updated 4 months ago
- ☆96, updated last year
- Training neural networks in TensorFlow 2.0 with 5x less memory · ☆131, updated 3 years ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning · ☆138, updated 2 years ago
- Ahead-of-Time (AOT) Triton Math Library · ☆64, updated last week
- An extension library of the WMMA API (Tensor Core API) · ☆97, updated 10 months ago
- A language and compiler for irregular tensor programs · ☆138, updated 6 months ago
- Unified compiler/runtime for interfacing with PyTorch Dynamo · ☆100, updated 2 weeks ago
- Memory Optimizations for Deep Learning (ICML 2023) · ☆64, updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate · ☆153, updated this week
- ☆208, updated 10 months ago
- Extensible collectives library in Triton · ☆87, updated 2 months ago
- Stores documents and resources used by the OpenXLA developer community · ☆122, updated 10 months ago
- Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA cores (see the sketch after this list) · ☆62, updated 8 months ago
- ☆16, updated 8 months ago
- A lightweight MLIR Python frontend with support for PyTorch · ☆23, updated 9 months ago
- An IR for efficiently simulating distributed ML computation · ☆28, updated last year
- End-to-end steps for adding custom ops in PyTorch · ☆23, updated 4 years ago
- ☆110, updated 3 weeks ago
- GPU Performance Advisor · ☆65, updated 2 years ago
- A CUTLASS implementation using SYCL · ☆23, updated last week
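
Several of the entries above (the GEMV and HGEMV kernel libraries) revolve around the same core operation: a matrix-vector product with fp16 inputs and fp32 accumulation on CUDA cores. The sketch below is a minimal, illustrative warp-per-row HGEMV kernel; the kernel name and launch mapping are assumptions for illustration and are not taken from any of the listed repositories.

```cuda
// Minimal sketch of a half-precision GEMV (y = A * x) on CUDA cores.
// Illustrative only; not the implementation used by any repo listed above.
#include <cuda_fp16.h>

// One warp computes one row of y: each lane accumulates a strided slice of
// the dot product in fp32, then the warp reduces with register shuffles.
// Launch with blockDim.x as a multiple of 32, e.g. 128 threads (4 warps),
// and gridDim.x = ceil(M / warps_per_block).
__global__ void hgemv_warp_per_row(const half* A, const half* x, half* y,
                                   int M, int N) {
    int warps_per_block = blockDim.x / 32;
    int row  = blockIdx.x * warps_per_block + threadIdx.x / 32;
    int lane = threadIdx.x % 32;
    if (row >= M) return;

    float acc = 0.0f;
    for (int col = lane; col < N; col += 32)
        acc += __half2float(A[row * N + col]) * __half2float(x[col]);

    // Warp-level tree reduction of the partial sums.
    for (int offset = 16; offset > 0; offset >>= 1)
        acc += __shfl_down_sync(0xffffffff, acc, offset);

    if (lane == 0) y[row] = __float2half(acc);
}
```

A warp-per-row mapping keeps the dot-product reduction inside a single warp, so the final sum needs only register shuffles rather than shared memory; the optimized libraries above layer further tricks (vectorized loads, multiple rows per warp, tensor-core variants) on top of this basic pattern.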