jansel / pytorch-jit-paritybench
☆40 · Updated 6 months ago
Alternatives and similar repositories for pytorch-jit-paritybench
Users interested in pytorch-jit-paritybench are comparing it to the libraries listed below:
- ☆50 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning · ☆138 · Updated 2 years ago
- MLIR-based partitioning system · ☆91 · Updated this week
- ☆144 · Updated 4 months ago
- A sandbox for quick iteration and experimentation on projects related to IREE, MLIR, and LLVM · ☆58 · Updated 3 months ago
- System for automated integration of deep learning backends. · ☆47 · Updated 2 years ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. · ☆90 · Updated 2 weeks ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections · ☆121 · Updated 3 years ago
- A lightweight, Pythonic frontend for MLIR · ☆81 · Updated last year
- Extensible collectives library in Triton · ☆86 · Updated 2 months ago
- MatMul Performance Benchmarks for a Single CPU Core comparing both hand-engineered and codegen kernels. · ☆133 · Updated last year
- Benchmarks to capture important workloads. · ☆31 · Updated 4 months ago
- Memory Optimizations for Deep Learning (ICML 2023) · ☆64 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. · ☆166 · Updated this week
- Unified compiler/runtime for interfacing with PyTorch Dynamo. · ☆100 · Updated last month
- oneCCL Bindings for PyTorch* · ☆97 · Updated 2 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … · ☆65 · Updated 3 years ago
- Codebase associated with the PyTorch compiler tutorial · ☆46 · Updated 5 years ago
- ☆16 · Updated 9 months ago
- Benchmark scripts for TVM · ☆74 · Updated 3 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory · ☆131 · Updated 3 years ago
- Shared Middle-Layer for Triton Compilation · ☆255 · Updated this week
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆136 · Updated this week
- DietCode Code Release · ☆64 · Updated 2 years ago
- Tensors and Dynamic neural networks in Python with strong GPU acceleration · ☆26 · Updated 2 years ago
- ☆90 · Updated 5 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) · ☆43 · Updated 3 months ago
- ☆69 · Updated 2 years ago
- Repository for SysML19 Artifacts Evaluation · ☆54 · Updated 6 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. · ☆110 · Updated 6 months ago