nod-ai / transformer-benchmarks
benchmarking some transformer deployments
⭐26 · Updated 2 years ago
Alternatives and similar repositories for transformer-benchmarks
Users interested in transformer-benchmarks are comparing it to the libraries listed below.
- Benchmarks to capture important workloads. ⭐31 · Updated 4 months ago
- Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … ⭐106 · Updated 5 months ago
- A tracing JIT compiler for PyTorch ⭐13 · Updated 3 years ago
- Distributed ML Optimizer ⭐32 · Updated 3 years ago
- Fast sparse deep learning on CPUs ⭐53 · Updated 2 years ago
- Torch Distributed Experimental ⭐116 · Updated 10 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ⭐116 · Updated 6 months ago
- Make triton easier ⭐46 · Updated last year
- A library for syntactically rewriting Python programs, pronounced (sinner). ⭐69 · Updated 3 years ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ⭐65 · Updated 3 years ago
- Home for OctoML PyTorch Profiler ⭐113 · Updated 2 years ago
- A collection of reproducible inference engine benchmarks ⭐31 · Updated 2 months ago
- A basic Docker-based installation of TVM ⭐11 · Updated 3 years ago
- Notes and artifacts from the ONNX steering committee ⭐26 · Updated 2 weeks ago
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark. ⭐56 · Updated 2 years ago
- Customized matrix multiplication kernels ⭐56 · Updated 3 years ago
- Distributed preprocessing and data loading for language datasets ⭐39 · Updated last year
- MLPerf™ logging library ⭐36 · Updated 2 months ago
- ⭐72 · Updated 2 months ago
- Development repository for integrating FlexFlow (A distributed deep learning framework that supports flexible parallelization strategies)… ⭐29 · Updated 3 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ⭐45 · Updated 11 months ago
- ⭐50 · Updated last year
- ⭐14 · Updated last year
- SParse AcceleRation on Tensor Architecture ⭐17 · Updated 2 months ago
- PyTorch RFCs (experimental) ⭐132 · Updated 3 weeks ago
- ⭐28 · Updated 5 months ago
- ⭐52 · Updated 10 months ago
- WIP. Veloce is a low-code Ray-based parallelization library that makes machine learning computation novel, efficient, and heterogeneous. ⭐18 · Updated 2 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ⭐56 · Updated last week
- A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-… ⭐67 · Updated last year