FrancescoSaverioZuppichini / pytorch-2.0-benchmark
Benchmarking different models with PyTorch 2.0
☆21 · Updated last year
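For context, a minimal sketch of the kind of measurement such a benchmark makes: timing a model in eager mode against the same model wrapped in `torch.compile`, PyTorch 2.0's compiler entry point. The model choice (`resnet50`), batch size, and iteration counts below are illustrative assumptions, not this repo's actual harness.

```python
# Hedged micro-benchmark sketch: eager vs. torch.compile latency.
import time

import torch
import torchvision.models as models

def benchmark(model, x, warmup=10, iters=50):
    """Return mean latency per forward pass in milliseconds."""
    with torch.no_grad():
        for _ in range(warmup):          # warm-up also triggers compilation
            model(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()     # flush queued GPU kernels
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e3

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(8, 3, 224, 224, device=device)
model = models.resnet50().to(device).eval()
compiled = torch.compile(model)          # PyTorch 2.0 compiler entry point

print(f"eager:    {benchmark(model, x):.2f} ms")
print(f"compiled: {benchmark(compiled, x):.2f} ms")
```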
Related projects
Alternatives and complementary repositories for pytorch-2.0-benchmark
- Memory Optimizations for Deep Learning (ICML 2023) ☆60 · Updated 8 months ago
- Hacks for PyTorch ☆17 · Updated last year
- Awesome Triton Resources ☆18 · Updated last month
- ☆17 · Updated 3 weeks ago
- Triton Implementation of HyperAttention Algorithm ☆46 · Updated 11 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆35 · Updated 4 months ago
- ☆20 · Updated last year
- ☆29 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings (a plain-PyTorch sketch of the technique appears after this list) ☆43 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last month
- Experiment of using Tangent to autodiff triton ☆72 · Updated 9 months ago
- Personal solutions to the Triton Puzzles ☆16 · Updated 4 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆20 · Updated last week
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆22 · Updated this week
- Prototype routines for GPU quantization written using PyTorch. ☆19 · Updated last week
- Unit Scaling demo and experimentation code ☆16 · Updated 8 months ago
- Experimental scripts for researching data adaptive learning rate scheduling. ☆23 · Updated last year
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated last year
- Here we will test various linear attention designs. ☆56 · Updated 6 months ago
- ☆36 · Updated last year
- ☆44 · Updated 11 months ago
- Implementation of a holodeck, written in PyTorch ☆17 · Updated last year
- Context manager to profile the forward and backward times of PyTorch's nn.Module (a generic hook-based sketch appears after this list) ☆83 · Updated last year
- Source-to-Source Debuggable Derivatives in Pure Python ☆14 · Updated 9 months ago
- ☆33 · Updated 5 months ago
- TensorRT LLM Benchmark Configuration ☆11 · Updated 3 months ago
- Code for the paper Deformable Butterfly: A Highly Structured and Sparse Linear Transform. ☆12 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
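Two of the entries above describe techniques concrete enough to sketch. For the autoregressive linear attention kernel, the following is a plain-PyTorch reference of the underlying idea (a positive feature map plus causal running sums). The `elu + 1` feature map is a common choice from the linear-transformers literature, not necessarily that repo's exact design, and a real CUDA kernel would avoid materializing the cumulative state.

```python
# Reference (non-CUDA) sketch of causal linear attention.
import torch
import torch.nn.functional as F

def causal_linear_attention(q, k, v, eps=1e-6):
    """q, k: (batch, seq, d); v: (batch, seq, e).

    O(seq * d * e) work instead of softmax attention's O(seq^2 * d).
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1    # positive feature map phi(x)
    # Running sums over the sequence give causal masking for free:
    #   S_t = sum_{i<=t} phi(k_i) v_i^T,   z_t = sum_{i<=t} phi(k_i)
    s = torch.cumsum(torch.einsum("bnd,bne->bnde", k, v), dim=1)
    z = torch.cumsum(k, dim=1)
    num = torch.einsum("bnd,bnde->bne", q, s)
    den = torch.einsum("bnd,bnd->bn", q, z).unsqueeze(-1)
    return num / (den + eps)

q = torch.randn(2, 16, 32)
k = torch.randn(2, 16, 32)
v = torch.randn(2, 16, 32)
out = causal_linear_attention(q, k, v)   # shape (2, 16, 32)
```

Note this reference materializes the full (batch, seq, d, e) state tensor for clarity; the point of a fused CUDA kernel is to carry that state in registers instead.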
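And for the forward/backward timing context manager, here is a generic hook-based sketch of how such a profiler can be built. The `profile_module` name and the returned dict are illustrative, not the linked repo's API, and on GPU you would add `torch.cuda.synchronize()` around the timestamps for accurate numbers.

```python
# Hypothetical sketch of a forward/backward timer built on PyTorch hooks.
import time
from contextlib import contextmanager

import torch
import torch.nn as nn

@contextmanager
def profile_module(module: nn.Module):
    """Time one forward and one backward pass through `module` via hooks."""
    timings = {}

    def pre_fw(mod, inputs):
        timings["_fw"] = time.perf_counter()

    def post_fw(mod, inputs, output):
        timings["forward"] = time.perf_counter() - timings.pop("_fw")

    def pre_bw(mod, grad_output):
        timings["_bw"] = time.perf_counter()

    def post_bw(mod, grad_input, grad_output):
        timings["backward"] = time.perf_counter() - timings.pop("_bw")

    handles = [
        module.register_forward_pre_hook(pre_fw),
        module.register_forward_hook(post_fw),
        module.register_full_backward_pre_hook(pre_bw),  # needs PyTorch >= 2.0
        module.register_full_backward_hook(post_bw),
    ]
    try:
        yield timings
    finally:
        for h in handles:                                # always detach hooks
            h.remove()

model = nn.Linear(128, 64)
with profile_module(model) as t:
    model(torch.randn(32, 128)).sum().backward()
print({k: f"{v * 1e3:.3f} ms" for k, v in t.items()})
```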