tjyuyao / cutex
PyCUDA based PyTorch Extension Made Easy
☆26 · Updated last year
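For context on what the alternatives below are compared against: cutex lets you embed a raw CUDA kernel as a string in Python and launch it directly on PyTorch tensors, in the spirit of PyCUDA's SourceModule. The sketch below is reconstructed from memory of the project's README and is only illustrative; the `SourceModule` name, the `Tensor<float, 2>` accessor type, the `float_bits` argument, and the `grid`/`block` launch keywords are assumptions that may not match the current cutex API.

```python
import torch
import cutex  # assumed import name; PyCUDA-based kernel launcher for torch tensors

M, N, K = 4, 4, 1
a = torch.rand((M, K), dtype=torch.float32).cuda()
b = torch.rand((K, N), dtype=torch.float32).cuda()
c = torch.empty((M, N), dtype=torch.float32).cuda()

# Kernel source is plain CUDA C++; the Tensor<float, 2> accessor and the
# float_bits argument are assumptions based on an older version of the README.
kernels = cutex.SourceModule(r"""
__global__ void matmul(Tensor<float, 2> a, Tensor<float, 2> b, Tensor<float, 2> c,
                       int M, int N, int K) {
    int m = blockIdx.y * blockDim.y + threadIdx.y;
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (m >= M || n >= N) return;
    float v = 0.f;
    for (int k = 0; k < K; ++k) v += a[m][k] * b[k][n];
    c[m][n] = v;
}
""", float_bits=32)

# Launch with an explicit grid/block, passing torch tensors as kernel arguments.
kernels.matmul(a, b, c, M, N, K,
               grid=((N + 15) // 16, (M + 15) // 16),
               block=(16, 16, 1))
torch.cuda.synchronize()
assert torch.allclose(c, a @ b)
```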
Alternatives and similar repositories for cutex
Users interested in cutex are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 support ☆27 · Updated last year
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- A block-oriented training approach for inference-time optimization. ☆34 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- ☆160 · Updated 2 years ago
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆30 · Updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 11 months ago
- A library for unit scaling in PyTorch ☆133 · Updated 6 months ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆50 · Updated 6 months ago
- Hacks for PyTorch ☆19 · Updated 2 years ago
- pytorch-profiler ☆50 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆46 · Updated 2 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆69 · Updated 3 weeks ago
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- ONNX Command-Line Toolbox ☆35 · Updated last year
- ☆29 · Updated 3 years ago
- ☆124 · Updated last year
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆32 · Updated 2 years ago
- ☆32 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆29 · Updated this week
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆69 · Updated 8 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆182 · Updated 3 weeks ago
- This repository contains code for the MicroAdam paper. ☆21 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- Model compression for ONNX ☆99 · Updated last year
- ☆21 · Updated 10 months ago
- PyTorch-centric eager mode debugger ☆48 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 10 months ago
- Torch Distributed Experimental ☆117 · Updated last year