tjyuyao / cutex
PyCUDA based PyTorch Extension Made Easy
☆26, updated last year
Alternatives and similar repositories for cutex
Users interested in cutex are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 support (☆27, updated 11 months ago)
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. (☆64, updated 5 months ago)
- Customized matrix multiplication kernels (☆56, updated 3 years ago)
- A block-oriented training approach for inference-time optimization. (☆34, updated last year)
- Prototype routines for GPU quantization written using PyTorch. (☆21, updated last month)
- Hacks for PyTorch (☆19, updated 2 years ago)
- [WIP] Better (FP8) attention for Hopper (☆33, updated 7 months ago)
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. (☆46, updated last year)
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) (☆31, updated 2 years ago)
- ☆159, updated 2 years ago
- Quantize transformers to any learned arbitrary 4-bit numeric format (☆48, updated 2 months ago)
- pytorch-profiler (☆51, updated 2 years ago)
- Patch convolution to avoid large GPU memory usage of Conv2D (☆92, updated 8 months ago)
- A bunch of kernels that might make stuff slower 😉 (☆59, updated this week)
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops (☆30, updated last year)
- PyTorch half-precision GEMM lib with fused optional bias + optional ReLU/GELU (☆73, updated 9 months ago)
- PyTorch implementation of the Flash Spectral Transform Unit. (☆18, updated last year)
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… (☆181, updated last month)
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… (☆24, updated 3 months ago)
- Code for the paper "Deformable Butterfly: A Highly Structured and Sparse Linear Transform". (☆13, updated 3 years ago)
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … (☆65, updated 3 years ago)
- CUDA implementation of autoregressive linear attention, with all the latest research findings (☆44, updated 2 years ago)
- Here we will test various linear attention designs. (☆62, updated last year)
- ☆32, updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs (☆60, updated last week)
- A library for unit scaling in PyTorch (☆130, updated 2 months ago)
- Experiment of using Tangent to autodiff Triton (☆81, updated last year)
- A code generator from ONNX to PyTorch code (☆141, updated 2 years ago)
- ☆98, updated 4 months ago
- PyTorch-centric eager-mode debugger (☆48, updated 9 months ago)