tjyuyao / cutex
PyCUDA based PyTorch Extension Made Easy
☆26 · Updated last year
Alternatives and similar repositories for cutex
Users interested in cutex are comparing it to the libraries listed below.
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- ☆159 · Updated 2 years ago
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆30 · Updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 10 months ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆49 · Updated 4 months ago
- ☆32 · Updated last year
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆68 · Updated 7 months ago
- pytorch-profiler ☆51 · Updated 2 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆182 · Updated 2 months ago
- A block-oriented training approach for inference-time optimization. ☆33 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆46 · Updated 2 years ago
- Prototype routines for GPU quantization written using PyTorch. ☆21 · Updated 3 months ago
- A library for unit scaling in PyTorch ☆132 · Updated 4 months ago
- Torch Distributed Experimental ☆117 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆25 · Updated last week
- Triton implementation of bi-directional (non-causal) linear attention ☆56 · Updated 9 months ago
- ☆113 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 4 months ago
- Hacks for PyTorch ☆19 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- Code for the paper "Deformable Butterfly: A Highly Structured and Sparse Linear Transform". ☆13 · Updated 4 years ago
- ☆57 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 9 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆122 · Updated 11 months ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆225 · Updated last year