tjyuyao / cutex
PyCUDA based PyTorch Extension Made Easy
☆24 · Updated last year
Alternatives and similar repositories for cutex:
Users interested in cutex are comparing it to the libraries listed below.
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆53 · Updated 3 weeks ago
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆28 · Updated last year
- FlexAttention w/ FlashAttention3 support ☆26 · Updated 5 months ago
- Customized matrix multiplication kernels ☆54 · Updated 3 years ago
- pytorch-profiler ☆51 · Updated last year
- ☆157 · Updated last year
- A block-oriented training approach for inference-time optimization. ☆32 · Updated 7 months ago
- ☆21 · Updated last year
- [Oral, NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆12 · Updated 2 weeks ago
- Code for the paper "Deformable Butterfly: A Highly Structured and Sparse Linear Transform". ☆12 · Updated 3 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated 8 months ago
- ONNX Command-Line Toolbox ☆35 · Updated 5 months ago
- A library for unit scaling in PyTorch ☆124 · Updated 4 months ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆23 · Updated last month
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆107 · Updated this week
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆30 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆26 · Updated last month
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated last year
- Dynamic Neural Architecture Search Toolkit ☆29 · Updated 3 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆84 · Updated 2 months ago
- High-speed GEMV kernels, with up to 2.7x speedup over the PyTorch baseline ☆102 · Updated 8 months ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- ☆46 · Updated last year
- Extensible collectives library in Triton ☆84 · Updated 6 months ago
- Repository for CPU kernel generation for LLM inference ☆25 · Updated last year
- Hacks for PyTorch ☆19 · Updated last year
- ☆21 · Updated 3 weeks ago
- Experiment of using Tangent to autodiff Triton ☆78 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆22 · Updated 9 months ago