lernapparat / torchhacks
Hacks for PyTorch
☆19 · Updated 2 years ago
Alternatives and similar repositories for torchhacks
Users interested in torchhacks are comparing it to the libraries listed below.
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- ☆29 · Updated 2 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated 2 weeks ago
- Torch Distributed Experimental ☆117 · Updated last year
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- MaskedTensors for PyTorch ☆38 · Updated 3 years ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- Context Manager to profile the forward and backward times of PyTorch's nn.Module ☆83 · Updated last year
- PyTorch centric eager mode debugger ☆47 · Updated 8 months ago
- An open source implementation of CLIP ☆32 · Updated 2 years ago
- Make triton easier ☆47 · Updated last year
- ☆21 · Updated 5 months ago
- TorchFix - a linter for PyTorch-using code with autofix support ☆144 · Updated this week
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆41 · Updated last month
- ☆75 · Updated 2 years ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 10 months ago
- Prototype routines for GPU quantization written using PyTorch ☆21 · Updated 3 weeks ago
- Little article showing how to load PyTorch's models with linear memory consumption ☆34 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Utilities for Training Very Large Models ☆58 · Updated 11 months ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated 2 years ago
- ☆87 · Updated last year
- Experimental scripts for researching data adaptive learning rate scheduling ☆23 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- ☆14 · Updated 3 months ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- Python pdb for multiple processes ☆52 · Updated 3 months ago