lernapparat / torchhacks
Hacks for PyTorch
☆19 · Updated last year
Alternatives and similar repositories for torchhacks:
Users interested in torchhacks are comparing it to the libraries listed below.
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated last year
- ☆29 · Updated 2 years ago
- Experiment of using Tangent to autodiff triton ☆78 · Updated last year
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- Make triton easier ☆47 · Updated 9 months ago
- ☆21 · Updated last month
- A place to store reusable transformer components of my own creation or found on the interwebs ☆48 · Updated last week
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- Experimental scripts for researching data adaptive learning rate scheduling. ☆23 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆45 · Updated 8 months ago
- ☆18 · Updated 2 years ago
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 4 months ago
- Memory-Efficient CUDA kernels for training ConvNets with PyTorch. ☆39 · Updated last month
- PyTorch centric eager mode debugger ☆46 · Updated 3 months ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated last year
- This repository hosts the code to port NumPy model weights of BiT-ResNets to TensorFlow SavedModel format. ☆14 · Updated 3 years ago
- Benchmarking PyTorch 2.0 different models ☆21 · Updated 2 years ago
- Implementation of a Light Recurrent Unit in Pytorch ☆47 · Updated 5 months ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ☆22 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 2 years ago
- A minimal TPU compatible Jax implementation of NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. ☆13 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago
- Local Attention - Flax module for Jax ☆20 · Updated 3 years ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆53 · Updated 3 weeks ago
- A collection of optimizers, some arcane others well known, for Flax. ☆29 · Updated 3 years ago
- Easily run PyTorch on multiple GPUs & machines ☆45 · Updated 2 weeks ago
- Prototype routines for GPU quantization written using PyTorch. ☆20 · Updated last month
- Utilities for PyTorch distributed ☆23 · Updated last month
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- JAX Implementation of Black Forest Labs' Flux.1 family of models ☆30 · Updated 5 months ago