lernapparat / torchhacks
Hacks for PyTorch
☆17 · Updated last year
Related projects
Alternatives and complementary repositories for torchhacks
- CUDA implementation of autoregressive linear attention, with all the latest research findings (☆43, updated last year)
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (☆35, updated 3 months ago)
- Experiment of using Tangent to autodiff Triton (☆71, updated 9 months ago)
- FID computation in Jax/Flax (☆24, updated 3 months ago)
- An open source implementation of CLIP (☆32, updated 2 years ago)
- JAX implementation of Learning to Learn by Gradient Descent by Gradient Descent (☆25, updated 3 weeks ago)
- Make Triton easier (☆41, updated 4 months ago)
- A scalable implementation of diffusion and flow matching with XGBoost models, applied to calorimeter data (☆17, updated this week)
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ (☆52, updated last year)
- Efficient CUDA kernels for training convolutional neural networks with PyTorch (☆33, updated last week)
- A dashboard for exploring timm learning rate schedulers (☆18, updated last year)
- Utilities for PyTorch distributed (☆23, updated last year)
- Implementation of LogAvgExp for PyTorch (☆32, updated 2 years ago)
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (☆49, updated 2 years ago)
- Another attempt at a long-context / efficient transformer by me (☆37, updated 2 years ago)
- Local Attention - Flax module for Jax (☆20, updated 3 years ago)
- An implementation of the Llama architecture, to instruct and delight (☆21, updated 2 months ago)
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k (☆22, updated last year)
- A minimal TPU-compatible Jax implementation of NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (☆13, updated 2 years ago)
- Source-to-Source Debuggable Derivatives in Pure Python (☆14, updated 9 months ago)
- Implementation of Token Shift GPT - an autoregressive model that relies solely on shifting the sequence space for mixing (☆47, updated 2 years ago)
- Layerwise Batch Entropy Regularization (☆22, updated 2 years ago)
- Transformer with Mu-Parameterization, implemented in Jax/Flax; supports FSDP on TPU pods (☆29, updated last week)
- A simple Transformer where the softmax has been replaced with normalization (☆18, updated 4 years ago)