betacord / PSI
☆12 · Updated last week
Alternatives and similar repositories for PSI
Users interested in PSI are comparing it to the libraries listed below.
- Kick-off repository for starting with Kaggle! ☆12 · Updated 10 months ago
- Minimal implementation of scalable rectified flow transformers, based on SD3's approach ☆611 · Updated last year
- ☆10 · Updated last year
- Efficient optimizers ☆269 · Updated last week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆290 · Updated 4 months ago
- Supporting PyTorch FSDP for optimizers ☆83 · Updated 10 months ago
- ☆27 · Updated 9 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆313 · Updated 3 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 9 months ago
- Train VAE like a boss ☆293 · Updated 11 months ago
- A simple implementation of Bayesian Flow Networks (BFN) ☆240 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆439 · Updated 5 months ago
- Text to Image Latent Diffusion using a Transformer core ☆209 · Updated last year
- ☆49 · Updated 7 months ago
- Rebuild the Stable Diffusion Model in a single python script. Tutorial for Harvard ML from Scratch Series ☆216 · Updated 8 months ago
- UNet diffusion model in pure CUDA ☆649 · Updated last year
- Huggingface-compatible SDXL Unet implementation that is readily hackable ☆426 · Updated 2 years ago
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆329 · Updated 9 months ago
- Official implementation of Würstchen: Efficient Pretraining of Text-to-Image Models ☆552 · Updated last year
- ☆30 · Updated 10 months ago
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆562 · Updated last year
- ☆268 · Updated this week
- WIP ☆93 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- Annotated version of the Mamba paper ☆489 · Updated last year
- Supporting code for the blog post on modular manifolds. ☆77 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆577 · Updated 2 months ago
- My annotated papers and meeting recordings for the EleutherAI ML Performance research paper reading group ☆21 · Updated 3 weeks ago
- Diffusion Reading Group at EleutherAI ☆324 · Updated 2 years ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file. Using logcumsumexp (Heisen sequence). ☆124 · Updated last year