titu1994 / simple_diffusion
Simple notebooks to learn diffusion models on toy datasets
☆17 · Updated 2 years ago
Alternatives and similar repositories for simple_diffusion
Users interested in simple_diffusion are comparing it to the repositories listed below.
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi…☆51Updated 3 years ago
- Authors' implementation of LieTransformer: Equivariant Self-Attention for Lie Groups ☆36 · Updated 4 years ago
- A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752) ☆22 · Updated 2 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated 2 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆98 · Updated 4 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated 2 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 4 years ago
- Implementation of LogAvgExp for Pytorch (a minimal sketch of the operation appears after this list) ☆37 · Updated 9 months ago
- Graph neural network message passing reframed as a Transformer with local attention ☆70 · Updated 3 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆73 · Updated 3 years ago
- ☆88 · Updated 2 years ago
- A convolution-free, transformer-only version of the CycleGAN framework ☆33 · Updated 3 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆57 · Updated 3 years ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆47 · Updated 2 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 5 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 5 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (see the sketch after this list) ☆49 · Updated 3 years ago
- ☆21 · Updated 4 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper ☆66 · Updated 3 years ago
- Local Attention - Flax module for Jax ☆22 · Updated 4 years ago
- A short article showing how to load PyTorch models with linear memory consumption ☆34 · Updated 3 years ago
- Implementation of Lie Transformer, Equivariant Self-Attention, in Pytorch ☆97 · Updated 4 years ago
- JAX implementation of Learning to learn by gradient descent by gradient descent ☆28 · Updated 5 months ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 3 years ago
- A PyTorch Dataset that caches samples in shared memory, accessible globally to all processes ☆23 · Updated 3 years ago
- PyTorch implementation of IRMAE (https://arxiv.org/abs/2010.00679) ☆48 · Updated 3 years ago
- An open source implementation of CLIP. ☆33 · Updated 3 years ago
- A project to improve out-of-distribution detection (open set recognition) and uncertainty estimation by changing a few lines of code in y… ☆44 · Updated 3 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch ☆103 · Updated 2 years ago
- Axial Positional Embedding for Pytorch ☆84 · Updated 11 months ago
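Several of the entries above wrap a single small operation. As a point of reference, here is a minimal sketch of the LogAvgExp operation named in the list; the `logavgexp` helper and its signature are illustrative assumptions, not the linked repo's API:

```python
import math
import torch

def logavgexp(x: torch.Tensor, dim: int = -1, keepdim: bool = False) -> torch.Tensor:
    """Numerically stable log(mean(exp(x))) along `dim`.

    Interpolates between max and mean pooling; identical to
    logsumexp(x) - log(n), where n is the size of `dim`.
    """
    n = x.shape[dim]
    return torch.logsumexp(x, dim=dim, keepdim=keepdim) - math.log(n)

# Sanity check against the naive (numerically unstable) formula on small values.
x = torch.randn(4, 8)
assert torch.allclose(logavgexp(x, dim=-1), x.exp().mean(dim=-1).log(), atol=1e-6)
```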
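Likewise, the ReLA entry refers to scaled dot-product attention with the softmax replaced by a ReLU. This is a rough sketch under assumed (batch, heads, seq, dim) shapes; the RMS-style normalization of the aggregated values follows the paper's general recipe, but the lack of a learned gain and the `eps` value are illustrative choices:

```python
import math
import torch
import torch.nn.functional as F

def rela_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Rectified linear attention: ReLU instead of softmax on the scores.

    q, k, v: (batch, heads, seq, dim). The ReLU leaves the attention
    weights sparse and unnormalized, so the output is RMS-normalized.
    """
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)  # (b, h, n, n)
    weights = F.relu(scores)                         # sparse, unnormalized
    out = weights @ v                                # (b, h, n, d)
    rms = out.pow(2).mean(dim=-1, keepdim=True).add(eps).rsqrt()
    return out * rms
```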