titu1994 / simple_diffusion
Simple notebooks to learn diffusion models on toy datasets
☆17 · Updated 2 years ago
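The repository's notebooks build up diffusion models on toy data. As a rough orientation only (this is not code from the repository; the toy dataset, the linear beta schedule, and names such as `sample_toy_data` and `q_sample` are illustrative assumptions), the closed-form forward (noising) step of a DDPM-style model on a toy 2D dataset looks roughly like this:

```python
import math
import torch

# Illustrative toy 2D spiral dataset (the repo's actual toy datasets may differ).
def sample_toy_data(n=512):
    theta = torch.rand(n) * 3 * math.pi
    x = torch.stack([theta * torch.cos(theta), theta * torch.sin(theta)], dim=1)
    return x / x.abs().max()  # scale roughly into [-1, 1]

# Linear beta schedule and cumulative alpha-bar, as in DDPM.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    noise = torch.randn_like(x0)
    a = alphas_bar[t].sqrt().unsqueeze(-1)        # sqrt(alpha_bar_t), shape (n, 1)
    s = (1.0 - alphas_bar[t]).sqrt().unsqueeze(-1)  # sqrt(1 - alpha_bar_t)
    return a * x0 + s * noise, noise

x0 = sample_toy_data()
t = torch.randint(0, T, (x0.shape[0],))
xt, eps = q_sample(x0, t)  # noised samples and the noise a denoiser would learn to predict
```

A small network trained to predict `eps` from `(xt, t)` would complete the usual denoising objective; the notebooks cover this on toy datasets.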
Alternatives and similar repositories for simple_diffusion
Users interested in simple_diffusion are comparing it to the libraries listed below.
- Official PyTorch implementation of LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification ☆46 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Parameterizing Mixing Links in Sparse Factors Works Better than Dot-Product Self-Attention (CVPR 2022) ☆20 · Updated 2 years ago
- Implementation of the Remixer Block from the Remixer paper, in Pytorch ☆36 · Updated 3 years ago
- Authors' implementation of LieTransformer: Equivariant Self-Attention for Lie Groups ☆36 · Updated 4 years ago
- Directed masked autoencoders ☆14 · Updated 2 years ago
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated 2 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆58 · Updated last year
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- Graph neural network message passing reframed as a Transformer with local attention ☆69 · Updated 2 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Updated 4 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 4 years ago
- A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752) ☆22 · Updated last year
- ☆44 · Updated last year
- Implementation of LogAvgExp for Pytorch ☆36 · Updated 3 months ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆91 · Updated 3 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆72 · Updated 2 years ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆46 · Updated 2 years ago
- Implementation for ACProp (Momentum centering and asynchronous update for adaptive gradient methods, NeurIPS 2021) ☆15 · Updated 3 years ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make practical in Fast and Simplex, Ro… ☆34 · Updated this week
- ☆89 · Updated last year
- Code for ICLR 2023 Paper, "Stable Target Field for Reduced Variance Score Estimation in Diffusion Models" ☆76 · Updated 2 years ago
- A PyTorch Dataset that caches samples in shared memory, accessible globally to all processes ☆20 · Updated 3 years ago
- ☆8 · Updated last year
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Updated 3 years ago
- Little article showing how to load pytorch's models with linear memory consumption ☆34 · Updated 2 years ago
- An open source implementation of CLIP. ☆32 · Updated 2 years ago
- ☆25 · Updated 3 years ago