SHI-Labs / NATTEN
Neighborhood Attention Extension. Bringing attention to a neighborhood near you!
☆396 · Updated 3 weeks ago
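NATTEN provides fused CUDA and CPU kernels for neighborhood attention, where each query attends only to a fixed-size window of nearby tokens rather than the full sequence. As a rough illustration of that idea (not NATTEN's actual API), here is a minimal single-head 1D sketch in NumPy; the function name and the edge-clamping detail are assumptions made for this example.

```python
import numpy as np

def neighborhood_attention_1d(x, k=3):
    """Single-head 1D neighborhood attention sketch (illustrative only).

    Each query attends to a window of k neighbors centered on it; near the
    borders the window is shifted inward so every query still sees exactly
    k keys. x is a (seq_len, dim) array used as query, key, and value.
    """
    n, d = x.shape
    scale = d ** -0.5
    out = np.empty_like(x, dtype=float)
    for i in range(n):
        # Center the window on token i, then clamp it inside [0, n).
        start = min(max(i - k // 2, 0), n - k)
        window = x[start:start + k]                 # (k, d) neighborhood
        scores = (x[i] @ window.T) * scale          # (k,) attention logits
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                    # softmax over the window
        out[i] = weights @ window                   # weighted sum of values
    return out
```

When `k` equals the sequence length, the window always covers the whole sequence and this reduces to ordinary full self-attention; smaller `k` gives the linear-in-window cost that the fused kernels exploit.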
Alternatives and similar repositories for NATTEN:
Users interested in NATTEN are comparing it to the repositories listed below.
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆552 · Updated last month
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆270 · Updated last month
- Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023) ☆544 · Updated 9 months ago
- [ECCV 2024] Official Repository for DiffiT: Diffusion Vision Transformers for Image Generation ☆481 · Updated 2 months ago
- Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022 ☆1,079 · Updated 8 months ago
- Code for Fast Training of Diffusion Models with Masked Transformers ☆386 · Updated 8 months ago
- This repo contains the code for 1D tokenizer and generator ☆667 · Updated this week
- Official Pytorch Implementation of Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think (ICL… ☆815 · Updated this week
- Official Jax Implementation of MaskGIT ☆479 · Updated 2 years ago
- EDM2 and Autoguidance -- Official PyTorch implementation ☆618 · Updated last month
- A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis ☆548 · Updated last year
- Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ☆750 · Updated 10 months ago
- PyTorch implementation of MAR+DiffLoss https://arxiv.org/abs/2406.11838 ☆1,255 · Updated 4 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆619 · Updated 2 months ago
- This is the official code release for our work, Denoising Vision Transformers. ☆352 · Updated 2 months ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆817 · Updated 7 months ago
- A PyTorch implementation of the paper "ZigMa: A DiT-Style Mamba-based Diffusion Model" (ECCV 2024) ☆290 · Updated 2 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,000 · Updated 7 months ago
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ☆306 · Updated 9 months ago
- Implementation of a single layer of the MMDiT, proposed in Stable Diffusion 3, in Pytorch ☆291 · Updated 2 weeks ago
- An unofficial implementation of both ViT-VQGAN and RQ-VAE in Pytorch ☆295 · Updated last year
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆370 · Updated last year
- unofficial MaskGIT reproduction in PyTorch ☆182 · Updated 11 months ago
- Official PyTorch implementation of Video Probabilistic Diffusion Models in Projected Latent Space (CVPR 2023). ☆314 · Updated 8 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆944 · Updated 10 months ago
- ☆260 · Updated 3 months ago
- Implementation of a U-net complete with efficient attention as well as the latest research findings ☆271 · Updated 8 months ago
- Official Open Source code for "Masked Autoencoders As Spatiotemporal Learners" ☆327 · Updated 2 months ago
- [NeurIPS 2024] The official code of "U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers" ☆178 · Updated 4 months ago
- Implementation of MagViT2 Tokenizer in Pytorch ☆587 · Updated 2 weeks ago