SHI-Labs / NATTEN
Neighborhood Attention Extension. Bringing attention to a neighborhood near you!
☆489 · Updated last month
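For orientation, the sketch below is a naive PyTorch rendering of the neighborhood-attention idea that NATTEN accelerates: each query position attends only to the k × k window centered on it. The function name and tensor shapes are illustrative assumptions, not NATTEN's API; border queries here simply attend to zero padding, whereas NATTEN shifts the window to stay inside the feature map and ships fused CUDA/C++ kernels for the whole operation.

```python
# Naive reference sketch of neighborhood attention in plain PyTorch.
# Assumption-laden illustration only -- this is NOT NATTEN's API: NATTEN
# implements the same idea with fused kernels and shifted windows at the
# borders, while this toy version lets border queries see zero padding.
import torch
import torch.nn.functional as F


def naive_neighborhood_attention(q, k, v, kernel_size=7):
    """q, k, v: (B, heads, H, W, head_dim); kernel_size must be odd."""
    B, heads, H, W, d = q.shape
    pad = kernel_size // 2
    # Gather each pixel's k x k neighborhood of keys/values via unfold.
    k = k.permute(0, 1, 4, 2, 3).reshape(B * heads, d, H, W)
    v = v.permute(0, 1, 4, 2, 3).reshape(B * heads, d, H, W)
    k_win = F.unfold(k, kernel_size, padding=pad)  # (B*heads, d*k*k, H*W)
    v_win = F.unfold(v, kernel_size, padding=pad)
    k_win = k_win.reshape(B, heads, d, kernel_size * kernel_size, H * W)
    v_win = v_win.reshape(B, heads, d, kernel_size * kernel_size, H * W)
    q = q.reshape(B, heads, H * W, 1, d)
    # Each query scores only the keys inside its own neighborhood.
    attn = torch.einsum("bhpqd,bhdkp->bhpqk", q, k_win) * d ** -0.5
    attn = attn.softmax(dim=-1)
    out = torch.einsum("bhpqk,bhdkp->bhpqd", attn, v_win)
    return out.reshape(B, heads, H, W, d)


if __name__ == "__main__":
    q = torch.randn(2, 4, 16, 16, 32)
    out = naive_neighborhood_attention(q, torch.randn_like(q), torch.randn_like(q))
    print(out.shape)  # torch.Size([2, 4, 16, 16, 32])
```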
Alternatives and similar repositories for NATTEN:
Users interested in NATTEN are comparing it to the libraries listed below:
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer" ☆320 · Updated 4 months ago
- Neighborhood Attention Transformer, arxiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arxiv 2022 ☆1,115 · Updated 11 months ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch ☆671 · Updated 5 months ago
- A PyTorch implementation of the paper "ZigMa: A DiT-Style Mamba-based Diffusion Model" (ECCV 2024) ☆309 · Updated last month
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆849 · Updated last month
- A simple way to keep track of an Exponential Moving Average (EMA) version of your Pytorch model ☆577 · Updated 5 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,049 · Updated 10 months ago
- Code for Fast Training of Diffusion Models with Masked Transformers ☆401 · Updated 11 months ago
- Official PyTorch Implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ☆829 · Updated last year
- [ECCV 2024] Official Repository for DiffiT: Diffusion Vision Transformers for Image Generation ☆491 · Updated 6 months ago
- [ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,016 · Updated last month
- Implementation of a memory efficient multi-head attention as proposed in the paper, "Self-attention Does Not Need O(n²) Memory" ☆377 · Updated last year
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆443 · Updated 5 months ago
- Masked Diffusion Transformer is the SOTA for image synthesis. (ICCV 2023) ☆562 · Updated last year
- This repo contains the code for 1D tokenizer and generator ☆857 · Updated last month
- [ICML 2024 Spotlight] FiT: Flexible Vision Transformer for Diffusion Model ☆410 · Updated 6 months ago
- [ICLR 2025 Spotlight] Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆456 · Updated 2 months ago
- Official implementation of Inf-DiT: Upsampling Any-Resolution Image with Memory-Efficient Diffusion Transformer ☆419 · Updated 10 months ago
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆889 · Updated 10 months ago
- [CVPR 2025 Oral] Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models ☆746 · Updated last month
- Implementation of Deformable Attention in Pytorch from the paper "Vision Transformer with Deformable Attention" ☆338 · Updated 3 months ago
- Helpful tools and examples for working with flex-attention ☆757 · Updated this week
- Scaling Diffusion Transformers with Mixture of Experts ☆317 · Updated 8 months ago
- A PyTorch implementation of MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis ☆563 · Updated 2 years ago
- Implementation of MagViT2 Tokenizer in Pytorch ☆601 · Updated 3 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆979 · Updated last year
- Implementation of rectified flow and some of its followup research / improvements in Pytorch ☆285 · Updated 2 weeks ago
- PyTorch implementation for "Parallel Sampling of Diffusion Models", NeurIPS 2023 Spotlight ☆136 · Updated last year
- An efficient pytorch implementation of selective scan in one file, works with both cpu and gpu, with corresponding mathematical derivatio… ☆86 · Updated last year
- Implementation of Autoregressive Diffusion in Pytorch ☆376 · Updated 6 months ago