SHI-Labs / NATTEN
Fast Multi-dimensional Sparse Attention
☆586 · Updated 3 weeks ago
Alternatives and similar repositories for NATTEN
Users interested in NATTEN are comparing it to the libraries listed below:
- [ECCV 2024] Official PyTorch implementation of RoPE-ViT, "Rotary Position Embedding for Vision Transformer" ☆373 · Updated 7 months ago
- [ECCV 2024] Official repository for DiffiT: Diffusion Vision Transformers for Image Generation ☆496 · Updated 9 months ago
- [ICLR 2025 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think ☆1,255 · Updated 4 months ago
- Code for a 1D tokenizer and generator ☆982 · Updated 4 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers ☆1,083 · Updated last year
- [CVPR 2024] DeepCache: Accelerating Diffusion Models for Free ☆917 · Updated last year
- Official PyTorch implementation of "SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers" ☆938 · Updated last year
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model ☆601 · Updated 8 months ago
- [ICCV 2023] Masked Diffusion Transformer, a state-of-the-art model for image synthesis ☆575 · Updated last year
- [CVPR 2025 Oral] Reconstruction vs. Generation: Taming the Optimization Dilemma in Latent Diffusion Models ☆1,084 · Updated 2 months ago
- Neighborhood Attention Transformer (arXiv 2022 / CVPR 2023) and Dilated Neighborhood Attention Transformer (arXiv 2022) ☆1,138 · Updated last year
- A library for calculating the FLOPs of the forward() pass, based on torch.fx ☆124 · Updated 4 months ago
- [NeurIPS 2024] Official implementation of "Faster Diffusion: Rethinking the Role of the UNet Encoder in Diffusion Models" ☆339 · Updated 4 months ago
- [ECCV 2024] A PyTorch implementation of "ZigMa: A DiT-Style Mamba-based Diffusion Model" ☆329 · Updated 4 months ago
- Implementation of rotary embeddings, from the RoFormer paper, in PyTorch ☆733 · Updated 2 weeks ago
- [ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention ☆868 · Updated 3 weeks ago
- Code for "Fast Training of Diffusion Models with Masked Transformers" ☆404 · Updated last year
- Implementation of a single layer of MMDiT, proposed in Stable Diffusion 3, in PyTorch ☆405 · Updated 7 months ago
- Helpful tools and examples for working with flex-attention ☆924 · Updated 3 weeks ago
- When it comes to optimizers, it's always better to be safe than sorry ☆356 · Updated this week
- [CVPR 2024 Highlight] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models ☆700 · Updated 8 months ago
- DDT: Decoupled Diffusion Transformer ☆269 · Updated last month
- Causal depthwise conv1d in CUDA, with a PyTorch interface
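Two of the repositories above (RoPE-ViT and the RoFormer rotary-embedding implementation) center on rotary position embeddings. As a minimal pure-Python sketch of the core idea, independent of either library's API and with illustrative names: each consecutive pair of features is rotated by an angle proportional to the token's position, at a per-pair frequency.

```python
import math

def apply_rope(x, pos, base=10000.0):
    """Rotate consecutive feature pairs of x by position-dependent angles.

    x: one token's query/key vector as a list of floats (even length)
    pos: integer token position
    base: frequency base, as in the RoFormer paper
    """
    d = len(x)
    out = []
    for i in range(0, d, 2):
        # Each pair (x[i], x[i+1]) rotates at its own frequency base**(-i/d).
        angle = pos * base ** (-i / d)
        c, s = math.cos(angle), math.sin(angle)
        out.append(x[i] * c - x[i + 1] * s)
        out.append(x[i] * s + x[i + 1] * c)
    return out
```

Because each step is a plane rotation (an orthogonal transform), vector norms are preserved, and the dot product between a rotated query and a rotated key depends only on their relative position, which is what makes RoPE attractive for attention.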