LeapLabTHU / Attention-Mediators
[ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators
☆45 · Updated last year
Alternatives and similar repositories for Attention-Mediators
Users interested in Attention-Mediators are comparing it to the repositories listed below.
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated last year
- [NeurIPS 2024] ENAT: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis ☆24 · Updated 10 months ago
- Official repository of Uni-AdaFocus (TPAMI 2024) ☆49 · Updated 9 months ago
- Official implementation of Dynamic Perceiver ☆43 · Updated last year
- [IEEE TPAMI] Latency-aware Unified Dynamic Networks for Efficient Image Recognition ☆52 · Updated 6 months ago
- CODA: Repurposing Continuous VAEs for Discrete Tokenization ☆28 · Updated 3 months ago
- A PyTorch implementation of the paper "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆46 · Updated last year
- [NeurIPS 2022] Latency-aware Spatial-wise Dynamic Networks ☆24 · Updated 2 years ago
- ☆17 · Updated 7 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆55 · Updated 9 months ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆85 · Updated 4 months ago
- [ICML 2024] SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning ☆30 · Updated last year
- ☆27 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆48 · Updated 9 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [IEEE TIP] Fine-grained Recognition with Learnable Semantic Data Augmentation ☆30 · Updated last year
- ☆27 · Updated 3 years ago
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆90 · Updated 2 years ago
- Jittor implementation of Vision Transformer with Deformable Attention ☆31 · Updated 3 years ago
- ☆13 · Updated 9 months ago
- [ICML 2025] Code for "Highly Compressed Tokenizer Can Generate Without Training" ☆177 · Updated 3 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆87 · Updated 2 months ago
- [NeurIPS 2025 Oral] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think ☆140 · Updated this week
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆106 · Updated last week
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆101 · Updated 3 months ago
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference ☆170 · Updated last year
- Repository of Vision Transformer with Deformable Attention (CVPR 2022) and DAT++: Spatially Dynamic Vision Transformer with Deformable Atte… ☆19 · Updated last year
- Official implementation of "Knowledge Diffusion for Distillation" (NeurIPS 2023) ☆90 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year
- Official repository of "Subobject-level Image Tokenization" (ICML 2025) ☆87 · Updated 3 months ago