LeapLabTHU / Attention-Mediators
[ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators
☆45 · Updated last year
Alternatives and similar repositories for Attention-Mediators
Users interested in Attention-Mediators are comparing it to the repositories listed below.
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated last year
- [NeurIPS 2024] ENAT: Rethinking Spatial-temporal Interactions in Token-based Image Synthesis ☆24 · Updated 11 months ago
- A PyTorch implementation of the paper "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆46 · Updated last year
- CODA: Repurposing Continuous VAEs for Discrete Tokenization ☆33 · Updated 4 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆56 · Updated 10 months ago
- [IEEE TPAMI] Latency-aware Unified Dynamic Networks for Efficient Image Recognition ☆52 · Updated 7 months ago
- [Nature Machine Intelligence 2025] Emulating Human-like Adaptive Vision for Efficient and Flexible Machine Visual Perception ☆87 · Updated last week
- [NeurIPS 2022] Latency-aware Spatial-wise Dynamic Networks ☆24 · Updated 2 years ago
- Official implementation of Dynamic Perceiver ☆43 · Updated 2 years ago
- Official repository of Uni-AdaFocus (TPAMI 2024) ☆54 · Updated 11 months ago
- ☆17 · Updated 8 months ago
- [ICML 2024] SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning ☆31 · Updated last year
- [IEEE TIP] Fine-grained Recognition with Learnable Semantic Data Augmentation ☆30 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆86 · Updated 5 months ago
- ☆28 · Updated 8 months ago
- ☆27 · Updated 3 years ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆92 · Updated 4 months ago
- Jittor implementation of Vision Transformer with Deformable Attention ☆31 · Updated 3 years ago
- Code release for Deep Incubation (https://arxiv.org/abs/2212.04129) ☆90 · Updated 2 years ago
- [NeurIPS 2025 Oral] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think ☆184 · Updated last month
- Open-source code for the NeurIPS 2025 paper "Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learn…" ☆25 · Updated this week
- [NeurIPS 2024] Official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆50 · Updated 10 months ago
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆107 · Updated last month
- Code for the ICML 2025 paper "Highly Compressed Tokenizer Can Generate Without Training" ☆185 · Updated 5 months ago
- [NeurIPS 2025 DB Oral] Official repository of the paper "Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing" ☆114 · Updated 3 weeks ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆108 · Updated 4 months ago
- [ICLR 2025] Reconstructive Visual Instruction Tuning ☆125 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] The official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year