qhfan / RALA
[CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention
☆29 · Updated 6 months ago
Alternatives and similar repositories for RALA
Users interested in RALA are comparing it to the libraries listed below.
- Official repository of InLine attention (NeurIPS 2024) ☆55 · Updated 9 months ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆85 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] Official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [NeurIPS 2024] Official code release for "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆41 · Updated 8 months ago
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆30 · Updated 6 months ago
- ☆73 · Updated 7 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆47 · Updated 3 months ago
- ☆33 · Updated 6 months ago
- Official implementation of "[MASK] is All You Need" ☆125 · Updated 2 months ago
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- Official PyTorch implementation of "Vision Transformers Don't Need Trained Registers" (NeurIPS '25 Spotlight) ☆110 · Updated 3 weeks ago
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆83 · Updated 6 months ago
- [NeurIPS 2025 Oral] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think ☆140 · Updated this week
- ☆29 · Updated last year
- [ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion ☆34 · Updated last year
- [CVPR 2025] DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention ☆173 · Updated 7 months ago
- WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models ☆21 · Updated 2 months ago
- Code for our CVPR 2024 paper DiffusionMTL: Learning Multi-Task Denoising Diffusion Model from Partially Annotated Data ☆58 · Updated last year
- [CVPR 2025 Highlight] TinyFusion: Diffusion Transformers Learned Shallow ☆142 · Updated 6 months ago
- ☆30 · Updated last year
- Autoregressive Image Generation with Randomized Parallel Decoding ☆77 · Updated 6 months ago
- [CVPR 2025] Official code repository for SeTa: "Scale Efficient Training for Large Datasets" ☆21 · Updated 6 months ago
- Official implementation of DiffCLIP: Differential Attention Meets CLIP ☆43 · Updated 6 months ago
- [ICCV 2025] HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets ☆50 · Updated 2 months ago
- (SRA) No Other Representation Component Is Needed: Diffusion Transformers Can Provide Representation Guidance by Themselves ☆90 · Updated 2 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆106 · Updated last week
- [Preprint] GMem: A Modular Approach for Ultra-Efficient Generative Models ☆39 · Updated 6 months ago
- [NeurIPS 2024] Official implementation of Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation ☆19 · Updated 11 months ago
- [ICCV 2025] Generate one 2K image on a single 3090 GPU! ☆69 · Updated last month