qhfan / RALA
[CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention
☆39 · Updated 10 months ago
Alternatives and similar repositories for RALA
Users interested in RALA are comparing it to the repositories listed below.
- Official repository of InLine attention (NeurIPS 2024) ☆58 · Updated last year
- [ICCV 2025 Highlight] Rectifying Magnitude Neglect in Linear Attention ☆56 · Updated 6 months ago
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆31 · Updated 10 months ago
- [ICCV 2025] Generate one 2K image on a single 24GB 3090 GPU! ☆83 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] The official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆105 · Updated last year
- Official PyTorch implementation of The Linear Attention Resurrection in Vision Transformer ☆15 · Updated last year
- Official repository of Circulant Attention (AAAI 2026) ☆17 · Updated 2 weeks ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆89 · Updated 8 months ago
- [NeurIPS 2024] Official code release for the paper "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆42 · Updated last year
- ☆24 · Updated 8 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆108 · Updated 4 months ago
- The official implementation of "[MASK] is All You Need" ☆127 · Updated 6 months ago
- Explore how to get VQ-VAE models efficiently! ☆67 · Updated 6 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆114 · Updated 6 months ago
- [CVPR 2025] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization ☆47 · Updated 6 months ago
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆35 · Updated last year
- [ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion ☆34 · Updated last year
- [ICLR 2026] Autoregressive Image Generation with Randomized Parallel Decoding ☆85 · Updated this week
- [NeurIPS 2025 Oral] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think ☆240 · Updated 3 months ago
- WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models ☆22 · Updated 6 months ago
- Code for the paper "Vamba: Understanding Hour-Long Videos with Hybrid Mamba-Transformers" [ICCV 2025] ☆99 · Updated 6 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆51 · Updated 7 months ago
- ☆37 · Updated 3 months ago
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆101 · Updated last year
- [ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators ☆47 · Updated last year
- A PyTorch implementation of the paper "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆47 · Updated last year
- This repository provides an improved LLamaGen model, fine-tuned on 500,000 high-quality images, each accompanied by an over-300-token prompt… ☆30 · Updated last year
- [CVPR 2025] DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention ☆177 · Updated 11 months ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [NeurIPS 2025 Spotlight] Official PyTorch implementation of "Vision Transformers Don't Need Trained Registers" ☆168 · Updated 4 months ago