LeapLabTHU / InLineLinks
Official repository of InLine attention (NeurIPS 2024)
☆56 · Updated 10 months ago
Alternatives and similar repositories for InLine
Users interested in InLine are comparing it to the libraries listed below.
- [ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators ☆45 · Updated last year
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆35 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] The official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated last year
- [ICLR 2025] The official implementation of Autoregressive Pretraining with Mamba in Vision ☆86 · Updated 5 months ago
- [IEEE TPAMI] Latency-aware Unified Dynamic Networks for Efficient Image Recognition ☆52 · Updated 7 months ago
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆84 · Updated 7 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆92 · Updated 4 months ago
- [ICCV 2025] HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets ☆55 · Updated 3 months ago
- A PyTorch implementation of the paper "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆46 · Updated last year
- ☆30 · Updated last year
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆107 · Updated last month
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆31 · Updated 8 months ago
- [ICCV 2025] Generate one 2K image on a single 3090 GPU! ☆78 · Updated 2 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆78 · Updated 3 weeks ago
- [NeurIPS 2024] Official code release for the paper "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆41 · Updated 9 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [AAAI 2025] Linear-complexity Visual Sequence Learning with Gated Linear Attention ☆113 · Updated last year
- [ICCV 2025 Highlight] Rectifying Magnitude Neglect in Linear Attention ☆48 · Updated 3 months ago
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆100 · Updated last year
- ☆18 · Updated last year
- [NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆132 · Updated 11 months ago
- CODA: Repurposing Continuous VAEs for Discrete Tokenization ☆33 · Updated 4 months ago
- The official implementation of the paper "Scaling White-Box Transformers for Vision" ☆47 · Updated last year
- ☆28 · Updated 8 months ago
- 1.5–3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆225 · Updated last year
- Official repository of Uni-AdaFocus (TPAMI 2024) ☆54 · Updated 11 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆108 · Updated 4 months ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year