LeapLabTHU / InLineLinks
Official repository of InLine attention (NeurIPS 2024)
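For orientation, the sketch below shows generic kernel-based linear attention, the family of methods this repository belongs to: by replacing the softmax with a positive feature map φ, the product (φ(Q)φ(K)ᵀ)V can be reassociated as φ(Q)(φ(K)ᵀV), reducing cost from O(N²) to O(N) in sequence length. This is a minimal illustration only; InLine's actual formulation differs, and the feature map here (a shifted ReLU) is an assumption for the demo, not the paper's choice.

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Generic kernel linear attention (illustrative, not InLine's method).

    phi is a positive feature map; reassociating the matrix product as
    phi(Q) @ (phi(K).T @ V) avoids forming the N x N attention matrix.
    """
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # (d, d_v) summary of keys and values
    Z = Qp @ Kp.sum(axis=0)          # per-query normalizer, shape (N,)
    return (Qp @ KV) / Z[:, None]    # O(N * d * d_v) overall

# Toy usage: 8 tokens, head dimension 4.
N, d = 8, 4
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, N, d))
out = linear_attention(Q, K, V)      # shape (8, 4)
```

The reassociation is the entire trick: softmax attention must materialize the N×N score matrix before normalizing, while a kernelized form can compute the (d, d_v) key-value summary once and reuse it for every query.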
☆55 · Updated 9 months ago
Alternatives and similar repositories for InLine
Users interested in InLine are comparing it to the libraries listed below.
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆29 · Updated 6 months ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆85 · Updated 4 months ago
- [ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators ☆45 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated last year
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆83 · Updated 6 months ago
- [ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion ☆34 · Updated last year
- Official repository of Uni-AdaFocus (TPAMI 2024) ☆49 · Updated 9 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆71 · Updated 4 months ago
- [IEEE TPAMI] Latency-aware Unified Dynamic Networks for Efficient Image Recognition ☆52 · Updated 6 months ago
- [NeurIPS 2024] Official code release for "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆41 · Updated 8 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆87 · Updated 2 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆47 · Updated 3 months ago
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆30 · Updated 6 months ago
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆106 · Updated last week
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆223 · Updated last year
- [ICCV 2025 Highlight] Rectifying Magnitude Neglect in Linear Attention ☆36 · Updated 2 months ago
- A PyTorch implementation of "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆46 · Updated last year
- [NeurIPS 2024] Official implementation of Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation ☆19 · Updated 11 months ago
- [AAAI 2025] Linear-complexity Visual Sequence Learning with Gated Linear Attention ☆114 · Updated last year
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆90 · Updated 6 months ago
- [CVPR 2024] Official PyTorch implementation of "A General and Efficient Training for Transformer via Token Expansion" ☆45 · Updated last year
- Project page for "Multi-Task Dense Prediction via Mixture of Low-Rank Experts" ☆82 · Updated 4 months ago