LeapLabTHU / InLine
Official repository of InLine attention (NeurIPS 2024)
☆54 · Updated 8 months ago
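For context, linear attention (the family this repository and several entries below belong to) replaces softmax attention's O(N²) cost with a kernel feature map, so the key-value summary can be computed once and reused for every query. The sketch below is the generic kernelized formulation with the elu(x)+1 feature map; it is an illustration of the general technique, not the specific InLine method.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Generic kernel linear attention: O(N * d^2) instead of O(N^2 * d).

    Uses elu(x) + 1 as the positive feature map. This is a simplified
    illustration of linear attention in general, not InLine itself.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qf, Kf = phi(Q), phi(K)            # (N, d) feature-mapped queries/keys
    KV = Kf.T @ V                      # (d, d) key-value summary, built once
    Z = Qf @ Kf.sum(axis=0) + eps      # (N,) per-query normalizer
    return (Qf @ KV) / Z[:, None]      # (N, d) attention output

rng = np.random.default_rng(0)
N, d = 8, 4
out = linear_attention(rng.normal(size=(N, d)),
                       rng.normal(size=(N, d)),
                       rng.normal(size=(N, d)))
print(out.shape)  # (8, 4)
```

Because the (d, d) summary `KV` is independent of the query, cost grows linearly in sequence length N, which is the efficiency argument shared by the linear-attention papers listed below.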
Alternatives and similar repositories for InLine
Users interested in InLine are comparing it to the libraries listed below.
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆27 · Updated 5 months ago
- [NeurIPS 2024 Spotlight] The official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆100 · Updated last year
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated 11 months ago
- [ICLR 2025] The official implementation of Autoregressive Pretraining with Mamba in Vision ☆84 · Updated 2 months ago
- [ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators ☆45 · Updated 11 months ago
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆79 · Updated 4 months ago
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆106 · Updated 4 months ago
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [IEEE TPAMI] Latency-aware Unified Dynamic Networks for Efficient Image Recognition ☆51 · Updated 5 months ago
- [NeurIPS 2024] Official code release for "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆41 · Updated 7 months ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆87 · Updated 5 months ago
- A PyTorch implementation of "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆45 · Updated last year
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆95 · Updated last month
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆30 · Updated 5 months ago
- [AAAI 2025] Linear-complexity Visual Sequence Learning with Gated Linear Attention ☆111 · Updated last year
- Official repository of Uni-AdaFocus (TPAMI 2024) ☆48 · Updated 8 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆69 · Updated 3 months ago
- [NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆130 · Updated 9 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆80 · Updated last month
- [CVPR 2024] The official PyTorch implementation of "A General and Efficient Training for Transformer via Token Expansion" ☆44 · Updated last year
- [NeurIPS 2024] Official implementation of Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation ☆19 · Updated 9 months ago
- [ICCV 2025] HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets ☆48 · Updated 3 weeks ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆46 · Updated 2 months ago