LeapLabTHU / InLine
Official repository of InLine attention (NeurIPS 2024)
☆56 · Updated 10 months ago
Alternatives and similar repositories for InLine
Users that are interested in InLine are comparing it to the libraries listed below
- [CVPR 2025] Breaking the Low-Rank Dilemma of Linear Attention ☆32 · Updated 7 months ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision ☆86 · Updated 4 months ago
- [ECCV 2024] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators ☆45 · Updated last year
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆83 · Updated 6 months ago
- FFNet: MetaMixer-based Efficient Convolutional Mixer Design ☆31 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] Official implementation of MambaTree: Tree Topology is All You Need in State Space Model ☆102 · Updated last year
- [ECCV 2024] AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation ☆34 · Updated last year
- ☆30 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆87 · Updated 3 months ago
- [NeurIPS 2024] Official code release for "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆41 · Updated 9 months ago
- ☆27 · Updated 8 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆76 · Updated this week
- [ICCV 2025 Highlight] Rectifying Magnitude Neglect in Linear Attention ☆44 · Updated 3 months ago
- [NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆131 · Updated 11 months ago
- [ICCV 2025] HQ-CLIP: Leveraging Large Vision-Language Models to Create High-Quality Image-Text Datasets ☆53 · Updated 2 months ago
- [ICCV 2025] Generate a 2K image on a single 3090 GPU! ☆75 · Updated last month
- [IEEE TPAMI] Latency-aware Unified Dynamic Networks for Efficient Image Recognition ☆52 · Updated 7 months ago
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated last year
- [CVPR 2025] CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient ☆107 · Updated last month
- [NeurIPS 2024] Official implementation of Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation ☆19 · Updated 11 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆48 · Updated 4 months ago
- [CVPR 2024] Official PyTorch implementation of "A General and Efficient Training for Transformer via Token Expansion" ☆46 · Updated last year
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆94 · Updated 7 months ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆225 · Updated last year
- [AAAI 2025] Linear-complexity Visual Sequence Learning with Gated Linear Attention ☆113 · Updated last year
- ☆18 · Updated last year
- [NeurIPS 2025 Oral] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think ☆171 · Updated 3 weeks ago
- A PyTorch implementation of the paper "Revisiting Non-Autoregressive Transformers for Efficient Image Synthesis" ☆46 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [2025] Efficient Vision Language Models: A Survey ☆32 · Updated 3 months ago