TencentARC / ViT-Lens
[CVPR 2024] ViT-Lens: Towards Omni-modal Representations
☆173 · Updated last month
Alternatives and similar repositories for ViT-Lens:
Users interested in ViT-Lens are comparing it to the repositories listed below.
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆122 · Updated 7 months ago
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [ICML 2024] Official implementation of the paper "Rejuvenating image-GPT as Strong Visual Representation Learners" ☆97 · Updated 10 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆97 · Updated 6 months ago
- ☆96 · Updated 10 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆193 · Updated 2 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆110 · Updated 3 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆199 · Updated 9 months ago
- Official repository of the paper "Subobject-level Image Tokenization" ☆65 · Updated 11 months ago
- Code release for "SegLLM: Multi-round Reasoning Segmentation" ☆68 · Updated last month
- Official implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders ☆104 · Updated 3 months ago
- Official repo for StableLLAVA ☆94 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆137 · Updated 3 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆53 · Updated last month
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆138 · Updated 3 weeks ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆81 · Updated last year
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆154 · Updated 3 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆115 · Updated 8 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆86 · Updated 2 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆130 · Updated 4 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 8 months ago
- ☆72 · Updated 10 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆313 · Updated 3 weeks ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆90 · Updated 8 months ago
- [ICLR 2025] VILA-U: A Unified Foundation Model Integrating Visual Understanding and Generation ☆249 · Updated 2 months ago
- ☆68 · Updated last month
- Official implementation of the Law of Vision Representation in MLLMs ☆151 · Updated 4 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆125 · Updated last year
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloading… ☆217 · Updated 5 months ago