TencentARC / ViT-Lens
[CVPR 2024] ViT-Lens: Towards Omni-modal Representations
☆178 · Updated 6 months ago
Alternatives and similar repositories for ViT-Lens
Users interested in ViT-Lens are comparing it to the repositories listed below:
- ☆99 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆125 · Updated 11 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆60 · Updated 5 months ago
- Official repo for StableLLAVA ☆95 · Updated last year
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- Official repository of the paper "Subobject-level Image Tokenization" (ICML 2025) ☆80 · Updated last month
- [ICML 2024] This repository includes the official implementation of the paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" ☆98 · Updated last year
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆201 · Updated 6 months ago
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆227 · Updated 10 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆111 · Updated 11 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆180 · Updated last month
- ☆72 · Updated last year
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆241 · Updated last year
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆144 · Updated 11 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆119 · Updated 2 months ago
- Code for the "Scaling Language-Free Visual Representation Learning" paper (Web-SSL) ☆168 · Updated 3 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last year
- ☆138 · Updated 10 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 10 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆342 · Updated last week
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆53 · Updated last week
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆102 · Updated 7 months ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆159 · Updated 7 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆150 · Updated 7 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆144 · Updated 8 months ago
- ☆69 · Updated last year
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆48 · Updated last month
- [ACL 2024 (Oral)] Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆70 · Updated 10 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆327 · Updated last year