zh460045050 / V2L-Tokenizer
☆135 · Updated last year
Alternatives and similar repositories for V2L-Tokenizer
Users interested in V2L-Tokenizer are comparing it to the repositories listed below.
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions · ☆134 · Updated 2 months ago
- [NeurIPS 2024] Visual Perception by Large Language Model’s Weights · ☆45 · Updated 4 months ago
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models · ☆100 · Updated 2 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization · ☆64 · Updated last week
- [ICCV 2023] Generative Prompt Model for Weakly Supervised Object Localization · ☆57 · Updated last year
- ☆91 · Updated 2 years ago
- ☆118 · Updated last year
- [ICLR 2024 Spotlight] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" · ☆241 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception · ☆150 · Updated 8 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" · ☆82 · Updated last year
- [ICCV 2023 Oral] Official implementation of "Denoising Diffusion Autoencoders are Unified Self-supervised Learners" · ☆174 · Updated last year
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training · ☆211 · Updated 4 months ago
- HiMTok: Learning Hierarchical Mask Tokens for Image Segmentation with Large Multimodal Model · ☆58 · Updated 2 weeks ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training · ☆69 · Updated 2 weeks ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference · ☆86 · Updated 4 months ago
- [CVPR 2024] Code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" · ☆67 · Updated 9 months ago
- Open-source implementation of "Vision Transformers Need Registers" · ☆184 · Updated 2 weeks ago
- [ICLR 2025] Official implementation of Autoregressive Pretraining with Mamba in Vision · ☆83 · Updated 2 months ago
- Official code for the paper "Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language" · ☆27 · Updated 5 months ago
- ☆62 · Updated 3 weeks ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better · ☆286 · Updated 6 months ago
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption · ☆97 · Updated last year
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft · ☆44 · Updated last year
- [ICLR 2024 Spotlight] Code release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction · ☆191 · Updated last year
- CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts · ☆53 · Updated 11 months ago
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference · ☆161 · Updated 9 months ago
- A curated list of publications on image and video segmentation leveraging Multimodal Large Language Models (MLLMs), highlighting state-of… · ☆109 · Updated this week
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning · ☆69 · Updated 5 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context · ☆165 · Updated 10 months ago
- ☆81 · Updated 9 months ago