yichengchen24 / ACP
Official code for the paper "Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language"
☆29 · Updated 7 months ago
Alternatives and similar repositories for ACP
Users interested in ACP are comparing it to the libraries listed below.
- ☆21 · Updated 9 months ago
- Official Implementation of ICLR'24: Kosmos-G: Generating Images in Context with Multimodal Large Language Models ☆73 · Updated last year
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated 11 months ago
- Official repository for CoMM Dataset ☆48 · Updated 9 months ago
- ☆119 · Updated last year
- PyTorch implementation of "UNIT: Unifying Image and Text Recognition in One Vision Encoder", NeurIPS 2024 ☆30 · Updated last year
- Empowering Unified MLLM with Multi-granular Visual Generation ☆130 · Updated 9 months ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆136 · Updated 5 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆156 · Updated 10 months ago
- [CVPR 2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆103 · Updated 4 months ago
- Unified layout planning and image generation, ICCV 2025 ☆32 · Updated 6 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆120 · Updated 6 months ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆178 · Updated last year
- Official Repository of Personalized Visual Instruct Tuning ☆32 · Updated 7 months ago
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ☆45 · Updated 9 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆70 · Updated 8 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆137 · Updated 9 months ago
- ☆39 · Updated 4 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆129 · Updated 4 months ago
- Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World" ☆111 · Updated 2 weeks ago
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation ☆20 · Updated 2 months ago
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆70 · Updated last year
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆36 · Updated 10 months ago
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆86 · Updated 6 months ago
- ☆40 · Updated 3 months ago
- [NeurIPS 2024] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect… ☆60 · Updated last year
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ☆222 · Updated 2 months ago
- Official code for NeurIPS 2025 paper "GRIT: Teaching MLLMs to Think with Images" ☆152 · Updated this week
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆108 · Updated last month
- ☆52 · Updated 2 years ago