InfiMM / mllm-hd
Official code for InfiMM-HD
☆16 · Updated 11 months ago
Alternatives and similar repositories for mllm-hd
Users interested in mllm-hd are comparing it to the libraries listed below.
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆138 · Updated last year
- Matryoshka Multimodal Models ☆112 · Updated 6 months ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆63 · Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆44 · Updated last year
- ☆73 · Updated last year
- This repo contains code and data for ICLR 2025 paper MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs ☆31 · Updated 5 months ago
- ☆50 · Updated last year
- Code and Data for Paper: SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data ☆34 · Updated last year
- [ECCV 2024] Parrot Captions Teach CLIP to Spot Text ☆66 · Updated 11 months ago
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … ☆60 · Updated 10 months ago
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 5 months ago
- ☆66 · Updated last year
- ☆65 · Updated last year
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati… ☆97 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆152 · Updated last year
- Official repo for StableLLAVA ☆95 · Updated last year
- This is a public repository for Image Clustering Conditioned on Text Criteria (IC|TC) ☆90 · Updated last year
- Official implementation and dataset for the NAACL 2024 paper "ComCLIP: Training-Free Compositional Image and Text Matching" ☆36 · Updated 11 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- [ICLR 2025] Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegr… ☆76 · Updated 8 months ago
- ☆53 · Updated 3 months ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆60 · Updated 5 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆160 · Updated 7 months ago
- Preference Learning for LLaVA ☆47 · Updated 9 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆55 · Updated 9 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- Democratization of "PaLI: A Jointly-Scaled Multilingual Language-Image Model" ☆92 · Updated last year
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆56 · Updated 2 years ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆145 · Updated 8 months ago
- ☆65 · Updated last month