allenai / molmo
Code for the Molmo Vision-Language Model
☆136 · Updated this week
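Molmo checkpoints are also published on Hugging Face. Below is a minimal sketch of querying one through `transformers`, assuming the `allenai/Molmo-7B-D-0924` checkpoint (not named in this listing) and its custom remote code, which defines `processor.process` and `model.generate_from_batch`:

```python
# Minimal sketch: run a Molmo checkpoint via Hugging Face transformers.
# Assumes the allenai/Molmo-7B-D-0924 checkpoint; processor.process and
# model.generate_from_batch come from the model's trust_remote_code code,
# not from the transformers library itself.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

repo = "allenai/Molmo-7B-D-0924"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Any RGB image works; this placeholder URL is just for illustration.
image = Image.open(requests.get("https://picsum.photos/536/354", stream=True).raw)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}  # add batch dim

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)

# Decode only the newly generated tokens, skipping the prompt.
generated = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))
```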
Alternatives and similar repositories for molmo:
Users interested in molmo are comparing it to the libraries listed below.
- Python library to evaluate VLMs' robustness across diverse benchmarks ☆172 · Updated this week
- VLM Evaluation: benchmark for VLMs, spanning text generation tasks from VQA to captioning ☆91 · Updated 3 months ago
- LL3M: Large Language and Multi-Modal Model in JAX ☆66 · Updated 7 months ago
- M4 experiment logbook ☆56 · Updated last year
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M documents ☆191 · Updated 3 months ago
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆107 · Updated 5 months ago
- Multimodal language model benchmark, featuring challenging examples ☆152 · Updated this week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆183 · Updated 2 months ago
- Matryoshka Multimodal Models ☆85 · Updated 3 weeks ago
- Evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI" ☆366 · Updated last week
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆186 · Updated 5 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆136 · Updated 6 months ago
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆163 · Updated 2 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆139 · Updated last month
- Implementation of PaLI-3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆143 · Updated last month
- When do we not need larger vision models? ☆342 · Updated 2 weeks ago
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images ☆323 · Updated 2 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024) ☆189 · Updated 2 weeks ago
- [COLM 2024] List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs ☆128 · Updated 3 months ago
- Code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" ☆94 · Updated this week
- A family of highly capable yet efficient large multimodal models ☆170 · Updated 3 months ago
- Official repository of the paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆125 · Updated 6 months ago
- VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆177 · Updated last month
- Source code for the paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation" ☆57 · Updated last week
- Code and data for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" ☆47 · Updated this week
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models ☆246 · Updated 2 months ago
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆217 · Updated 4 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆301 · Updated 5 months ago