Yuliang-Liu / Monkey
【CVPR 2024 Highlight】Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models
☆1,736 · Updated 2 weeks ago
Alternatives and similar repositories for Monkey:
Users interested in Monkey are comparing it to the libraries listed below.
- Align Anything: Training All-modality Model with Feedback ☆3,186 · Updated this week
- Accelerating the development of large multimodal models (LMMs) with the one-click evaluation module lmms-eval ☆2,289 · Updated this week
- Code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆709 · Updated this week
- OMG-LLaVA and OMG-Seg codebase [CVPR 2024 and NeurIPS 2024] ☆1,265 · Updated 3 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆558 · Updated 10 months ago
- [NeurIPS 2024] Official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,052 · Updated 5 months ago
- Real-time and accurate open-vocabulary end-to-end object detection ☆1,310 · Updated 3 months ago
- A family of lightweight multimodal models ☆1,005 · Updated 4 months ago
- [ECCV 2024] Official code for the paper "Open-Vocabulary SAM" ☆951 · Updated 8 months ago
- Collection of AWESOME vision-language models for vision tasks ☆2,639 · Updated last week
- Build multimodal language agents for fast prototyping and production ☆2,456 · Updated 2 weeks ago
- [CVPR'23] Universal Instance Perception as Object Discovery and Retrieval ☆1,268 · Updated last year
- Pioneering Multimodal Reasoning with CoT ☆1,430 · Updated this week
- Official repository of "Visual-RFT: Visual Reinforcement Fine-Tuning" ☆1,500 · Updated 2 weeks ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆554 · Updated 9 months ago
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆1,006 · Updated 2 weeks ago
- Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conv… ☆431 · Updated 3 weeks ago
- [ECCV 2024] Official code implementation of Vary: Scaling Up the Vision Vocabulary of Large Vision Language Models ☆1,821 · Updated 3 months ago
- Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks ☆2,137 · Updated this week
- Mulberry, an o1-like Reasoning and Reflection MLLM implemented via Collective MCTS ☆1,163 · Updated last week
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-rich Visual Questions ☆256 · Updated 11 months ago
- [ICLR'24 spotlight] Chinese and English multimodal large model series (chat and paint), built on the CPM foundation model ☆1,055 · Updated 9 months ago
- DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Models ☆129 · Updated 2 months ago
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation ☆1,314 · Updated this week
- [CVPR 2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale ☆1,113 · Updated 5 months ago
- A Semantic Controllable Self-Supervised Learning Framework to learn general human representations from massive unlabeled human images, wh… ☆1,444 · Updated last year
- Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary) ☆619 · Updated 3 months ago
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆322 · Updated last month
- Official repo for the paper "HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous K… ☆841 · Updated this week
- [ICLR 2024] Official codebase for "InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists" ☆462 · Updated 11 months ago