Yuliang-Liu / Monkey
Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models (CVPR 2024 Highlight)
☆1,859 · Updated 2 weeks ago
Alternatives and similar repositories for Monkey
Users interested in Monkey are comparing it to the repositories listed below.
- Align Anything: Training All-modality Model with Feedback ☆4,026 · Updated 3 weeks ago
- One for All Modalities Evaluation Toolkit, covering text, image, video, and audio tasks ☆2,659 · Updated this week
- Code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts" ☆733 · Updated last month
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆570 · Updated last year
- Real-time and accurate open-vocabulary end-to-end object detection ☆1,324 · Updated 6 months ago
- Build multimodal language agents for fast prototyping and production ☆2,512 · Updated 3 months ago
- OMG-LLaVA and OMG-Seg codebase [CVPR 2024 and NeurIPS 2024] ☆1,303 · Updated 3 weeks ago
- [ECCV 2024] Official code implementation of Vary: Scaling Up the Vision Vocabulary of Large Vision Language Models ☆1,838 · Updated 5 months ago
- A family of lightweight multimodal models ☆1,024 · Updated 7 months ago
- [NeurIPS 2024] Official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,064 · Updated 8 months ago
- DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Models ☆136 · Updated 5 months ago
- [ICLR'24 Spotlight] Chinese and English Multimodal Large Model Series (Chat and Paint) | a Chinese-English bilingual multimodal large model series built on the CPM base model ☆1,061 · Updated last year
- An MBTI Exploration of Large Language Models ☆485 · Updated last year
- [ECCV 2024] Official code of the paper "Open-Vocabulary SAM" ☆978 · Updated 10 months ago
- [CVPR'23] Universal Instance Perception as Object Discovery and Retrieval ☆1,273 · Updated last year
- (AAAI 2024) BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions ☆260 · Updated last year
- Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,193 · Updated 2 months ago
- A semantic, controllable self-supervised learning framework that learns general human representations from massive unlabeled human images, wh… ☆1,457 · Updated last year
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆568 · Updated last year
- 🦦 Otter, a multi-modal model based on OpenFlamingo (an open-source version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing imp… ☆3,256 · Updated last year
- [CVPR 2024] Code for "Osprey: Pixel Understanding with Visual Instruction Tuning" ☆822 · Updated last month
- On the Hidden Mystery of OCR in Large Multimodal Models (OCRBench) ☆636 · Updated 4 months ago
- A collection of AWESOME vision-language models for vision tasks ☆2,788 · Updated last month
- [ICLR'23 Spotlight 🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch impl. of "Designing BERT for … ☆1,346 · Updated last year
- Large-Scale Visual Representation Model ☆691 · Updated last month
- [ICLR 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,480 · Updated this week
- Ola: Pushing the Frontiers of Omni-Modal Language Model ☆344 · Updated 2 weeks ago
- Skywork-R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning ☆2,634 · Updated 2 weeks ago
- DeepRetrieval: 🔥 Training a search agent with retrieval outcomes via reinforcement learning ☆569 · Updated last week
- A novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings ☆948 · Updated last week