CircleRadon / Osprey
[CVPR 2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"
☆806 · Updated last month
Alternatives and similar repositories for Osprey:
Users interested in Osprey are comparing it to the repositories listed below.
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆555 · Updated 10 months ago
- [ECCV 2024] The official code of paper "Open-Vocabulary SAM" ☆950 · Updated 7 months ago
- [CVPR 2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale ☆1,111 · Updated 5 months ago
- [ECCV 2024] Tokenize Anything via Prompting ☆571 · Updated 3 months ago
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆981 · Updated last week
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆783 · Updated 7 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆555 · Updated 9 months ago
- OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24] ☆1,262 · Updated 3 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆793 · Updated 7 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM" ☆241 · Updated 3 months ago
- [NeurIPS 2024] An official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,050 · Updated 5 months ago
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,093 · Updated last month
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆323 · Updated 4 months ago
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆972 · Updated 7 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆855 · Updated 4 months ago
- ☆772 · Updated 8 months ago
- Official repository for the paper PLLaVA ☆643 · Updated 7 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆216 · Updated last month
- Multimodal Models in Real World ☆452 · Updated last month
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆385 · Updated 8 months ago
- A family of lightweight multimodal models ☆1,006 · Updated 4 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆771 · Updated 7 months ago
- ☆377 · Updated 3 months ago
- [CVPR 2024 Highlight] Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models ☆1,735 · Updated last week
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆237 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆602 · Updated last month
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆297 · Updated last month
- Official code implementation of Vary-toy (Small Language Model Meets with Reinforced Vision Vocabulary) ☆617 · Updated 2 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆570 · Updated 5 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆477 · Updated 7 months ago