CircleRadon / Osprey
[CVPR2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning"
☆784 · Updated 5 months ago
Alternatives and similar repositories for Osprey:
Users interested in Osprey are comparing it to the libraries listed below.
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆531 · Updated 7 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆502 · Updated 8 months ago
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM" ☆923 · Updated 5 months ago
- [ECCV 2024] Tokenize Anything via Prompting ☆559 · Updated last month
- [CVPR 2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale ☆1,080 · Updated 3 months ago
- [ECCV 2024] LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models ☆758 · Updated 6 months ago
- 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos ☆797 · Updated last week
- Project page for "LISA: Reasoning Segmentation via Large Language Model" ☆1,969 · Updated 3 weeks ago
- [NeurIPS 2024] A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, and Editing ☆471 · Updated 3 months ago
- ☆371 · Updated last month
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆473 · Updated 5 months ago
- SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree ☆448 · Updated last month
- [NeurIPS 2024] The official implementation of ShareGPT4Video: Improving Video Understanding and Generation with Better Captions ☆1,032 · Updated 3 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆767 · Updated 5 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆819 · Updated 2 months ago
- [TPAMI 2024] A Survey on Open Vocabulary Learning ☆877 · Updated last month
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆410 · Updated last month
- An open-source implementation for training LLaVA-NeXT ☆375 · Updated 3 months ago
- An MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆279 · Updated last month
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆312 · Updated 2 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM" ☆233 · Updated last month
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆736 · Updated 5 months ago
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs ☆552 · Updated this week
- The official PyTorch implementation of the paper "Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP" ☆707 · Updated last year
- Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" ☆945 · Updated 5 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆376 · Updated 6 months ago
- LLaVA-UHD v2: An MLLM Integrating a High-Resolution Feature Pyramid via a Hierarchical Window Transformer ☆354 · Updated 2 weeks ago
- The code for the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆230 · Updated 11 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even stronger ☆452 · Updated last week
- [CVPR 2024] PixelLM: an effective and efficient LMM for pixel-level reasoning and understanding ☆197 · Updated last week