Lenna (☆86, updated Feb 5, 2024)
Alternatives and similar repositories for Lenna
Users interested in Lenna are comparing it to the repositories listed below.
- Repository for the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" (☆37, updated Sep 19, 2023)
- Code for the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" (☆252, updated Feb 5, 2024)
- (no description) ☆424, updated Jul 29, 2024
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… (☆945, updated Aug 5, 2025)
- Exploration of the multimodal fuyu-8b model from Adept 🤓 🔍 (☆27, updated Nov 7, 2023)
- PyTorch implementation of 3DRefTR, proposed in the paper "A Unified Framework for 3D Point Cloud Visual Grounding" (☆26, updated Aug 24, 2023)
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content (☆603, updated Oct 6, 2024)
- GAIIC 2024: dual-spectrum (visible/infrared) object detection from a UAV viewpoint, Rank 6 solution (☆11, updated Jun 17, 2024)
- VisionLLM Series (☆1,138, updated Feb 27, 2025)
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" (☆206, updated Jan 8, 2025)
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts (☆336, updated Jul 17, 2024)
- [ACM MM 2022] PPMN: Pixel-Phrase Matching Network for One-Stage Panoptic Narrative Grounding (☆11, updated Aug 12, 2022)
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) (☆100, updated Dec 5, 2024)
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision (☆42, updated Oct 19, 2025)
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models (☆77, updated Jul 13, 2024)
- Segment Anything with Deictic Prompting (☆27, updated May 13, 2025)
- Sambor: Boosting Segment Anything Model Towards Open-Vocabulary Learning (☆32, updated Dec 7, 2023)
- Strong and Open Vision Language Assistant for Mobile Devices (☆1,334, updated Apr 15, 2024)
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … (☆426, updated Dec 22, 2024)
- [AAAI 2026 Oral] LENS: Learning to Segment Anything with Unified Reinforced Reasoning (☆106, updated Dec 3, 2025)
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models (☆49, updated Mar 18, 2024)
- (no description) ☆805, updated Jul 8, 2024
- Project page for "LISA: Reasoning Segmentation via Large Language Model" (☆2,589, updated Feb 16, 2025)
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents (☆318, updated Apr 16, 2024)
- SODA: Story Oriented Dense Video Captioning Evaluation Framework (☆13, updated May 3, 2024)
- Run the SOTA vision-language model Florence-2 on your data! (☆15, updated Mar 27, 2025)
- Recognize Any Regions (☆123, updated Dec 18, 2024)
- (no description) ☆58, updated Aug 7, 2023
- (no description) ☆15, updated Sep 11, 2023
- (no description) ☆360, updated Jan 27, 2024
- GPT-4 Vision + TTS multimodal capabilities demo (☆17, updated Nov 15, 2023)
- [ECCVW 2025] GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest (☆551, updated Jun 3, 2025)
- [CVPR 2023] Code release for "Aligning Bag of Regions for Open-Vocabulary Object Detection" (☆184, updated Oct 25, 2023)
- (no description) ☆134, updated Dec 22, 2023
- [CVPR 2025] Code release for "F-LMM: Grounding Frozen Large Multimodal Models" (☆108, updated May 29, 2025)
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs (☆98, updated Jan 16, 2025)
- Code for "MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World" (☆133, updated Oct 24, 2024)
- [ECCV 2024] Official implementation of "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" (☆269, updated Dec 30, 2024)
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception (☆606, updated May 8, 2024)