Lenna ☆86 · Feb 5, 2024 · Updated 2 years ago
Alternatives and similar repositories for Lenna
Users interested in Lenna are comparing it to the libraries listed below.
- Repository of the paper "Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models" ☆37 · Sep 19, 2023 · Updated 2 years ago
- Code for the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation" ☆253 · Feb 5, 2024 · Updated 2 years ago
- ☆425 · Jul 29, 2024 · Updated last year
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,345 · Apr 15, 2024 · Updated last year
- A PyTorch implementation of 3DRefTR, proposed in the paper "A Unified Framework for 3D Point Cloud Visual Grounding" ☆26 · Aug 24, 2023 · Updated 2 years ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆949 · Aug 5, 2025 · Updated 7 months ago
- GAIIC 2024 dual-spectrum (visible/infrared) object detection from a drone's perspective, Rank 6 solution ☆11 · Jun 17, 2024 · Updated last year
- ☆788 · Aug 7, 2024 · Updated last year
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆336 · Jul 17, 2024 · Updated last year
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆77 · Jul 13, 2024 · Updated last year
- VisionLLM Series ☆1,139 · Feb 27, 2025 · Updated last year
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆605 · Oct 6, 2024 · Updated last year
- Exploration of Adept's multimodal fuyu-8b model 🤓 🔍 ☆27 · Nov 7, 2023 · Updated 2 years ago
- [ACM MM 2022] PPMN: Pixel-Phrase Matching Network for One-Stage Panoptic Narrative Grounding ☆11 · Aug 12, 2022 · Updated 3 years ago
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆42 · Oct 19, 2025 · Updated 5 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆209 · Jan 8, 2025 · Updated last year
- ☆806 · Jul 8, 2024 · Updated last year
- Project page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,606 · Feb 16, 2025 · Updated last year
- ☆17 · Aug 7, 2024 · Updated last year
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆392 · Jul 9, 2024 · Updated last year
- [NeurIPS 2024 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆436 · Dec 22, 2024 · Updated last year
- ☆360 · Jan 27, 2024 · Updated 2 years ago
- Official PyTorch implementation of CODA-LM (https://arxiv.org/abs/2404.10595) ☆100 · Dec 5, 2024 · Updated last year
- ☆152 · Aug 23, 2023 · Updated 2 years ago
- Segment Anything with Deictic Prompting ☆27 · May 13, 2025 · Updated 10 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆584 · Jun 7, 2024 · Updated last year
- ☆59 · Aug 7, 2023 · Updated 2 years ago
- Sambor: Boosting Segment Anything Model Towards Open-Vocabulary Learning ☆32 · Dec 7, 2023 · Updated 2 years ago
- [AAAI 2026 Oral] LENS: Learning to Segment Anything with Unified Reinforced Reasoning ☆109 · Dec 3, 2025 · Updated 3 months ago
- [ECCV 2024] Official implementation of "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" ☆270 · Dec 30, 2024 · Updated last year
- [ICLR 2024] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆296 · Mar 13, 2024 · Updated 2 years ago
- A list of video instance segmentation papers, code, and datasets ☆61 · Mar 13, 2020 · Updated 6 years ago
- [CVPR 2023] Official mmdet implementation of the paper "DETRs with Hybrid Matching" ☆49 · Jan 14, 2023 · Updated 3 years ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆318 · Apr 16, 2024 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆506 · Aug 9, 2024 · Updated last year
- [CVPR 2024] PixelLM, an effective and efficient LMM for pixel-level reasoning and understanding ☆256 · Feb 11, 2025 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Nov 10, 2023 · Updated 2 years ago
- ☆19 · Jan 7, 2026 · Updated 2 months ago
- Code for "MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World" ☆133 · Oct 24, 2024 · Updated last year