Train InternViT-6B in MMSegmentation and MMDetection with DeepSpeed
☆109 · Updated Oct 25, 2024
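For context on what "with DeepSpeed" typically involves: training a 6B-parameter backbone under MMSegmentation/MMDetection usually pairs the trainer with a DeepSpeed JSON config enabling ZeRO partitioning and mixed precision. The snippet below is a generic illustrative sketch using standard DeepSpeed config keys; it is not taken from the InternVL-MMDetSeg repository, and the batch-size values are placeholder assumptions.

```json
{
  "train_micro_batch_size_per_gpu": 2,
  "gradient_accumulation_steps": 4,
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 1
  },
  "gradient_clipping": 1.0
}
```

ZeRO stage 1 shards only optimizer states across data-parallel ranks; higher stages (2/3) additionally shard gradients and parameters at the cost of more communication. Consult the repository's own configs for the settings it actually uses.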
Alternatives and similar repositories for InternVL-MMDetSeg
Users interested in InternVL-MMDetSeg are comparing it to the repositories listed below.
- The official implementation of ADDP (ICLR 2024) · ☆12 · Updated Mar 27, 2024
- Unofficial implementation of "Pix2seq: A Language Modeling Framework for Object Detection" based on MMDetection · ☆34 · Updated Apr 18, 2022
- [CVPR 2023] Implementation of "Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information" · ☆91 · Updated Jun 1, 2023
- [NeurIPS 2024 Spotlight & TPAMI 2025] Parameter-Inverted Image Pyramid Networks (PIIP) · ☆111 · Updated Aug 5, 2025
- The first large-scale multimodal dialogue dataset focusing on Synthetic Aperture Radar (SAR) imagery · ☆67 · Updated Feb 15, 2025
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions · ☆1,476 · Updated Jun 3, 2025
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance) · ☆9,879 · Updated Sep 22, 2025
- [CVPR 2023] Implementation of "Siamese Image Modeling for Self-Supervised Vision Representation Learning" · ☆41 · Updated Jun 6, 2024
- (NeurIPS 2024) Official repository of the paper "Frozen-DETR: Enhancing DETR with Image Understanding from Frozen Foundation Models" · ☆35 · Updated Mar 22, 2025
- ☆13 · Updated Sep 22, 2025
- Mixed Pseudo Labels for Semi-Supervised Object Detection · ☆69 · Updated Mar 7, 2024
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text · ☆414 · Updated May 5, 2025
- Adapting LLaMA Decoder to Vision Transformer · ☆30 · Updated May 20, 2024
- Chinese CLIP models with SOTA performance · ☆60 · Updated Aug 28, 2023
- OpenMMLab models visualization · ☆15 · Updated Jan 22, 2023
- ☆11 · Updated Jan 12, 2023
- The official implementation of the EMNLP 2024 paper "Modeling Layout Reading Order as Ordering Relations for Visually-rich Docume…" · ☆31 · Updated Jan 19, 2026
- ☆31 · Updated Dec 20, 2022
- ☆125 · Updated Jul 29, 2024
- GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding · ☆80 · Updated May 10, 2025
- [ICLR 2024 Spotlight] Code release of "CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction" · ☆201 · Updated Feb 5, 2024
- Open-Vocabulary Panoptic Segmentation · ☆27 · Updated Jun 15, 2025
- MMPD dataset from the ECCV 2024 paper "When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset" · ☆21 · Updated Jul 15, 2024
- OpenMMLab Detection Toolbox and Benchmark · ☆11 · Updated Aug 1, 2023
- PyTorch 1.0 code (including CUDA code) for Deformable Convolution v2 · ☆18 · Updated Mar 2, 2019
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception · ☆608 · Updated May 8, 2024
- EVA Series: Visual Representation Fantasies from BAAI · ☆2,652 · Updated Aug 1, 2024
- VisionLLM Series · ☆1,137 · Updated Feb 27, 2025
- ☆133 · Updated Jan 19, 2023
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" · ☆64 · Updated Nov 5, 2024
- ☆12 · Updated Jun 5, 2024
- Official implementation of the CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection" · ☆44 · Updated Sep 12, 2024
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions · ☆2,801 · Updated Mar 25, 2025
- OpenMMLab Detection Toolbox and Benchmark for V3Det · ☆15 · Updated Apr 3, 2024
- [AAAI 2026] Code for "SAM2MOT: A Novel Paradigm of Multi-Object Tracking by Segmentation" · ☆164 · Updated Nov 18, 2025
- ☆23 · Updated Nov 29, 2024
- Detection Transformers with Assignment · ☆263 · Updated Sep 16, 2023
- ☆42 · Updated Dec 10, 2024
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" · ☆209 · Updated Jan 8, 2025