A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
☆1,065 · Oct 6, 2024 · Updated last year
Alternatives and similar repositories for ONE-PEACE
Users interested in ONE-PEACE are comparing it to the repositories listed below.
- EVA Series: Visual Representation Fantasies from BAAI ☆2,647 · Aug 1, 2024 · Updated last year
- [CVPR 2023] Official Implementation of X-Decoder for generalized decoding for pixel, image and language ☆1,343 · Oct 5, 2023 · Updated 2 years ago
- Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence L… ☆2,554 · Apr 24, 2024 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,167 · Nov 18, 2024 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,592 · Dec 6, 2024 · Updated last year
- [CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions ☆2,793 · Mar 25, 2025 · Updated 11 months ago
- [ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions ☆1,475 · Jun 3, 2025 · Updated 9 months ago
- Grounded Language-Image Pre-training ☆2,575 · Jan 24, 2024 · Updated 2 years ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 · Apr 2, 2025 · Updated 11 months ago
- VisionLLM Series ☆1,138 · Feb 27, 2025 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆505 · Aug 9, 2024 · Updated last year
- ImageBind: One Embedding Space to Bind Them All ☆8,980 · Nov 21, 2025 · Updated 3 months ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ☆748 · Jan 22, 2024 · Updated 2 years ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,772 · Aug 19, 2024 · Updated last year
- Meta-Transformer for Unified Multimodal Learning ☆1,652 · Dec 5, 2023 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,030 · Jan 23, 2026 · Updated last month
- An open source implementation of CLIP. ☆13,430 · Updated this week
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,815 · Nov 27, 2025 · Updated 3 months ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,128 · Jun 4, 2024 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,765 · Jan 12, 2026 · Updated last month
- Open-source and strong foundation image recognition models. ☆3,591 · Feb 18, 2025 · Updated last year
- An open-source framework for training large multimodal models. ☆4,071 · Aug 31, 2024 · Updated last year
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,204 · Dec 15, 2025 · Updated 2 months ago
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆870 · Mar 25, 2024 · Updated last year
- ☆285 · Aug 14, 2025 · Updated 6 months ago
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,808 · Jul 10, 2025 · Updated 7 months ago
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,760 · Aug 12, 2024 · Updated last year
- [NeurIPS '23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,478 · Aug 12, 2024 · Updated last year
- [NeurIPS 2022] Official implementation of the paper "Expediting Large-Scale Vision Transformer for Dense Prediction without Fi… ☆86 · Oct 29, 2023 · Updated 2 years ago
- [ECCV 2024] Tokenize Anything via Prompting ☆603 · Dec 11, 2024 · Updated last year
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud. ☆6,545 · Aug 7, 2024 · Updated last year
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration ☆1,592 · Jan 1, 2025 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆945 · Aug 5, 2025 · Updated 6 months ago
- Multimodal-GPT ☆1,517 · Jun 4, 2023 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- detrex is a research platform for DETR-based object detection, segmentation, pose estimation and other visual recognition tasks. ☆2,276 · Sep 11, 2025 · Updated 5 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,402 · Aug 4, 2025 · Updated 6 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆864 · May 8, 2025 · Updated 9 months ago
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆322 · Jan 20, 2025 · Updated last year