[CVPR 2024] OneLLM: One Framework to Align All Modalities with Language
☆666 · Oct 22, 2024 · Updated last year
Alternatives and similar repositories for OneLLM
Users that are interested in OneLLM are comparing it to the libraries listed below
- Meta-Transformer for Unified Multimodal Learning ☆1,651 · Dec 5, 2023 · Updated 2 years ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆870 · Mar 25, 2024 · Updated last year
- ☆643 · Feb 15, 2024 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆871 · Aug 27, 2024 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,765 · Jan 12, 2026 · Updated last month
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆190 · Feb 3, 2025 · Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆859 · Jul 29, 2024 · Updated last year
- VisionLLM Series ☆1,138 · Feb 27, 2025 · Updated last year
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation ☆1,875 · Jan 8, 2026 · Updated last month
- An Open-source Toolkit for LLM Development ☆2,805 · Jan 13, 2025 · Updated last year
- ☆4,577 · Sep 14, 2025 · Updated 5 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆763 · Feb 1, 2024 · Updated 2 years ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆603 · Oct 6, 2024 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design ☆1,986 · Nov 7, 2025 · Updated 3 months ago
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,303 · Jul 15, 2025 · Updated 7 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆864 · May 8, 2025 · Updated 9 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆336 · Jul 17, 2024 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆945 · Aug 5, 2025 · Updated 6 months ago
- Code for 3D-LLM: Injecting the 3D World into Large Language Models ☆1,177 · Jun 6, 2024 · Updated last year
- Latest Advances on Multimodal Large Language Models ☆17,355 · Updated this week
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024) ☆322 · Jan 20, 2025 · Updated last year
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,921 · May 26, 2025 · Updated 9 months ago
- mPLUG-Owl: The Powerful Multi-modal Large Language Model Family ☆2,539 · Apr 2, 2025 · Updated 10 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆874 · Mar 8, 2025 · Updated 11 months ago
- Next-Token Prediction is All You Need ☆2,355 · Jan 12, 2026 · Updated last month
- Project Page for "LISA: Reasoning Segmentation via Large Language Model" ☆2,589 · Feb 16, 2025 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆318 · Apr 16, 2024 · Updated last year
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆391 · Jul 9, 2024 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,647 · Aug 1, 2024 · Updated last year
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆368 · Jul 24, 2025 · Updated 7 months ago
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance ☆9,836 · Sep 22, 2025 · Updated 5 months ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆11,167 · Nov 18, 2024 · Updated last year
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆246 · Aug 14, 2024 · Updated last year
- 【EMNLP 2024🔥】 Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,450 · Dec 3, 2024 · Updated last year
- Align 3D Point Cloud with Multi-modalities for Large Language Models ☆459 · Dec 9, 2023 · Updated 2 years ago
- Code and models for the ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model ☆3,617 · May 13, 2025 · Updated 9 months ago
- [ECCV 2024] Any2Point: Empowering Any-modality Large Models for Efficient 3D Understanding ☆124 · Jul 2, 2024 · Updated last year
- [ECCV 2024 Best Paper Candidate & TPAMI 2025] PointLLM: Empowering Large Language Models to Understand Point Clouds ☆975 · Aug 14, 2025 · Updated 6 months ago