csuhan / OneLLM
[CVPR 2024] OneLLM: One Framework to Align All Modalities with Language
☆633 · Updated 6 months ago
Alternatives and similar repositories for OneLLM:
Users interested in OneLLM are comparing it to the libraries listed below.
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆739 · Updated last year
- ☆610 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆611 · Updated 2 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆434 · Updated 4 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆311 · Updated last year
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆802 · Updated last year
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆360 · Updated 5 months ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆864 · Updated 4 months ago
- 🐟 Code and models for the NeurIPS 2023 paper "Generating Images with Multimodal Language Models". ☆452 · Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆506 · Updated last month
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆836 · Updated 7 months ago
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆337 · Updated 3 months ago
- Official implementation of SEED-LLaMA (ICLR 2024). ☆610 · Updated 7 months ago
- Efficient Multimodal Large Language Models: A Survey ☆337 · Updated last month
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆419 · Updated 3 months ago
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆272 · Updated 3 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆802 · Updated 8 months ago
- ☆777 · Updated 9 months ago
- VisionLLM Series ☆1,050 · Updated last month
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆516 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆594 · Updated last year
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆420 · Updated last week
- MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning ☆574 · Updated this week
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆528 · Updated last week
- [ICLR 2025] Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,350 · Updated 3 weeks ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆527 · Updated last week
- Next-Token Prediction is All You Need ☆2,090 · Updated last month
- Long Context Transfer from Language to Vision ☆373 · Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆374 · Updated this week
- Emu Series: Generative Multimodal Models from BAAI ☆1,712 · Updated 6 months ago