shikras / shikra
☆775 · Updated 9 months ago
Alternatives and similar repositories for shikra:
Users interested in shikra are comparing it to the libraries listed below.
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆525 · Updated 10 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆479 · Updated 8 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆514 · Updated 11 months ago
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆520 · Updated last year
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆737 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆310 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆863 · Updated 4 months ago
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆347 · Updated last year
- VisionLLM Series ☆1,041 · Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆373 · Updated 2 weeks ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆337 · Updated 3 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆799 · Updated 8 months ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆803 · Updated last year
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆863 · Updated 4 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆627 · Updated 5 months ago
- ☆324 · Updated last year
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆863 · Updated last month
- Emu Series: Generative Multimodal Models from BAAI ☆1,706 · Updated 6 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆432 · Updated 4 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆327 · Updated 5 months ago
- ☆607 · Updated last year
- A Framework of Small-scale Large Multimodal Models ☆796 · Updated 3 weeks ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆338 · Updated 3 weeks ago
- [ECCV 2024] Tokenize Anything via Prompting ☆577 · Updated 4 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆579 · Updated 6 months ago
- The code of the paper "NExT-Chat: An LMM for Chat, Detection and Segmentation". ☆238 · Updated last year
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆310 · Updated last year
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆524 · Updated 5 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆607 · Updated 2 months ago
- Research Trends in LLM-guided Multimodal Learning. ☆357 · Updated last year