AIDC-AI / Ovis
A novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings.
☆507 · Updated this week
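For readers who want to try the model, below is a minimal sketch of loading an Ovis checkpoint through Hugging Face `transformers`. The checkpoint name `AIDC-AI/Ovis1.5-Llama3-8B` and the need for `trust_remote_code` are assumptions, not stated above; consult the repository README for the actual released checkpoints and the supported inference interface.

```python
# Minimal sketch: loading an Ovis checkpoint with Hugging Face transformers.
# The model ID below is an assumption; see the AIDC-AI/Ovis README for the
# actual released checkpoints and the exact inference API.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis1.5-Llama3-8B",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,       # assumed: Ovis ships custom modeling code
).eval()
```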
Related projects
Alternatives and complementary repositories for Ovis
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆553 · Updated last month
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ☆676 · Updated 3 months ago
- Next-Token Prediction is All You Need ☆1,793 · Updated 2 weeks ago
- Official repository for the PLLaVA paper ☆581 · Updated 3 months ago
- Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation ☆913 · Updated last week
- ✨✨VITA: Towards Open-Source Interactive Omni Multimodal LLM ☆947 · Updated 2 weeks ago
- HPT - Open Multimodal LLMs from HyperGAI ☆312 · Updated 5 months ago
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs ☆847 · Updated this week
- A family of lightweight multimodal models ☆928 · Updated 2 weeks ago
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images ☆318 · Updated last month
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆357 · Updated 2 weeks ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆807 · Updated 4 months ago
- Long Context Transfer from Language to Vision ☆328 · Updated 2 weeks ago
- Multimodal Models in Real World ☆400 · Updated last week
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆703 · Updated 9 months ago
- A Framework of Small-scale Large Multimodal Models ☆635 · Updated 3 weeks ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, … ☆170 · Updated 2 weeks ago
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆852 · Updated 7 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆150 · Updated last week
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ☆230 · Updated this week
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ☆1,305 · Updated this week
- Official implementation of SEED-LLaMA (ICLR 2024) ☆574 · Updated last month
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆728 · Updated 3 months ago
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆270 · Updated 2 weeks ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆301 · Updated this week
- Code for "AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling" ☆773 · Updated 2 months ago
- ✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆402 · Updated 4 months ago