jy0205 / LaVIT
LaVIT: Empower the Large Language Model to Understand and Generate Visual Content
☆535 · Updated last month
Related projects
Alternatives and complementary repositories for LaVIT
- Official implementation of SEED-LLaMA (ICLR 2024) · ☆583 · Updated 2 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation · ☆397 · Updated 7 months ago
- [CVPR 2024] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers · ☆528 · Updated 3 weeks ago
- Open-MAGVIT2: Democratizing Autoregressive Visual Generation · ☆705 · Updated last month
- Implementation of MagViT2 Tokenizer in Pytorch · ☆564 · Updated last month
- Long Context Transfer from Language to Vision · ☆334 · Updated this week
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" · ☆185 · Updated last week
- 📖 A repository organizing papers, code, and other resources related to unified multimodal models · ☆217 · Updated 2 weeks ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks · ☆367 · Updated 4 months ago
- [CVPR 2024 Highlight] VBench: a comprehensive benchmark suite for evaluating video generation · ☆590 · Updated this week
- Official repo for the paper "MiraData: A Large-Scale Video Dataset with Long Durations and Structured Captions" · ☆371 · Updated 2 months ago
- Official repository for the paper PLLaVA · ☆594 · Updated 3 months ago
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models · ☆232 · Updated last month
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images · ☆319 · Updated last month
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text · ☆274 · Updated this week
- Code for a 1D tokenizer and generator · ☆554 · Updated this week
- ✨✨ Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis · ☆407 · Updated 5 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" · ☆683 · Updated 3 months ago
- Multimodal Models in Real World · ☆404 · Updated 3 weeks ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model · ☆246 · Updated 4 months ago
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions · ☆315 · Updated 4 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer · ☆198 · Updated 7 months ago
- Repository for Show-o: One Single Transformer to Unify Multimodal Understanding and Generation · ☆1,033 · Updated this week
- 🔥🔥🔥 A curated list of papers on LLM-based multimodal generation (image, video, 3D, and audio) · ☆364 · Updated last week
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents · ☆302 · Updated 7 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts · ☆297 · Updated 4 months ago
- When do we not need larger vision models? · ☆336 · Updated last week
- [CVPR 2024] Intelligent Grimm: Open-ended Visual Storytelling via Latent Diffusion Models · ☆207 · Updated last month