BradyFU / Video-MME
✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
Related projects:
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions.
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding
- Long Context Transfer from Language to Vision
- Official repository for the paper PLLaVA
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content
- [CVPR 2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments".
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
- [ECCV 2024] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models"
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models. The first work to correct hallucinations in MLLMs.
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding"
- Efficient Multimodal Large Language Models: A Survey
- [ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP"
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model
- A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing
- MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities (ICML 2024)
- The official repository of "Video assistant towards large language model makes everything easy"
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models
- [ICLR 2024] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
- [CVPR 2024] 🎬💭 Chat with over 10K frames of video!