wangyouze / Trust-videoLLMs
☆33 · Updated 2 months ago
Alternatives and similar repositories for Trust-videoLLMs
Users interested in Trust-videoLLMs are comparing it to the repositories listed below.
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆317 · Updated last year
- R1-like Video-LLM for Temporal Grounding ☆133 · Updated 7 months ago
- [NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations ☆143 · Updated last year
- Official Repo of "MMBench: Is Your Multi-modal Model an All-around Player?" ☆286 · Updated 8 months ago
- Official implementation for "Real20M: A Large-scale E-commerce Dataset for Cross-domain Retrieval" ☆25 · Updated 3 months ago
- ☆24 · Updated last year
- ☆157 · Updated 11 months ago
- Awesome papers & datasets specifically focused on long-term videos. ☆352 · Updated 4 months ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆296 · Updated last year
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆71 · Updated 10 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆153 · Updated 5 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆152 · Updated last year
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆114 · Updated last month
- ☆80 · Updated last year
- Collections of Papers and Projects for Multimodal Reasoning. ☆107 · Updated 9 months ago
- R1-Vision: Let's first take a look at the image ☆48 · Updated 11 months ago
- NExT-QA: Next Phase of Question-Answering to Explaining Temporal Actions (CVPR'21) ☆181 · Updated 6 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆105 · Updated 5 months ago
- 🔥🔥 MLVU: Multi-task Long Video Understanding Benchmark ☆241 · Updated 5 months ago
- [ICCV 2025] LVBench: An Extreme Long Video Understanding Benchmark ☆136 · Updated 7 months ago
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆116 · Updated last year
- ✨ First Open-Source R1-like Video-LLM [2025/02/18] ☆381 · Updated 11 months ago
- [NeurIPS 2023] Exploring Diverse In-Context Configurations for Image Captioning ☆43 · Updated last year
- ☆155 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆281 · Updated last year
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ☆47 · Updated 3 months ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆245 · Updated 5 months ago
- 😎 curated list of awesome LMM hallucinations papers, methods & resources. ☆150 · Updated last year
- Efficient Multimodal Large Language Models: A Survey ☆387 · Updated 9 months ago
- Official implementation of Kangaroo: A Powerful Video-Language Model Supporting Long-context Video Input ☆67 · Updated last year