saxenarohit / MovieSum
☆13 · Updated 8 months ago
Alternatives and similar repositories for MovieSum:
Users interested in MovieSum are comparing it to the libraries listed below.
- ☆29 · Updated 8 months ago
- This repo contains code for the paper "Both Text and Images Leaked! A Systematic Analysis of Data Contamination in Multimodal LLM" ☆13 · Updated 3 weeks ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆50 · Updated 4 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆42 · Updated 9 months ago
- HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models ☆40 · Updated 5 months ago
- ☆16 · Updated 3 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆79 · Updated 6 months ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆42 · Updated last month
- The official code repo and data hub of the top-nsigma sampling strategy for LLMs ☆24 · Updated 2 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token ids from a codebook; supports both image and video… ☆33 · Updated 10 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆34 · Updated last month
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 9 months ago
- ☆32 · Updated 2 weeks ago
- ☆44 · Updated last month
- ☆73 · Updated last year
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆24 · Updated last month
- Code implementation, evaluations, documentation, links, and resources for the Min P paper ☆32 · Updated last month
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆100 · Updated last month
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆22 · Updated last year
- The official GitHub repo for MixEval-X, the first any-to-any, real-world benchmark ☆14 · Updated 2 months ago
- ☆27 · Updated 2 months ago
- A Simple MLLM Surpassed QwenVL-Max with OpenSource Data Only in 14B LLM ☆37 · Updated 7 months ago
- ☆86 · Updated 2 weeks ago
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024 ☆58 · Updated 2 months ago
- This repo contains code and data for the ICLR 2025 paper "MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs" ☆30 · Updated last month
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 4 months ago
- ☆36 · Updated 7 months ago
- LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models ☆17 · Updated 3 weeks ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆71 · Updated 3 weeks ago
- A project for tri-modal LLM benchmarking and instruction tuning ☆30 · Updated 3 weeks ago