google-deepmind / neptune
☆57 · Updated last week
Alternatives and similar repositories for neptune
Users interested in neptune are comparing it to the libraries listed below.
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆62 · Updated 10 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆61 · Updated 8 months ago
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆96 · Updated 6 months ago
- Code for CVPR25 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆111 · Updated 2 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 11 months ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆67 · Updated 2 months ago
- ☆10 · Updated last month
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆124 · Updated 10 months ago
- VideoHallucer, the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆30 · Updated last month
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, … ☆111 · Updated last month
- ☆44 · Updated last month
- Unsolvable Problem Detection: Evaluating Trustworthiness of Vision Language Models ☆77 · Updated this week
- [NeurIPS '24 D&B] Official Dataloader and Evaluation Scripts for LongVideoBench ☆97 · Updated 9 months ago
- PyTorch implementation of Twelve Labs' Video Foundation Model evaluation framework & open embeddings ☆25 · Updated 8 months ago
- Explore VLM-Eval, a framework for evaluating Video Large Language Models, enhancing your video analysis with cutting-edge AI technology. ☆34 · Updated last year
- ☆32 · Updated 3 months ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated last year
- [ICLR 2025] SPORTU: A Comprehensive Sports Understanding Benchmark for Multimodal Large Language Models ☆14 · Updated 2 months ago
- ☆90 · Updated 4 months ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆50 · Updated 4 months ago
- ☆65 · Updated 10 months ago
- Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models ☆107 · Updated last month
- Official implementation of the paper "VideoLLM Knows When to Speak: Enhancing Time-Sensitive Video Comprehension with Video-Text Duet Interact…" ☆31 · Updated 3 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆161 · Updated last month
- Official implementation of CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer" ☆47 · Updated last year
- Code and data for the paper "Learning Action and Reasoning-Centric Image Editing from Videos and Simulation" ☆28 · Updated 4 months ago
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆24 · Updated last month
- ☆71 · Updated 5 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆45 · Updated 3 months ago
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆22 · Updated last month