md-mohaiminul / BIMBA
☆10 · Updated last month
Alternatives and similar repositories for BIMBA
Users interested in BIMBA are comparing it with the repositories listed below.
- Official PyTorch code of ReKV (ICLR'25) ☆17 · Updated 2 months ago
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆38 · Updated 2 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆60 · Updated 8 months ago
- ☆30 · Updated 9 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 5 months ago
- Official implementation (PyTorch) of "VidChain: Chain-of-Tasks with Metric-based Direct Preference Optimization for Dense Video Capti… ☆18 · Updated 3 months ago
- Open-Vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models (ICCV 20… ☆18 · Updated last year
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆24 · Updated 3 weeks ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆33 · Updated last month
- Official implementation of "Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data" (ICLR 2024) ☆31 · Updated 6 months ago
- ☆34 · Updated 7 months ago
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆64 · Updated 8 months ago
- Language Repository for Long Video Understanding ☆31 · Updated 10 months ago
- The official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ☆38 · Updated 2 months ago
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" ☆28 · Updated last week
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆44 · Updated 3 months ago
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆20 · Updated 4 months ago
- NegCLIP ☆31 · Updated 2 years ago
- [CVPR 2025] OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆56 · Updated last month
- ☆31 · Updated 3 months ago
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆24 · Updated 4 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆58 · Updated 3 weeks ago
- The official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 4 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆28 · Updated 7 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆29 · Updated last month
- ☆41 · Updated 6 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆33 · Updated last month
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆21 · Updated last month
- ☆23 · Updated 2 weeks ago
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆31 · Updated 6 months ago